RTE - Replay Test Engine

User Manual


Chapter 1: Introduction

Welcome to the User Manual for Replay Test Engine, your regression and verification testing tool designed to enhance the quality and reliability of developments and configuration changes within the SAP software ecosystem. This manual is intended for SAP developers, Functional Consultants, QA testers, and Key Users involved in testing SAP program modifications or verifying the impact of configuration adjustments.

The dynamic nature of SAP systems, with frequent enhancements, bug fixes, customizations, and configuration updates, carries an inherent risk: changes intended to improve one area can inadvertently impact other, seemingly unrelated functionalities. Ensuring that modifications deliver their desired benefits without introducing new errors or regressing existing features is paramount for maintaining system stability, user trust, and business continuity. RTE directly addresses this critical challenge by providing a structured, efficient, and powerful framework for conducting comprehensive regression testing of your SAP programs and verification of configuration outcomes.

1.1 Purpose of RTE

The primary purpose of RTE is to empower all teams to verify that modifications made to SAP programs, or changes to system configuration, have not negatively impacted their output for specific, well-defined scenarios. It achieves this by letting you capture baseline "reference runs" of instrumented programs, re-execute those programs after code or configuration changes, and compare the new output against the stored baseline.

By systematically comparing "before" and "after" states, RTE helps you catch regressions early in the cycle, reducing the cost and effort associated with fixing issues later.

1.2 The RTE Workflow

At its heart, RTE operates on the principle of structured comparison, integrated directly into your development, configuration, and testing lifecycle. The typical workflow involves several key stages:

  1. Instrumentation: Identify the critical internal variables within your SAP program (custom or standard) whose state you need to monitor. This is achieved with minimal code intrusion, typically by adding a single line of ABAP code using static methods from the ZCL_RTE class (ZCL_RTE=>EXPORT_DATA). This can often be done via a simple enhancement, requiring only very basic ABAP knowledge. You can also optionally use methods like ZCL_RTE=>IN_RTE() for test-specific logic or ZCL_RTE=>COMBINE_DATA() to enrich data before export.
  2. Creating Reference Runs: Access the central RTE transaction (ZRTE_START) and use the "Run program" function. Execute your instrumented program before applying any changes (or with a known 'good' configuration), using specific program variants to ensure consistent results. Mark these initial executions as "reference runs". These runs encapsulate the expected, correct output for those scenarios.
  3. Implementing Program or Configuration Changes: Proceed with your development activities or configuration adjustments.
  4. Performing Comparisons: Return to ZRTE_START and use the "Compare runs" function. RTE offers several modes:
    • Re-run the modified program for a specific variant and compare it against its reference.
    • Re-run for all reference variants and compare each against its baseline.
    • Compare any two arbitrary historical runs.
  5. Analysing Results: RTE will highlight any discrepancies in output (raw data differences, structural changes, or detailed content variations if using iData comparison). If intentional structural changes were made to your data, or if you're performing a cross-check against a different program, RTE's advanced "iData parameters" allow for sophisticated data mapping (renaming fields, changing types, filtering rows, etc.) to enable meaningful comparison.
  6. Managing and Approving Runs: Use "Manage runs" to view or delete old test data. Crucially, after verifying that the output of a modified program or new configuration is correct, you can "Approve" the new runs within the comparison tool, promoting them to become the new reference baseline for future tests.

1.3 About This Manual

This manual guides you through all features of RTE. It assumes basic SAP navigation skills. While Chapter 2 discusses code, instrumenting standard programs for configuration checks is designed to be accessible even with minimal ABAP exposure. Key Users can often leverage RTE for programs already instrumented by developers or consultants.

Important Information

This box draws your attention to crucial details, prerequisites, or concepts that are essential for a comprehensive understanding or the successful execution of the procedures described. Careful review of this information is highly recommended.

Best Practice

The "Best Practice" box offers guidance and recommendations for optimal usage, efficiency, or adherence to established standards. Following these suggestions can lead to improved outcomes, more robust implementations, or a more streamlined workflow.

Warning

A "Warning" box serves to alert you to potential risks, common pitfalls, or actions that could result in errors, data loss, system instability, or other undesirable consequences. It is critical to heed these notices and proceed with caution to avoid potential issues.

Chapter 2: Defining Test Data

To enable RTE to perform comparisons, you first need to instruct it which data within an SAP program (custom or standard) should be captured during a test run. This is achieved by adding a small amount of code directly into the program you intend to test. The integration is designed to be straightforward, primarily using static methods from the global class ZCL_RTE, which means you can call them easily without needing to declare helper variables.

2.1 Exporting Data for Comparison

The core mechanism for identifying test data is the EXPORT_DATA method. By calling this method, you specify an internal variable (like an internal table or a structure) whose content RTE should save when the program is executed via the RTE tool.

The simplest way to use this is with a single line of code:

ZCL_RTE=>EXPORT_DATA( iv_var_name = 'LT_OUTPUT_TAB' i_data = lt_output_tab ).

Let's break down the parameters:

  • IV_VAR_NAME: a logical name of your choice under which the captured data is stored and later displayed in RTE (here 'LT_OUTPUT_TAB').
  • I_DATA: the variable whose content RTE should save; this can be an elementary variable, a structure, or an internal table (here lt_output_tab).

Example Placement:

You should place the EXPORT_DATA call at a point in your program's logic where the variable you want to test contains the final data relevant for your comparison. Often, this is near the end of the program or subroutine, after all processing and data preparation for that variable is complete.

Consider this typical example where data is exported just after being displayed using ALV:

  " ... preceding program logic ...
  lo_alv_table->display( ).

  ZCL_RTE=>EXPORT_DATA( iv_var_name = 'FINAL_ALV_DATA' i_data = gt_data ).
ENDFORM.

2.1.1 Instrumenting Standard SAP Programs

RTE can also be used to verify the impact of configuration changes. You can instrument standard SAP programs by adding an EXPORT_DATA call using SAP's Enhancement Framework. This is often a very simple process requiring minimal ABAP knowledge, making it accessible to Functional Consultants.
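
As a sketch, an implicit enhancement at the end of a suitable standard routine could look like this; the enhancement name, the logical name STD_RESULT, and the variable lt_result are placeholders rather than names from a real standard program (section 5.6 shows a concrete enhancement for report RSUSR002):

ENHANCEMENT 1 Z_RTE_CONFIG_CHECK.    "active version
  " Capture the standard program's result table for later comparison
  ZCL_RTE=>EXPORT_DATA( iv_var_name = 'STD_RESULT' i_data = lt_result ).
ENDENHANCEMENT.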

2.2 Manipulating Data for Testing

Sometimes, the raw data in your program variables might not be in the ideal format for comparison, or you might want to include additional context only during test runs. RTE provides helper methods to handle these situations without affecting the standard execution flow of your program.

2.2.1 Conditional Logic

To execute specific data preparation steps only when the program is being run as part of an RTE test, you can use the IN_RTE method.

You can wrap your test-specific data manipulation logic within an IF ZCL_RTE=>IN_RTE( ) EQ abap_true ... ENDIF block. This ensures that the code is completely bypassed during normal program operation, as shown in the sketch below.
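
A minimal sketch; lt_output and its sort field BNAME are placeholder names:

IF ZCL_RTE=>IN_RTE( ) EQ abap_true.
  " Executed only inside an RTE test run
  SORT lt_output BY bname.    " make the export order deterministic
  ZCL_RTE=>EXPORT_DATA( iv_var_name = 'LT_OUTPUT' i_data = lt_output ).
ENDIF.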

2.2.2 Merging Data

A common requirement is to combine data from different sources before exporting it for comparison. For instance, you might want to add a specific field as an extra column to every row of an internal table being tested. The COMBINE_DATA method facilitates this. Note: This method supports adding fields from a structure or elementary variable to a base table or structure; combining two tables is not supported.

Example: Combining a Table with a System Field for Testing

Let's look at how to use IN_RTE and COMBINE_DATA together. Imagine you want to test the contents of table gt_usr, but for testing purposes, you also want each row to include the System ID (sy-sysid).

  IF ZCL_RTE=>IN_RTE( ) EQ abap_true.

    ZCL_RTE=>COMBINE_DATA(
      EXPORTING
        i_a_tab_or_str  = gt_usr    " Base table or structure
*       it_a_fields     =           " Fields to be used in output
        i_b_str_or_elem = sy        " Structure or elementary var to be added
        " Fields to be used in output
        it_b_fields     =  VALUE #( (  sign = 'I' option = 'EQ' low = 'SYSID' ) )
      IMPORTING
        e_tab           =  DATA(go_ref) " Combined table
    ).

    FIELD-SYMBOLS: <gt_tab> TYPE ANY TABLE.
    ASSIGN go_ref->* TO <gt_tab>.

    ZCL_RTE=>EXPORT_DATA( iv_var_name = 'GT_TAB_WITH_SYSID' i_data = <gt_tab> ). " Ensure unique IV_VAR_NAME

  ENDIF.

Explanation of the Example:

  1. Check Execution Mode: IF ZCL_RTE=>IN_RTE( ) EQ abap_true. ensures the combination logic runs only during RTE tests.
  2. Combine Data: ZCL_RTE=>COMBINE_DATA is called:
    • gt_usr is the base table.
    • sy structure provides the additional data.
    • it_b_fields specifically selects only the SYSID field from the sy structure to be added.
    • The result (a new table combining gt_usr fields and sy-sysid) is created, and go_ref points to it.
  3. Export: ZCL_RTE=>EXPORT_DATA is called using the field symbol. It exports the combined data during an RTE run under a distinct logical name.

By using these methods, you can precisely define and even adapt the data captured by RTE for effective regression testing, without impacting the program's behaviour for end-users.

2.2.3 Real-World Example

While the COMBINE_DATA method can be used for various data manipulations, a common and powerful use case is adding contextual information to a primary data table before exporting it for testing. This makes test results much easier to analyze, especially when dealing with data processed in loops or across different entities (like employees, materials, documents, etc.). This technique is also valuable for configuration verification, for instance, to check payroll results before and after a configuration change with an effective date.

Let's consider a scenario from SAP Payroll. Within the standard payroll driver, the EXPRT function processes the Results Table (RT) for each employee (PERNR) and payroll period (APER). For regression testing or configuration verification, simply exporting the RT table might not be sufficient, as you wouldn't immediately know which employee or period a specific RT entry belongs to when looking at the combined test results later.

The following code snippet, intended to be placed within the relevant part of the payroll logic (like function EXPRT, potentially via an enhancement for standard code), demonstrates how to use ZCL_RTE=>COMBINE_DATA twice to add both the employee number and the payroll period information as columns to the RT table, specifically for RTE test runs:

FIELD-SYMBOLS: <lt_tab>  TYPE ANY TABLE,
               <lt_tab2> TYPE ANY TABLE.


IF ZCL_RTE=>IN_RTE( ) EQ abap_true.

  " Step 1: Combine RT table with the Personnel Number (PERNR)
  ZCL_RTE=>COMBINE_DATA(
    EXPORTING
      i_a_tab_or_str  = rt[]      " Base table is RT
      i_b_str_or_elem = pernr     " Add data from PERNR structure
      " Select only the 'PERNR' field from the PERNR structure
      it_b_fields     = VALUE #( ( low = 'PERNR' sign = 'I' option = 'EQ' ) )
    IMPORTING
      e_tab           = DATA(l_out_data) " Output: RT fields + PERNR field
  ).

  ASSIGN l_out_data->* TO <lt_tab>. " Assign intermediate result

  " Step 2: Combine the intermediate table with the Payroll Period (APER)
  ZCL_RTE=>COMBINE_DATA(
    EXPORTING
      i_a_tab_or_str  = <lt_tab>   " Base table is the result from Step 1
      i_b_str_or_elem = aper      " Add data from APER structure
      " IT_B_FIELDS is omitted, so add ALL fields from APER structure
    IMPORTING
      e_tab           = DATA(l_out_data2) " Output: RT fields + PERNR field + APER fields
  ).

  ASSIGN l_out_data2->* TO <lt_tab2>. " Assign final result

  " Step 3: Export the final combined table
  ZCL_RTE=>EXPORT_DATA(
    EXPORTING
      iv_var_name = 'RT_DATA'       " Logical name for the exported data
      i_data      = <lt_tab2>       " The table containing RT + PERNR + APER
  ).
ENDIF.

Explanation:

  1. IF ZCL_RTE=>IN_RTE( )...ENDIF. : This ensures the entire data combination logic executes only when the program is run via RTE, having no impact on regular payroll runs.
  2. First COMBINE_DATA Call :
    • Takes the current RT internal table (i_a_tab_or_str = rt[]).
    • Adds data from the PERNR structure (i_b_str_or_elem = pernr).
    • Crucially, it_b_fields specifies that only the field named 'PERNR' from the pernr structure should be added as a new column to each row of the RT table.
    • The result is a new table (referenced by l_out_data, assigned to <lt_tab>) containing all original RT fields plus a PERNR field.
  3. Second COMBINE_DATA Call :
    • Takes the intermediate table <lt_tab> (which already contains RT + PERNR) as the base (i_a_tab_or_str = <lt_tab>).
    • Adds data from the APER structure (i_b_str_or_elem = aper).
    • Since it_b_fields is not provided this time, all fields from the aper structure are added as new columns to the table.
    • The result is the final table (referenced by l_out_data2, assigned to <lt_tab2>) containing original RT fields, the PERNR field, and all fields from the APER structure.
  4. EXPORT_DATA Call : The final, enriched table <lt_tab2> is exported under the logical name 'RT_DATA'.

Outcome and Applicability:

By performing these combinations before the export, the 'RT_DATA' variable captured by RTE will contain not just the payroll results but also the associated employee number and period details directly within each row. This makes analyzing differences during comparison significantly easier, as the context is immediately apparent.

Universal Technique: While this specific example uses variables common in SAP Payroll (RT, PERNR, APER), the underlying technique is universally applicable. You can use ZCL_RTE=>IN_RTE and ZCL_RTE=>COMBINE_DATA in any SAP module to enrich your primary test data with relevant contextual information (like document numbers, material codes, company codes, dates, etc.) before exporting it with ZCL_RTE=>EXPORT_DATA, thereby enhancing the clarity and usefulness of your regression tests or configuration verifications.

2.3 Exporting Different Variable Types

Let's examine an example demonstrating how to export variables of different types (elementary variable, structure, and internal table) using ZCL_RTE=>EXPORT_DATA. This example also illustrates how RTE handles situations where the same logical variable name (IV_VAR_NAME) is exported multiple times within a single program execution, such as within a loop.

Consider the following ABAP code snippet:

DATA: lv_var TYPE string,
      ls_str TYPE t000,
      lt_tab TYPE TABLE OF usr02.

lv_var = 'Single variable'.
SELECT SINGLE * FROM t000 INTO @ls_str WHERE mandt = '000'.
SELECT * FROM usr02 INTO TABLE @lt_tab.

DO 3 TIMES.
  " Modify variables slightly in each loop iteration
  lv_var = lv_var && '__' && sy-index.
  ls_str-mtext = ls_str-mtext && '__' && sy-index.
  READ TABLE lt_tab ASSIGNING FIELD-SYMBOL(<ls_tab>) INDEX 1.
  IF sy-subrc EQ 0.
    <ls_tab>-accnt = sy-index. " Change only one field in the first row of the table
  ENDIF.

  " Export all three variables in each iteration
  ZCL_RTE=>EXPORT_DATA( iv_var_name = 'LV_VAR' i_data = lv_var ).
  ZCL_RTE=>EXPORT_DATA( iv_var_name = 'LS_STR' i_data = ls_str ).
  ZCL_RTE=>EXPORT_DATA( iv_var_name = 'LT_TAB' i_data = lt_tab ).

ENDDO.

In this code:

  • lv_var is an elementary variable (a string), ls_str is a flat structure (type T000), and lt_tab is an internal table (rows of type USR02).
  • Each of the three loop passes slightly modifies all three variables and then exports them under the same logical names, so each IV_VAR_NAME is exported three times within a single execution.

Viewing the Exported Data in RTE:

After executing this code via the "Run program" function in RTE, if you inspect the captured data for this run (as described in Chapter 4.4), you will observe the following:

  1. Elementary Variable (LV_VAR): RTE display of elementary variable 'LV_VAR' exported three times in a loop
    • Since LV_VAR is an elementary variable (string), its value is displayed in a single column with the generic header FIELD.
    • Notice the ZRTE_UNIQN column. This field acts as a unique identifier for each distinct call to ZCL_RTE=>EXPORT_DATA within the run for the same IV_VAR_NAME. Because EXPORT_DATA for "LV_VAR" was called three times (once per loop iteration), you see three rows, each with a different ZRTE_UNIQN value (1, 2, 3), reflecting the state of lv_var at the moment of each export.
  2. Structure (LS_STR): RTE display of structure 'LS_STR' exported three times, showing structure fields as columns
    • For the structure LS_STR, RTE displays the data using column headers that directly correspond to the field names of the T000 structure (e.g., MANDT, MTEXT, ORT01, etc.).
    • Similar to the elementary variable, the ZRTE_UNIQN column appears, again having distinct values (1, 2, 3) for each of the three times EXPORT_DATA was called for 'LS_STR', showing the state of the structure in each loop pass.
  3. Internal Table (LT_TAB): RTE display of internal table 'LT_TAB' exported three times, with ZRTE_UNIQN differentiating each export set
    • When viewing the internal table LT_TAB, the column headers correspond to the fields of the table's line type (USR02, e.g., MANDT, BNAME, ACCNT, etc.).
    • Here, the role of ZRTE_UNIQN becomes particularly clear. The internal table (lt_tab) contains multiple rows itself. RTE exports the entire table contents each time ZCL_RTE=>EXPORT_DATA is called for 'LT_TAB'.
    • Therefore, you will see multiple groups of rows in the display, each group corresponding to a single export call. The ZRTE_UNIQN value will be the same for all rows belonging to a single export call (one snapshot of the table), but it will differ between the export calls made in different loop iterations. In this example, all table rows exported during the first loop pass will have ZRTE_UNIQN = 1, all rows from the second pass will have ZRTE_UNIQN = 2, and all from the third pass will have ZRTE_UNIQN = 3. This allows you to distinguish the complete state of the table as it was captured at each specific export moment. You can also see the effect of the modification within the loop: the ACCNT field for the first user (DDIC in the screenshot) changes value (1, 2, 3) corresponding to the ZRTE_UNIQN identifier for that export set.

This example highlights how RTE handles different data types and preserves the state of variables even when exported multiple times under the same logical name within a single execution, using the ZRTE_UNIQN field to differentiate between these distinct export snapshots.

2.4 Data Storage

Internally, when ZCL_RTE=>EXPORT_DATA is called, the content of the provided variable (I_DATA) is serialized into a raw data format and saved persistently in dedicated database tables managed by the RTE tool. For complex types such as internal tables or structures, each row is converted into its raw equivalent, so every field or cell of the source variable is stored as a distinct record (or part of a record) in the RTE backend tables, together with metadata such as the run identifier, the logical variable name, and the export sequence (ZRTE_UNIQN).

Important Data Privacy (GDPR) Considerations:

It is crucial to understand that if the variables exported by ZCL_RTE=>EXPORT_DATA during a test run contain any Personal Data (as defined by GDPR or other applicable data privacy regulations, such as names, addresses, identification numbers, sensitive personal information, etc.), this personal data will be copied and stored redundantly within the RTE tool's backend database tables.

Always be mindful of the nature of the data you are exporting with RTE and ensure your usage aligns with all applicable data privacy requirements and organizational policies.

Chapter 3: The Central RTE Transaction

Once you have instrumented your program(s) by adding the necessary ZCL_RTE=>EXPORT_DATA calls (as described in Chapter 2), the next step is to interact with the RTE tool itself to create runs and perform comparisons. The primary access point for all RTE functionalities is the central transaction ZRTE_START.

3.1 Accessing the RTE Main Screen

To begin using the RTE tool's interactive features, navigate to the SAP transaction ZRTE_START. You will be presented with the main screen, titled "RTE - Replay Test Engine", which serves as the central hub for managing your regression testing and configuration verification activities.

Image of the RTE main screen with options to Run program, Compare runs, and Manage runs

3.2 Overview of Main Functions

The ZRTE_START transaction provides direct access to the core components of the RTE solution through three main options:

  1. Run program: This function is used to execute your instrumented SAP program (custom or standard) under specific conditions (usually with a predefined variant) and capture the data you marked for export using ZCL_RTE=>EXPORT_DATA. Each execution creates a "run" record within RTE, storing the captured data and associated context (like program name, variant, timestamp, user, and an optional description). These runs form the basis for later comparisons. You will typically use this to create your reference runs before making changes (code or configuration) and subsequent runs after making changes.
  2. Compare runs: This is the heart of the RTE tool. This function provides a powerful interface for comparing the data captured in different runs. You can compare a recent run against a reference run of the same program, compare two arbitrary runs, or even compare a run against the output of a different program (cross-program testing). This section offers various comparison modes and advanced data mapping capabilities to pinpoint differences effectively.
  3. Manage runs: This utility allows you to view, search for, and maintain the test runs that have been previously created. You can search for runs based on various criteria (like program name, variant, user, date, or description), preview the data captured within a specific run, and delete runs that are no longer needed.

3.3 Navigating This Manual

The subsequent chapters of this manual will delve into the detailed usage of each of these core functions ("Run program", "Compare runs", and "Manage runs"), explaining their features, options, and workflows step-by-step. You will typically start by creating runs using "Run program", and then analyze the results using "Compare runs".

Chapter 4: Creating Test Runs

The first step in the practical application of RTE is typically to create one or more "runs" of the program you intend to test. A run represents a single execution via RTE, during which the tool captures the data you designated using the ZCL_RTE=>EXPORT_DATA method. These saved runs are the essential building blocks for later comparisons.

To create a run, select the "Run program" option from the main ZRTE_START transaction screen (described in Chapter 3). This will navigate you to the "RTE: Run program" screen.

4.1 Specifying Run Parameters

On the "RTE: Run program" screen, you need to provide details about the execution you want to perform:

Image of the RTE: Run program screen with fields for Program, Variant, Description, and 'Is reference run?' checkbox

4.2 Handling Existing Reference Runs

Because only one reference run is allowed per program/variant pair, if you check the "Is reference run?" box and a reference run already exists for the specified Program and Variant, RTE will prompt you for confirmation:

Image of the Reference run confirmation pop-up asking Set this run as the ONLY reference run?

4.3 Executing the Run

Once you have filled in the parameters, execute the run (by pressing F8 or clicking the Execute icon). RTE will launch the specified program with the selected variant (or display the selection screen if no variant was provided). The program will execute, and the ZCL_RTE=>EXPORT_DATA calls within its code (or enhancement) will trigger RTE to capture the specified variables.

After the program execution finishes, RTE will display a summary screen showing the variables that were successfully exported during this run:

Image of the list of exported variables after a run, showing Calling Program and Variable name

This list shows the logical variable names (Variable column) you defined using the IV_VAR_NAME parameter in your ZCL_RTE=>EXPORT_DATA calls, along with the program they originated from.

Disclaimer: Handling Runtime Errors (Short Dumps). Please be aware that RTE executes the target program as is. If the program encounters a runtime error (short dump) during its execution initiated by RTE, the execution will terminate, and the run will likely not be saved completely or correctly. RTE does not include mechanisms to catch or handle these short dumps. This applies even if the runtime error is potentially caused by incorrect usage of ZCL_RTE methods (e.g., passing incompatible data types to EXPORT_DATA or COMBINE_DATA). Ensuring the stability of the program under test, including the correct implementation of RTE method calls (whether in custom code or enhancements), remains the responsibility of the developer initiating the run. Check transaction ST22 for dump details if this occurs.

4.4 Inspecting Captured Data

You can immediately inspect the data captured for any variable listed. Simply double-click on the row corresponding to the variable you want to view. RTE will then display the content of that variable (in read-only mode) as it was saved during the run.

Image of an example of displayed table content after double-clicking an exported variable

This allows for a quick verification that the expected data was captured correctly.

This run, along with the captured data, is now saved within the RTE system and can be used for comparisons (see Chapter 5) or managed via the "Manage runs" function (see Chapter 6).

Chapter 5: Comparing Test Runs

Having successfully instrumented your programs (Chapter 2) and captured baseline executions as runs (Chapter 4), we now arrive at the central purpose and most powerful aspect of the RTE tool: comparing these runs. This is where RTE truly shines, allowing you to analyze the effects of your code modifications or verify the outcomes of configuration changes, ensuring that enhancements deliver the intended value without introducing unintended side effects. This chapter will guide you through the different ways to initiate comparisons, the various types of analysis RTE can perform, and data mapping features that enable meaningful comparisons even when program structures evolve.

5.1 Understanding the Modes

We start the comparison on the "RTE: Compare runs" screen, where you must first decide how you want RTE to select the runs for comparison. RTE offers three distinct operational modes, each suited to different testing scenarios:

  1. Reference program/variant: Compare mode 'Reference program/variant' selected with Program and Variant input fields
    • Core Idea: This mode focuses on validating a single, specific test scenario after code or configuration changes. You pinpoint the exact Program and Variant you are interested in testing.
    • How it Works: RTE locates the unique reference run previously saved for this specific program/variant combination. Then, crucially, it triggers a fresh execution of the current version of the specified program (with its current code and system configuration) using that same variant. Finally, it compares the data exported during this new execution against the data stored within the historical reference run.
    • Prerequisite: A reference run must already exist for the chosen program and variant. If not, RTE will issue a notification, and no comparison can proceed for that specific selection. Convenient search helps are available, first for selecting the Program, and then for listing only the valid Variants associated with that program.
  2. Reference program all variants: Compare mode 'Reference program all variants' selected with Program input field
    • Core Idea: This mode offers a broader safety net, automatically testing all established reference scenarios for a given program in one go. You only need to specify the Program name.
    • How it Works: RTE scans its records to find every variant associated with the entered Program name for which a reference run has been previously created. For each of these identified variants, it performs the same process as the single variant mode: it re-executes the program with that variant and compares the new results against the corresponding reference run.
    • Typical Use Case: This is the ideal mode for comprehensive regression testing after program changes. It provides assurance that your modifications haven't broken functionality across any of the standard test scenarios you've established as references. It's thorough but may take longer if many reference variants exist.
  3. 2 runs: Compare mode '2 runs' selected with Run A and Run B ID input fields and search helps
    • Core Idea: This mode grants complete manual control, allowing you to select any two previously completed runs from the RTE history for a direct comparison.
    • How it Works: You explicitly provide the unique technical identifiers (Run IDs) for the two runs you wish to compare, designated as "Run A" and "Run B". RTE then retrieves the data stored for these specific runs and performs the comparison.
    • Selecting Runs: The Run IDs themselves are long, system-generated strings and not easily remembered. Therefore, RTE provides search helps for both the "Run A" and "Run B" fields. These search helps allow you to locate the desired runs using familiar criteria such as the Program name, Variant used, the Description you provided during run creation, the User who created the run, or the Date/Time of execution.

5.2 Practical Scenarios Setup

To make the comparison features easy to follow, we'll use a concrete example throughout this chapter. We'll base our examples on the sample program ZRTE_TST_R_USR, which is provided as part of the RTE tool package (you are encouraged to copy this program into your own Z*-namespace program to experiment alongside this manual). This program's core function is to display user information from the standard SAP table USR02. While simple, it's perfectly sufficient to demonstrate RTE's comparison capabilities.

Initial Preparation Steps:

  1. Variant Creation: Define and save three distinct variants for your ZRTE_TST_R_USR program:
    • ALL: Leave all selection fields blank (this will select all users).
    • DS: Restrict the user selection to include only DDIC and SAP*.
    • DDIC: Restrict the user selection to include only DDIC.
  2. Enable RTE Data Export: Ensure that the necessary ZCL_RTE=>EXPORT_DATA statements are active within the source code of your ZRTE_TST_R_USR program. Uncomment the code block marked between <1> and </1> tags in the sample program. This action should configure the program to export the main internal table containing user data (logically named GT_TAB in the EXPORT_DATA call) and the selection criteria used (SO_BNAME). (Refer back to Chapter 2 for a detailed explanation of adding EXPORT_DATA calls).
  3. Establish the Baseline - Create Reference Runs: This is a critical step. Use the "Run program" function detailed in Chapter 4. Execute your ZRTE_TST_R_USR program once for each of the three variants (ALL, DS, DDIC). During each of these initial executions, make sure to check the "Is reference run?" checkbox. This action flags these specific runs as the official baseline, the "known good" state against which future changes will be measured.

With these variants created and initial reference runs saved, we have established our testing foundation.

5.3 The First Comparison

Before we introduce any modifications to our test program, let's perform a comparison to confirm that RTE sees the current state as identical to the reference runs we just created. This builds confidence in the setup.

  1. Navigate back to the ZRTE_START transaction and choose "Compare runs".
  2. Select the comparison mode: "Reference program all variants". This tells RTE to check all established reference points for the specified program.
  3. In the "Program" input field, enter the name of your test program ZRTE_TST_R_USR.
  4. Execute the comparison (by pressing F8 or clicking the Execute icon).

RTE will now diligently re-execute your program three times, once for each reference variant (ALL, DS, DDIC). It will then compare the data captured during these new executions against the data stored in the corresponding reference runs created moments ago. Logically, since we haven't altered the program's code, the results should perfectly match the references.

Decoding the Comparison Results Grid:

The outcome of the comparison is presented in an ALV grid.

RTE comparison results ALV showing columns for reference run (left) and new run (right) details, Result, Details, Comparison type, and Error

Let's break down the columns and their significance:

  • The left-hand columns identify the reference run: its variant, calling program, and variable.
  • The right-hand columns show the same details for the newly executed run.
  • Result: the outcome of the comparison, e.g. green "Equal", "Not equal", or orange "Missing" when a variable was captured on only one side.
  • Details: shows a magnifying glass icon when a detailed difference view is available (iData comparison).
  • Comparison type: which check produced the row (Raw, Description, or iData).
  • Error: reports any processing problems encountered during the comparison, such as structural mismatches.

In our initial baseline check we should observe satisfying green "Equal" statuses across the board for both GT_TAB and SO_BNAME for all three variants (ALL, DS, DDIC).

Performance notice. During RTE runs, the variables selected for testing are serialized and stored in the database. During comparisons, this data is retrieved and analyzed. For large datasets, database read/write operations can significantly impact performance. The comparison process may also be memory-intensive, as RTE needs to hold the "before" and "after" states, along with the computed differences — in total, this can require up to three times the size of the original variable state in memory.

Simulating a Code Change

To see how RTE flags discrepancies, let's simulate a simple code change. Imagine we go back into the ZRTE_TST_R_USR program and comment out the specific line responsible for exporting the selection criteria: ZCL_RTE=>EXPORT_DATA( iv_var_name = 'SO_BNAME' i_data = so_bname[] ).. After saving and activating this change, we re-run the "Reference program all variants" comparison.

RTE comparison results showing 'SO_BNAME' as 'Missing' on the right (new run) side

The results grid now tells a different story. For the variable SO_BNAME, the Result column will now display the orange "Missing" status. Furthermore, the right-hand side columns (Variant, Calling Program, Variable) for the SO_BNAME rows will be empty, visually confirming that this variable was present in the reference run (left side) but was not captured in the new execution due to our code change.

5.4 Drilling Down

Let's set up a scenario where we know the data content differs. We'll use the "2 runs" mode to compare the reference run created using the ALL variant against the reference run created using the DS variant. We know these contain different sets of users.

  1. Navigate to "Compare runs".
  2. Choose the mode: "2 runs".
  3. Utilize the search helps provided for the "Run A" and "Run B" fields. For "Run A", locate and select the Run ID corresponding to the reference run of ZRTE_TST_R_USR executed with variant ALL. For "Run B", select the Run ID for the reference run using variant DS.
  4. Execute the comparison with default settings.
RTE comparison results showing 'Not equal' for RAW data comparison between two different variants

For the GT_TAB variable, the RAW data comparison will correctly report Result = Not equal, but the Details column will remain inactive, offering no further insight.

Activating and Utilizing the iData Comparison:

To unlock the detailed view, we need to explicitly instruct RTE to perform the iData comparison:

  1. On the "Compare runs" selection area, locate and check the "Advanced options" checkbox. This reveals additional control buttons.
  2. Click the newly visible "Comparisons" button.
  3. A configuration pop-up appears, showing available comparisons. Select "iData"; for clarity in this example, you might also unselect "Raw" and "Description". The "iData parameters" button (explained in section 5.5) will also become active once "iData" is selected.
  4. Now, re-execute the comparison from the main screen.

The results grid should now reflect that the iData comparison was performed for GT_TAB. The Result will still be "Not equal", but critically, the Details column for that row will now display the magnifying glass icon.

Inspecting the Differences:

This icon is your gateway to understanding the data mismatch. Double-click the magnifying glass icon associated with the GT_TAB comparison row.

iData difference display pop-up with 'Tab A', 'Tab B', and 'Diff' tabs showing detailed data

A detailed comparison window pops up, typically featuring three informative tabs:

  • Tab A: the data captured for the first run (the reference side in our example).
  • Tab B: the corresponding data captured for the second run.
  • Diff: a consolidated view highlighting the rows and fields that differ between the two runs.

A Note on Test Data Stability: This example also highlights a crucial aspect of effective regression testing and configuration verification: the stability of your test data environment. If your reference runs capture data that is subject to unrelated changes (like the creation of new users in a development system, or master data changes impacting a configured process), comparisons might frequently show "Not equal" simply because this underlying master data has drifted since the reference run was created. This isn't necessarily a failure of your program code or configuration. Therefore, when designing your test variants and creating reference runs, strive to use selection criteria or data snapshots that are stable or where changes are well understood, allowing you to more clearly isolate differences caused by actual code modifications or intended configuration outcomes.

When results are "Not equal," it's important to analyze whether this is due to an intended code change, a bug, an expected outcome from a configuration change, or an external data factor.

5.5 Handling Differences and Leveraging Data Mapping

Let's walk through how RTE assists in verifying code changes or the impact of configuration adjustments, including situations where the structure of the data itself is modified, necessitating RTE's data mapping capabilities.

Scenario 1: Verifying a Targeted Functional Change

Suppose we intentionally modify ZRTE_TST_R_USR so that the user class of the SAP* user is hardcoded to a fixed value before the data is exported. Re-running the "Reference program all variants" comparison then shows Result = Not equal for the ALL and DS variants (whose selections include SAP*), while the DDIC variant remains "Equal". Drilling into the iData details confirms that only the intended row changed. A sketch of such a change follows below.
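
For illustration only, the change could look like the following snippet placed before the EXPORT_DATA call; the field names (BNAME, CLASS) are assumed from the USR02-based output, and the value 'SUPER' is arbitrary:

" Intentional functional change: hardcode the user group for SAP*
LOOP AT gt_tab ASSIGNING FIELD-SYMBOL(<ls_user>) WHERE bname = 'SAP*'.
  <ls_user>-class = 'SUPER'.
ENDLOOP.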

Scenario 2: Dealing with Structural Changes

Now suppose we extend the output of ZRTE_TST_R_USR with an additional column, USTYP (the user type from USR02), in the internal table GT_TAB. This time the comparison cannot simply report changed content: because the structure of the exported variable no longer matches the structure stored in the reference run, the iData comparison reports a "Different structures" error. A sketch of the extended row type follows below.
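
A minimal sketch of such a declaration; the row type shown here is illustrative, since the sample program's actual declaration may differ:

" Illustrative row type for GT_TAB after the structural change
TYPES: BEGIN OF ty_user_out,
         bname TYPE usr02-bname,   " existing column
         class TYPE usr02-class,   " existing column
         ustyp TYPE usr02-ustyp,   " newly added column
       END OF ty_user_out.
DATA gt_tab TYPE STANDARD TABLE OF ty_user_out.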

iData Parameters and Mapping

This is where RTE's mapping features become indispensable. They allow you to define rules that tell RTE how to reconcile structural differences before performing the data comparison. The "iData parameters" button becomes available under the "Advanced options" section when iData comparison is active. Clicking this button is your entry point into the mapping configuration.

'iData parameters' button active, pop-up showing variables from Run A and Run B for mapping definition

Applying Mapping

Our immediate goal is to compare the data of the original columns, effectively telling RTE to ignore the newly added USTYP column for the purpose of this comparison. This allows us to verify if the data within the pre-existing fields was unintentionally altered by our structural change.

  1. On the "iData parameters" screen, locate the row representing the ZRTE_TST_R_USR GT_TAB comparison.
  2. Since the structural change (the added USTYP column) occurred in the new execution (Side B), we need to modify its transformation rules. Click the mapping icon button located in the Transform. B column for the GT_TAB row.
  3. This action opens the detailed "Transformation" pop-up window for the GT_TAB variable from the new run. Transformation mapping pop-up for a variable, showing field list with Include, Sequence, Name, Type, etc.
  4. Transformation Options: This pop-up is the heart of data mapping in RTE. Let's examine its components thoroughly:
    • Insert/Collect Switch: Allows you to choose the processing logic. Insert (default) treats each row individually. Collect applies ABAP's COLLECT statement logic, which aggregates rows based on non-numeric key fields – useful for specific aggregation scenarios before comparison.
    • WHERE condition Field: A powerful filtering mechanism. Here, you can enter conditions using standard ABAP WHERE clause syntax (but without typing the WHERE keyword itself). For example, BNAME NE 'SAP*' or STATUS EQ 'A' AND VALUE GT 100. RTE performs a syntax check on the condition entered.
    • The Field Mapping Grid: This grid lists all fields present in the variable's current structure (in this case, GT_TAB from the new run, including USTYP). Each row allows detailed control:
      • Include Checkbox: The primary control for including or excluding a specific field from the comparison. If unchecked, the field is completely ignored.
      • Sequence / Original name Columns: These are display-only, showing the field's original position and technical name in the source structure for reference.
      • Seq no. Column: Editable. This numeric field dictates the column order in the final, mapped structure that will be used for the comparison. You can change these numbers to reorder columns.
      • Name Column: Editable. Defines the field name in the mapped structure. This allows you to rename fields if necessary for the comparison (e.g., if comparing MATNR from one table to MATERIAL in another).
      • Type, Length, Decimals Columns: Editable. These allow you to change the data type ('C', 'N', 'P', 'D', 'T', 'STRING', etc.) and corresponding size attributes of a field specifically for the comparison. This is crucial when comparing fields that store similar data but have slightly different technical definitions. Adhere to standard ABAP type definitions. The search help (F4) on the Type field provides guidance on valid types and whether Length/Decimals are applicable. RTE validates these settings and will report errors if inconsistent (e.g., trying to specify a length for a STRING type: For Type g LENGTH must be equal 0, and DECIMALS must be equal 0). Search help for 'Type' field in transformation mapping, showing ABAP data types and applicability of Length/Decimals
      • FM Name Column: Editable. For advanced scenarios, you can specify the name of a standard SAP conversion exit function module (e.g., CONVERSION_EXIT_ALPHA_INPUT) to transform field values before comparison; a short illustration follows after this scenario.
    • Renumerate Button (Pencil Icon): A utility button. After including/excluding fields, clicking this button automatically re-calculates and assigns sequential numbers to the Seq no. column for all included fields, ensuring a clean sequence.
    • OK (Checkmark) / Cancel (X) Buttons: Located at the bottom (OK) or top (Cancel). Use OK to save the mapping changes you've made within this transformation pop-up. Use Cancel to discard them (you will be asked to confirm "Exit without saving?").
  5. Applying the Mapping: In our scenario, scroll through the field grid within the "Transformation" pop-up for Transform. B until you find the row representing the newly added field, USTYP. Uncheck the Include checkbox for this specific row.
  6. Optionally, click the Renumerate button (pencil icon) to update the Seq no. column for the remaining included fields.
  7. Click the OK (Checkmark) button to confirm and close the "Transformation" pop-up, saving the rule to exclude USTYP from side B.
  8. You are now back on the main "iData parameters" screen. Click its OK (Checkmark) button to apply all defined parameter settings.
  9. Finally, re-execute the comparison from the main "Compare runs" screen.

Interpreting the Result: The comparison for GT_TAB should now proceed without the "Different structures" error. We successfully used mapping to isolate and ignore the structural change, allowing us to focus the comparison on the stability of the original data fields.
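
Regarding the FM Name option mentioned earlier: conversion exits normalize values before the comparison takes place. CONVERSION_EXIT_ALPHA_INPUT is a standard SAP function module; the standalone call below merely illustrates its effect on a value:

DATA lv_value TYPE c LENGTH 10.

CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
  EXPORTING
    input  = '4711'
  IMPORTING
    output = lv_value.

" lv_value is now '0000004711': numeric values are padded with
" leading zeros to the full length of the target field.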

Scenario 3: Filtering

Let's revisit Scenario 1 (hardcoding SAP* class) where variants ALL and DS showed Result = Not equal. While this correctly identified the change, perhaps our test goal is to confirm that apart from the intentional change to SAP*, no other users were affected. We can achieve this using mapping filters.

  1. Go back to the "Compare runs" screen, ensure "Reference program all variants" and iData comparison are selected.
  2. Click on "iData parameters".
  3. Locate the row for ZRTE_TST_R_USR GT_TAB. We need to apply a filter to both sides of the comparison to exclude the SAP* user record before the data is compared.
  4. Click the Transform. A icon button for GT_TAB. In the "Transformation" pop-up, find the WHERE condition input field and type the condition: BNAME NE 'SAP*'. Click OK.
  5. Click the Transform. B icon button for GT_TAB. In its "Transformation" pop-up, enter the exact same WHERE condition: BNAME NE 'SAP*'. Click OK.
  6. As an optional step, if we are completely uninterested in the SO_BNAME variable for this specific test run, we can uncheck the main Include checkbox for the ZRTE_TST_R_USR SO_BNAME row on the "iData parameters" screen itself. This will skip its comparison entirely.
  7. Click OK on the "iData parameters" screen to apply these rules.
  8. Re-execute the comparison. RTE comparison results after applying WHERE condition and excluding a variable, showing more 'Equal' results

Analysing the Refined Result: Now, when you examine the results grid, the Result for GT_TAB should show "Equal" even for the ALL and DS variants. Why? Because the mapping rules we defined instructed RTE to filter out the SAP* user row from both the reference data and the new run data before performing the comparison. This demonstrates how mapping can be used to focus the comparison and ignore known or intentional differences, allowing you to verify the stability of the remaining data set. Transformation mapping showing 'USTYP' field excluded and Renumerate button highlighted

5.5.1 Multiple Variable Entries

As you continue to develop your program and update your reference runs, you might encounter situations where the "iData parameters" screen lists the same logical variable (GT_TAB) multiple times. This occurs when the underlying data structure of that variable has changed over time, and different reference runs (perhaps for different variants, or older vs. newer references for the same variant) capture these different structural versions.

Let's illustrate with an example:

  1. Initial State: Suppose you have existing reference runs for several variants (ALL, DS) of ZRTE_TST_R_USR, where GT_TAB has an old structure (without the USTYP field).
  2. Structural Code Change: You modify ZRTE_TST_R_USR to add the USTYP field to GT_TAB.
  3. Create a New Reference Run: Now, you create a new reference run specifically for the ALL variant. This new reference run for ALL will capture GT_TAB with its new structure (including USTYP). However, the reference run for the DS variant (if not updated) might still be based on the old structure of GT_TAB.
  4. Observation in "iData parameters": If you now go to "Compare runs", select "Reference program all variants" (which would include both ALL and DS), and then click the "iData parameters" button, you should see something like this: 'iData parameters' screen showing GT_TAB listed multiple times due to different structures in reference runs

Notice that GT_TAB (from ZRTE_TST_R_USR) appears on multiple lines on both the "Prog/Variable A" (reference) side and the "Prog/Variable B" (current run) side.

Explanation of Multiple Entries:

RTE lists a variable in a new row on the "iData parameters" screen if its underlying data structure is different from other instances of that same logical variable being considered in the current comparison batch.

Key Takeaway

RTE does not automatically merge entries for the same logical variable if their underlying structures differ across the runs being set up for comparison. Each unique structural version of a variable will get its own line in the "iData parameters" list, allowing you to define specific mapping rules tailored to how that particular structural version should be compared against its counterpart. This ensures precise control over the comparison process, even as your programs and their data structures evolve.

5.6 Cross-Check

One of RTE's most advanced capabilities is the "Cross-Check" feature. This allows you to validate your custom program's output not just against its own previous versions, but against the output of an entirely different program, typically a trusted SAP standard report or a well-established custom tool that performs a similar function. This is invaluable for ensuring your custom development aligns with standard SAP logic or established business processes.

Setting up a Cross-Check:

  1. Instrument the Trusted Program: First, you need to ensure the program you want to compare against (the 'reference' program; in our example, standard report RSUSR002) also exports the relevant data via RTE. This requires adding a ZCL_RTE=>EXPORT_DATA call at an appropriate point, potentially using SAP's Enhancement Framework if modifying standard code.
    • Example: As shown in the code below, you might create an enhancement implementation at the end of function module SUSR_USERS_LIST_ALV (used by RSUSR002) to export its final user list:
      ENHANCEMENT 1 Z_RTE_EXPORT. "active version
        ZCL_RTE=>EXPORT_DATA( iv_var_name = 'GT_USERSLIST' i_data = gt_userslist ).
      ENDENHANCEMENT.
  2. Configure the Cross-Check:
    • Navigate to the "Compare runs" screen. Check the "Advanced options" checkbox.
    • Click the "Cross check" button. 'Cross check' pop-up in Advanced Options for defining comparison between Program A and Program B
    • A configuration pop-up appears specifically for defining cross-program comparisons.
    • In the Program A row, enter the name and variant of the program you want to use as the reference point (Program: RSUSR002, Variant: ALL).
    • In the Program B row, enter the name and variant of your program under test (Program: ZRTE_TST_R_USR, Variant: ALL). This often defaults based on what you entered on the main comparison screen.
    • Click the "Add run" button. This schedules this specific cross-program comparison pair to be executed. You can add multiple pairs if needed.
    • The "Show runs" button lets you review the list of scheduled cross-checks. The "Delete runs" button clears the entire list if you need to start over.
    • Close the "Cross check" window using the standard close icon (X).

Mapping: The Essential Ingredient for Cross-Checks

Meaningful comparison between two different programs almost always requires data mapping, as it's highly unlikely that the exported variable names, field names, data types, and field order will coincidentally match perfectly.

  1. After configuring the cross-check and closing its window, click the "iData parameters" button (ensure iData comparison is selected in "Comparisons"). 'iData parameters' screen showing a cross-check setup; Prog/Variable B is initially empty requiring selection
    • The "iData parameters" screen will now list the cross-check pair you scheduled. You'll see the variable exported from Program A (RSUSR002 GT_USERSLIST).
    • However, the Prog/Variable B column for this row will likely be empty initially. RTE cannot automatically guess which variable from Program B should be compared to the variable from Program A. Click the search help icon located within the empty Prog/Variable B field.
    • RTE will present a list of variables exported by Program B (ZRTE_TST_R_USR in our case, showing GT_TAB and SO_BNAME). Select the variable that logically corresponds to the data from Program A (select GT_TAB).
  2. Define Transformations: Now you must define mapping rules using the Transform. A and Transform. B buttons to reconcile the differences between RSUSR002 GT_USERSLIST and ZRTE_TST_R_USR GT_TAB.
    • Analyze and Map Transform. A (RSUSR002 GT_USERSLIST): Open its transformation window. Identify the key fields that have equivalents in GT_TAB (BNAME, USTYP, CLASS). Exclude all other fields from GT_USERSLIST that are not relevant for this comparison by unchecking their Include boxes. Use the Renumerate button (pencil icon) to finalize the sequence for side A. Transformation mapping for GT_USERSLIST (Program A), showing only selected fields included for cross-check
    • Analyze and Map Transform. B (ZRTE_TST_R_USR GT_TAB): Open its transformation window. Transformation mapping for GT_TAB (Program B) before adjustments for cross-check
      • Field Matching: Ensure only the fields corresponding to those kept in Transform A are included. Exclude any extra fields unique to GT_TAB (like SYSID).
      • Order Alignment: Adjust the Seq no. values in Transform B so that the order of included fields exactly matches the sequence defined in Transform A (e.g., if A is BNAME, USTYP, CLASS, then B must also be BNAME, USTYP, CLASS in that order). Alternatively, you could adjust A to match B's original order – the key is that the final mapped order is identical on both sides.
      • Type and Length Harmonization: Carefully check the Type, Length, and Decimals of the corresponding fields between A and B. If they differ (e.g., USTYP is C(2) in one and C(4) in the other), you must adjust one side in the mapping to match the other.
        Critical Best Practice: To avoid potential data loss during comparison due to truncation, always modify the field with the shorter length to match the longer length. In the C(2) vs. C(4) example, you should change the C(2) field's mapping definition to C(4), rather than shortening the C(4) field to C(2).
      Transformation mapping for GT_TAB (Program B) after adjusting field order and exclusions for cross-check
  3. Once both transformations are defined, click OK on the transformation windows and the main "iData parameters" window.
  4. Execute the comparison. RTE comparison results for cross-check, showing 'Not equal' for GT_USERSLIST vs GT_TAB and 'Missing' for SO_BNAME

5.7 Approving New Reference Runs

Development and configuration changes are iterative. After implementing changes (code or configuration), performing comparisons, meticulously analyzing differences using mapping, and ultimately confirming that the program's current behavior is indeed the new correct baseline, your original reference runs might become obsolete. The mappings required to compare against them might become complex and counter-productive for future tests.

RTE provides a streamlined way to update your baseline. Instead of manually re-creating reference runs one by one using the "Run program" transaction, you can approve the runs that were just generated during the comparison process itself, promoting them to become the new official reference runs.

  1. Verification is Key: Before proceeding, be absolutely certain that the results shown in your current comparison grid accurately represent the desired, correct state of the program following your latest changes.
  2. Initiate Approval: In the ALV toolbar displaying the comparison results, locate and click the "Approve" button (depicted with a checkmark). 'Approve' button highlighted in the comparison results ALV toolbar
  3. Final Confirmation: RTE understands the significance of this action and will present a confirmation pop-up dialog box. It explicitly warns you that clicking "Yes" will designate the runs just executed during the comparison as the new reference runs and, crucially, that this action cannot be undone. 'Approve?' confirmation pop-up: Mark new runs as reference runs? This cannot be undone!
  4. Commitment: Only if you are completely confident that the current state is the correct new baseline should you click "Yes".

Consequences of Approval: When you approve, RTE updates its internal records. For each program/variant combination included in the comparison, the run generated during that comparison execution replaces the previous run that was marked as the reference.

Strong Recommendation: Making approval a regular part of your workflow after verifying code or configuration changes is highly recommended, especially following any modifications that alter the structure (fields, types, order) of variables exported via EXPORT_DATA, or after confirming a configuration change has the desired, stable outcome. Approving the new state as the reference significantly simplifies subsequent regression tests or verifications. Future comparisons against this updated reference will start from a structurally identical baseline, reducing or entirely eliminating the need for potentially complex data mappings that were required to bridge the gap with the older, now-obsolete reference run. This keeps your testing process efficient and focused on detecting new, unintended changes.

5.8 Preserving Your Mapping

Configuring intricate mapping rules, especially for complex structural changes or cross-program comparisons, can involve considerable effort. Repeating this setup for subsequent test cycles would be inefficient. RTE provides a standard way to save your entire comparison configuration, including all mapping details, for future use.

Chapter 6: Managing Test Runs

Over time, as you conduct numerous tests and verifications, your RTE system will accumulate a history of test runs. The "Manage runs" utility, accessible from the main ZRTE_START transaction, provides a straightforward interface to view the details of these created runs and, importantly, to delete those that are no longer needed, helping to keep your test data repository organized and efficient.

6.1 Selecting Runs

Upon selecting "Manage runs," you are presented with a selection screen that allows you to precisely filter the runs you wish to view or potentially delete.

'Manage runs' selection screen with criteria like RunID, Program, Variant, etc., and Mode options View/Delete

You can specify your search criteria using a combination of the following fields: RunID, Program, Variant, the "Is reference run?" flag, Description, User, and the Date/Time of execution.

Below the selection criteria, you'll find a "Mode" section with two options: View, which displays the matching runs in read-only mode, and Delete, which additionally lets you remove selected runs from the result list.

6.2 Viewing and Deleting Runs

After setting your selection criteria and choosing a mode, executing the screen will display an ALV grid listing all runs that match your input.

'Manage runs' ALV results with selection checkboxes on the left and delete (trash can) icon on the toolbar highlighted

The ALV grid shows the same fields available on the selection screen (RunID, Program, Variant, Is ref?, Description, User, Date, Time), providing a clear overview of each run.

6.3 Recommendations for Managing Runs

Effective management of test runs is important for system performance and clarity.

Regularly using the "Manage runs" tool to clear out obsolete data will help ensure your RTE environment remains lean and focused on the most relevant test information.

Chapter 7: Summary and Cheatsheet

RTE enables a systematic approach to regression testing and configuration verification. Developers or functional consultants instrument code (custom programs or standard SAP programs via enhancements) using ZCL_RTE=>EXPORT_DATA to define test variables. Initial "reference runs" are created via ZRTE_START (transaction "Run program") before code or configuration changes, capturing the baseline state. After modifications or configuration deployment, the "Compare runs" function is used to re-execute the program or compare historical runs, highlighting differences against references. Comparisons include RAW data (fast yes/no check), Description (structure check), and iData (detailed content analysis). For structural changes or cross-program tests, "iData parameters" allow extensive data mapping. If new results are correct and verified, they can be "Approved" as the new reference. "Manage runs" facilitates viewing and deleting old test data. This iterative cycle ensures quality and stability for both development and configuration changes. Key users can also execute comparisons for pre-instrumented programs to validate outcomes.

7.1 RTE Cheatsheet

Here's a quick reference to the most essential RTE elements:

  • ZCL_RTE=>EXPORT_DATA: marks a variable for capture during an RTE run (IV_VAR_NAME = logical name, I_DATA = the variable).
  • ZCL_RTE=>IN_RTE: returns abap_true when the program is executed via RTE; use it to guard test-only logic.
  • ZCL_RTE=>COMBINE_DATA: adds fields from a structure or elementary variable to a base table or structure before export.
  • ZRTE_START: the central transaction offering "Run program", "Compare runs", and "Manage runs".
  • Reference run: the approved baseline for a program/variant pair; only one exists per pair.
  • ZRTE_UNIQN: identifier distinguishing multiple exports of the same IV_VAR_NAME within a single run.
  • iData parameters: mapping rules (include/exclude fields, rename, retype, reorder, WHERE filters, conversion exits) applied before comparison.
  • Approve: promotes the runs created during a comparison to become the new reference runs (cannot be undone).

Appendix A: Glossary of Terms

Appendix B: Common Issues & Troubleshooting