RRAFS RobotJ Implementation with STAF

Updated:
by Carl Nagle


Overview

This is an overview of the RobotJ Integration project, which is currently in progress. (See the RRAFS RobotJ Integration document.) For brevity, Rational Robot Classic will be referred to as "Classic", and Rational RobotJ as "RobotJ".

The RobotJ implementation closely mimics the easy-to-expand modularity implemented with Classic. Thus, for most of the new functionality, we have implemented a single "Driver" module which mimics a very small subset of the StepDriver functionality found in Classic. We can do this because each call (test record) to RobotJ will have already been fully processed by the Classic StepDriver.

The job of the Driver module is to route an incoming test record according to its record type. Initially, the record types are likely going to be limited to Driver Commands and Component Functions. (Other record types would have already been handled by Classic at this point.)

As in the Classic StepDriver, the RobotJ Driver will invoke either a Driver Command parser or a Component Functions router. See Fig. 1 below. Thus, the Driver Command parser and the Component Functions router closely mirror their Classic implementations.

Fig. 1: RobotJ Routing
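The routing described above can be sketched roughly as follows. This is a minimal illustration, not the actual RRAFS implementation: the class and method names are hypothetical, and the "C"/"T" record-type codes are assumed stand-ins for Driver Command and Component Function test records.

```java
import java.util.Map;
import java.util.function.Function;

// Illustrative sketch of the Driver's record routing (names are hypothetical,
// not the actual RRAFS API). The first field of a test record identifies its
// record type; the Driver dispatches on that type.
public class DriverSketch {

    // Assumed record-type codes: "C" = Driver Command, "T" = Component
    // Function test record. (Other record types would already have been
    // handled by Classic before the record reaches this Driver.)
    static final Map<String, Function<String[], String>> ROUTES = Map.of(
        "C", DriverSketch::processDriverCommand,
        "T", DriverSketch::routeComponentFunction
    );

    static String route(String record) {
        String[] fields = record.split(",");
        Function<String[], String> handler = ROUTES.get(fields[0].trim().toUpperCase());
        return handler == null ? "UNKNOWN_RECORD_TYPE" : handler.apply(fields);
    }

    static String processDriverCommand(String[] fields) {
        // fields[1] = the Driver Command keyword
        return "DriverCommand:" + fields[1].trim();
    }

    static String routeComponentFunction(String[] fields) {
        // fields[1] = window, fields[2] = component, fields[3] = action
        return "ComponentFunction:" + fields[3].trim();
    }
}
```

Because dispatch is table-driven, adding a new record handler touches only the table and its module, which is the modularity the text describes.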

This implementation model is very beneficial. We can easily add and enhance RobotJ Driver Commands and Component Functions in their respective modules without impacting other modules. This limits or eliminates changes and maintenance in the STAF support module when new features and functions are added. For example, invoking any Driver Command or any Component Function in RobotJ goes through the same STAF function call.
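A small sketch of why the single-call design keeps the transport layer stable, assuming a simplified gateway (the class and interface names here are invented for illustration; they are not the STAF API):

```java
// Hypothetical single entry point mirroring the idea that every RobotJ
// request passes through one unchanged call; the request payload varies,
// but the transport signature does not.
public class EngineGateway {

    public static class Result {
        public final int rc;          // 0 = success, nonzero = failure
        public final String result;
        Result(int rc, String result) { this.rc = rc; this.result = result; }
    }

    interface Engine { String process(String testRecord); }

    private final Engine engine;
    public EngineGateway(Engine engine) { this.engine = engine; }

    // The one call every record goes through: new Driver Commands and
    // Component Functions require no change to this method.
    public Result submit(String testRecord) {
        try {
            return new Result(0, engine.process(testRecord));
        } catch (Exception e) {
            return new Result(1, String.valueOf(e.getMessage()));
        }
    }
}
```

New functionality lands entirely inside the `Engine` implementation; the gateway, like the STAF support module it stands in for, never changes.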



Transparency

To minimize the burden on users preparing their applications for testing, we are striving to keep RobotJ usage as transparent as possible. We must not require the user to build an Application Map for Classic AND another for RobotJ. Yet, if a user has a RobotJ Application Map available, we will take advantage of it. Obviously, users who wish to enable or use RobotJ functionality will still have to make sure the environment is properly enabled.

Transparency will require that we be able to locate and properly identify the correct GUI object in RobotJ using the same information we provide to Classic. That generally means we will use Classic recognition strings to locate RobotJ objects. No small feat. Yet the overall algorithm becomes a tool-independent model for locating objects based on tool-independent recognition strings. That is a nice step to be taking.
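The idea can be sketched as parsing a recognition string into tool-independent properties and matching them against any candidate object's properties. The `Type=...;Text=...` form below is a simplified, assumed syntax for illustration; the framework's actual recognition-string grammar is richer.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of turning a Classic-style recognition string into tool-independent
// key/value pairs that a RobotJ-side matcher could evaluate.
public class Recognition {

    // "Type=PushButton;Text=OK" -> {Type=PushButton, Text=OK}
    static Map<String, String> parse(String rec) {
        Map<String, String> props = new LinkedHashMap<>();
        for (String part : rec.split(";")) {
            int eq = part.indexOf('=');
            if (eq > 0) {
                props.put(part.substring(0, eq).trim(), part.substring(eq + 1).trim());
            }
        }
        return props;
    }

    // A candidate object matches when it exposes every property named in the
    // recognition string with the same value; extra properties are ignored.
    static boolean matches(Map<String, String> rec, Map<String, String> objectProps) {
        return rec.entrySet().stream()
                  .allMatch(e -> e.getValue().equals(objectProps.get(e.getKey())));
    }
}
```

Nothing in the matcher knows which tool supplied the candidate's properties, which is what makes the model tool-independent.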

Transparency also means that certain global services--like logging and DDVariable storage--must be generically handled such that Classic, RobotJ, and everything else are all logging to the same log, and referencing the same data storage. We have implemented our own STAF services to accomplish this. Visit the STAF website for more info on what STAF has to offer.



Services

One of the things we have needed for a very long time is to eliminate unnecessary dependence on a specific automation tool for services that can be rendered elsewhere. As mentioned earlier, generic text or XML logging, file handling and parsing, and DDVariable storage are good examples of services that really should be tool independent. (The DDVariableStore.DLL is tool independent, but not yet platform independent. I also have concerns that it may not work properly across processes as intended.)

The integration of RobotJ into the framework helps push us down that path a little sooner, because it makes both Classic and RobotJ log to the same log and access the same data storage--they simply share the same reusable code and modules, avoiding duplicate implementations.

As mentioned earlier, we are initially targeting the integration of STAF services to accomplish this.

Our first goal was to provide shared logging services, since logging information and status is so critical. Both Classic and RobotJ access the same log via the same service. For the Classic engine this migration is intended to be transparent: the existing LogUtilities API does not change, and the intent is to have zero impact on existing libraries and scripts. Where changes are NOT transparent, they will simply surface as new features and functions that do not impact existing libraries and scripts.
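The migration pattern can be sketched as a stable facade over a swappable backend. "LogFacade" and its methods below are invented stand-ins, not the real LogUtilities signatures; the point illustrated is that callers keep their existing API while every engine writes through one shared sink.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of keeping an existing logging API stable while redirecting its
// backend to a shared service (names are illustrative).
public class LogFacade {

    interface LogBackend { void write(String facility, String message); }

    // Shared in-memory backend standing in for a shared STAF log service:
    // every engine writes through the same sink.
    static class SharedLog implements LogBackend {
        final List<String> lines = new ArrayList<>();
        public void write(String facility, String message) {
            lines.add(facility + ": " + message);
        }
    }

    private final String facility;
    private final LogBackend backend;

    LogFacade(String facility, LogBackend backend) {
        this.facility = facility;
        this.backend = backend;
    }

    // Existing-style call; callers never see which backend is in use.
    void logMessage(String message) { backend.write(facility, message); }
}
```

Swapping `SharedLog` for a real service client changes nothing on the caller's side, which is the "zero impact on existing libraries and scripts" property described above.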

Beyond logging, we have services for shared variable storage and App Map handling. Later we will have independent test file handling and parsing. The ultimate goal is to only have to deal with tool-dependent Driver Commands and Component Functions. Everything else should be independent and reusable for each new engine.
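A shared variable store can be sketched in the same spirit. This is a toy in-process map, not the actual DDVariable or STAF VAR service; the case-insensitive lookup and empty-string default are assumptions made for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a shared DDVariable-style store: one map that both engines
// read and write, so a variable set by Classic is visible to RobotJ.
public class SharedVariables {
    private static final Map<String, String> VARS = new ConcurrentHashMap<>();

    public static void set(String name, String value) {
        VARS.put(name.toLowerCase(), value);   // assumed case-insensitive names
    }

    public static String get(String name) {
        return VARS.getOrDefault(name.toLowerCase(), "");  // assumed "" default
    }
}
```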
