
Part-III of this series will focus on the Business Process Engine (BPE) component of SAP Exchange Infrastructure as shown in the figure below.

[Figure: SAP Exchange Infrastructure components, highlighting the Business Process Engine]

Cross-Component Business Process Modelling (ccBPM) within SAP Exchange Infrastructure is mainly focused on modelling messages at the integration layer based upon the message payload data. ccBPM has a design component (J2EE) for designing the model in the Integration Repository and a runtime component called the Business Process Engine (ABAP), which is similar to the SAP Workflow Engine in any SAP application. At runtime the BPE communicates with the Integration Engine while executing the pipeline service steps.

The developer has to carefully analyze and think twice before designing a ccBPM model for any business scenario integration. The correct usage of ccBPM is well documented at http://help.sap.com. Model design plays a very important role in the performance of the business scenario on the Integration Server, and a bad ccBPM design can cause performance issues in the various runtime components (IE, AE, BPE). Note that every message entering the BPE is duplicated and every message leaving the BPE is also duplicated; these duplicate messages are persisted to the database with different payload versions.
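To get a feel for what this duplication means for sizing, the following back-of-the-envelope sketch (plain Python, purely illustrative; the figures are hypothetical, not SAP measurements) counts the persisted message copies for a single process instance under the behaviour described above.

# Assumption taken from the text above: every message entering the BPE and
# every message leaving it is duplicated, and the copies are persisted with
# their payload versions. Numbers below are hypothetical.
def persisted_copies(msgs_in: int, msgs_out: int) -> int:
    """Rough count of persisted message copies for one process instance."""
    return 2 * msgs_in + 2 * msgs_out

# Example: a collect scenario that receives 10 messages and sends 1 bundled message
print(persisted_copies(10, 1))   # -> 22 copies persisted for a single instance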

Tuning Business Process Engine:

Keep in mind that there are no tuning possibilities for persistence, work item creation and the order of work items in the BPE. These have to be taken care of in the design phase itself. Other areas where tuning is possible in the BPE are:

1. Tracing:

Go to transaction SWELS and switch the event trace off in the production Integration Server. Check transaction SWF_TRC for an empty trace list.

2. tRFC:

By default, the workflow can use all the tRFC resources in the system, mainly to execute send and transform steps in the process. Other processes, such as the pipeline services, can also use the same tRFC resources at runtime.

A. In transaction SMQS, register the workflow destination WORKFLOW_LOCAL_<Client#> in order to restrict connections for the BPE. Also tune the tRFC resources in parallel to this setting so that the resource consumption by the BPE and tRFC remains controllable.

B. Create a separate RFC server group in transaction RZ12 and assign it to the outbound scheduler (QOUT) via transaction SMQS, so that the load created by BPE steps (send, transform) is restricted to that server group. The only limitation is that the new RFC server group is used for all outbound tRFC calls.

Try to optimize the integration process as much as possible through the design of the interface: if possible, use the transformation step outside the process, move mappings into the Integration Server pipeline rather than the BPE, and make sure to include the resource requirements of ccBPM-type scenarios in the Quick Sizing exercise.
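Conceptually, restricting the registered destination works like putting a bounded worker pool in front of the BPE's outbound calls. The sketch below is a toy Python model, not an SAP interface; the limit of four parallel connections and the function names are assumptions chosen purely for illustration.

import threading
from concurrent.futures import ThreadPoolExecutor

# The semaphore plays the role of the connection limit configured for the
# registered destination (e.g. in SMQS for WORKFLOW_LOCAL_<Client#>).
MAX_BPE_CONNECTIONS = 4
bpe_slots = threading.Semaphore(MAX_BPE_CONNECTIONS)

def bpe_send_or_transform(step_id: int) -> str:
    """A BPE send/transform step competing for the restricted tRFC resources."""
    with bpe_slots:                  # at most MAX_BPE_CONNECTIONS run concurrently
        return f"step {step_id} executed"

# Twenty steps are queued by the workflow, but only four run at any one time,
# leaving the remaining tRFC resources free for other pipeline processing.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(bpe_send_or_transform, range(20)))
print(len(results))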

3. ccBPM Inbound Queue Processing:

Inbound processing is a runtime component of SAP XI's ccBPM that links the Integration Engine to the Business Process Engine. It defines how SAP XI messages are handed over to the receive steps of process instances. A message can start a new process instance or can be bound to a running process instance using a correlation. You can configure inbound processing to improve performance and message throughput.

[Figure: inbound processing handing messages from the Integration Engine over to the Business Process Engine]

The runtime of the Business Process Engine supports the following delivery modes for inbound processing:

A. Inbound processing with buffering (classic setting, default):

In this setting, a message is assigned to an existing process instance provided it has a matching active correlation, regardless of whether a corresponding receive step is active or not. If no active receive step is currently available to receive the message, the message is buffered in a separate, process-instance-specific queue. As soon as the corresponding receive step is reached, the system delivers the first received message in the queue that satisfies the relevant correlation. If multiple messages for one receive step have been buffered, they are processed in the sequence in which they entered the buffer. If the running process instance provides a corresponding active receive step for an incoming message, the message is delivered directly without any buffering.

The usage of inbound processing with buffering may impact the delivery of messages in the following ways: even if the process instance does not provide a receive step at the present stage of its execution, the message might be buffered and delivered to the process instance later. If a process instance has long-running open correlations, messages might be buffered and delivered to the process instance even though it no longer provides any further receive step.

Example Use Cases: Interrupting synchronous processing in collect scenarios. Interrupting synchronous processing in request/response scenarios. Sequential receive steps in a process definition.
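A minimal way to picture the buffered behaviour is the following toy model (Python, purely conceptual; the class and method names are invented and do not correspond to SAP objects): messages that arrive while no receive step is open are parked in a per-instance FIFO buffer and handed out in arrival order once the process reaches a receive step.

from collections import deque

class BufferedProcessInstance:
    """Toy model of one process instance using the classic, buffered inbound processing."""
    def __init__(self):
        self.buffer = deque()               # separate, process-instance-specific queue
        self.receive_step_active = False
        self.delivered = []

    def inbound(self, message):
        # Assumes the message already matched an active correlation of this instance.
        if self.receive_step_active:
            self.delivered.append(message)  # direct delivery, no buffering
            self.receive_step_active = False
        else:
            self.buffer.append(message)     # park until a receive step opens

    def reach_receive_step(self):
        if self.buffer:
            self.delivered.append(self.buffer.popleft())  # oldest buffered message first
        else:
            self.receive_step_active = True               # wait for the next message

# Two messages arrive before the process reaches its receive steps:
p = BufferedProcessInstance()
p.inbound("item 1")
p.inbound("item 2")
p.reach_receive_step()
p.reach_receive_step()
print(p.delivered)                          # delivered in arrival order: item 1, item 2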

B. Inbound processing without buffering:

If buffering is deactivated, a message is taken up by a running process instance only if it matches the open correlation and only if the process instance provides an open receive step at that stage of the process. The messages are handed over to the running process instance directly. Therefore the developer has to ensure that the process instance provides an open receive step for any message dedicated to that instance. If you select this setting, the qRFC entry returns an error when a message is to be delivered for which no receive step is active. The message remains in the qRFC queue and message processing for this process type is stopped.

Prerequisites for Inbound Processing Without Buffering: The processing sequence in the process definition and the sequence in which the messages for the process instance are delivered must be identical. This sequence must be guaranteed by the quality of service when the message is sent. The processing following a receive step must be executed synchronously until the next receive step.

Example Use Cases: Keep synchronous processing in collect scenarios. Keep synchronous processing in request/response scenarios. Use parallel receive steps in a process definition.
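For contrast, the same toy model without buffering (again conceptual Python, not SAP code) shows why an open receive step must already exist when a message arrives; otherwise the delivery fails and, as described above, the entry stays in the qRFC queue.

class NoActiveReceiveStep(Exception):
    """Stands in for the qRFC error reported when no receive step is active."""

class UnbufferedProcessInstance:
    def __init__(self):
        self.receive_step_active = False
        self.delivered = []

    def inbound(self, message):
        if not self.receive_step_active:
            # Nothing is buffered: the message stays in the qRFC queue and
            # processing for this process type stops.
            raise NoActiveReceiveStep(f"no open receive step for {message!r}")
        self.delivered.append(message)
        self.receive_step_active = False

p = UnbufferedProcessInstance()
try:
    p.inbound("item 1")
except NoActiveReceiveStep as err:
    print("delivery blocked:", err)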

Queue Assignments for Inbound Processing:

A. One Configurable Queue:

Using the queue assignment "One Configurable Queue" is typically more about being able to configure and balance the available qRFC resources (e.g. to prioritize certain processes) than about performance and throughput. By choosing this queue assignment, the name of the process-type-specific queue is changed to XBPE_<Task>, where <Task> represents the runtime object of the integration process. This queue can then be assigned to a dedicated server providing special processing capabilities (e.g. a high-performance server, a dedicated server for batch processing, or specific logon groups).

Example Use Cases: For ease of configuration, the setting "One Configurable Queue" can be chosen for all time-critical processes. Time-critical processes with overlapping correlations. Processes with overlapping correlations that cause a high workload on the server.

B. Multiple Queues (Random):

The queue assignment "Multiple Queues (Random)" allows processes without correlations to be executed in parallel queues in a more controlled way. This way, existing resources can be used more efficiently if the system is not yet at maximum load (better scalability). Multiple parallel inbound queues are created randomly, ignoring correlation settings. Messages start new process instances instead of being correlated to a running instance (even if they belong together semantically according to the correlation). Therefore the setting "Multiple Queues (Random)" can only be used in scenarios that do not require any correlation.

Example Use Cases: Async-sync bridges / lookup scenarios. Sync-async bridges without response messages (using acknowledgements). Multicasting / split scenarios without response messages (using acknowledgements). Weak correlations in collect scenarios / technical collection of messages.
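The effect of "Multiple Queues (Random)" can be pictured as spreading inbound messages over several parallel queues without inspecting the payload at all, so every message simply starts its own process instance. The sketch below is a conceptual Python illustration; the queue names and the choice of four queues are assumptions, not SAP defaults.

import random
from collections import defaultdict

NUM_QUEUES = 4                               # assumed degree of parallelism
queues = defaultdict(list)

def assign_random_queue(message_id: str) -> str:
    """Random assignment: correlation content is ignored, messages are spread evenly."""
    queue_name = f"INBOUND_RANDOM_{random.randrange(NUM_QUEUES)}"   # illustrative name only
    queues[queue_name].append(message_id)
    return queue_name

for i in range(12):
    assign_random_queue(f"MSG-{i:03d}")

# Each queue can be worked on in parallel, so throughput scales with resources,
# but two semantically related messages may land in different queues and thus
# start separate process instances - hence no correlations are allowed here.
print({name: len(msgs) for name, msgs in queues.items()})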

C. Multiple Queues (Content-Specific):

The system determines the name of the queue from the message fields in the correlation container. This guarantees that messages are distributed to the correct process instance via correlations even though parallel queues are used for inbound processing. The queue assignment "Multiple Queues (Content-Specific)" is also about scalability: if resources (number of work processes, CPUs) permit it, you can increase throughput by using multiple queues instead of one. There are some prerequisites that your process definition has to fulfill if you want to make use of multiple queues that are determined content-specifically:

Active correlations of a process instance must not overlap and must not be nested. The content of a correlation must not change during the lifetime of a process instance. The correlation has to be selective; otherwise all messages will be processed by a single queue and there won't be any performance gain compared to the "One Queue" behaviour.

Example Use Cases: Collect scenarios. Sequential collect scenarios. Processes with sequential correlations. Split scenarios / multicasting with response messages. Sync-async bridges with response messages.
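Content-specific assignment can be thought of as deriving the queue name deterministically from the correlation fields, so that all messages belonging to one correlation land in the same queue (and therefore reach the same instance in order), while different correlation values fan out across parallel queues. The following Python sketch is a conceptual model only; the hashing scheme and queue names are assumptions, not the actual BPE algorithm.

import hashlib
from collections import defaultdict

NUM_QUEUES = 4                                   # assumed number of parallel queues
queues = defaultdict(list)

def content_specific_queue(correlation_fields: dict) -> str:
    """Derive a stable queue name from the fields of the correlation container."""
    key = "|".join(f"{k}={v}" for k, v in sorted(correlation_fields.items()))
    bucket = int(hashlib.md5(key.encode()).hexdigest(), 16) % NUM_QUEUES
    return f"INBOUND_CONTENT_{bucket}"           # illustrative name only

for po, item in [("PO-4711", 1), ("PO-4711", 2), ("PO-4712", 1), ("PO-4713", 1)]:
    queues[content_specific_queue({"purchase_order": po})].append((po, item))

# All messages for PO-4711 hash to the same queue, preserving their order per
# correlation; a non-selective correlation (e.g. a constant value) would send
# everything to one queue and remove the benefit of parallel queues.
print(dict(queues))

This also illustrates the prerequisite stated above: only a selective correlation spreads the load over several queues and yields a throughput gain.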

References:

SAP online help documentation.
posted on 2007-08-22 11:59 by 会东