Bonita Sub-Processes

Playing around with Bonita sub-processes gave me a couple of interesting discoveries…

The first step was to create a sub-process by selecting tasks and using the context menu to create a new sub-process. A sub-process is a closed process connected to the main process by an interface.

But first, a list of findings from using the ‘create subprocess’ function.

  1. The new process lacks a lane. It’s easy to create one, and you should do it to define a default actor.
  2. The new process lacks a start and an end event. It works without them, but for consistency and a defined flow you should create them.
  3. Every task is renamed to ‘Copy of …’. That’s ugly.
  4. The new process is disconnected from the main process. This means it has its own variables. You need to define the input and output variable mappings to transfer data between the processes. The mapping is not maintained automatically after the sub-process has been created; a mapping is only generated at creation time, and that generated mapping is not correct at all.
    To fix the main-to-sub mapping, change the mapping type from ‘Assigned to Contract Input’ to ‘Assigned to Data’.
    To create the sub-to-main mapping, use the ‘Auto map’ button.
  5. Sub-processes also have their own set of actors. You need to map them separately.
  6. It’s not possible to stop the main process from inside the sub-process. Every ‘end’ jumps back into the main process execution. This could be a problem if fatal errors occur inside the sub-process.

To use the interface between main and sub-process you can use the variable mapping as described above. For every new variable you need to extend the mapping. But you are free to use the sub-process in different situations and map it with multiple data sets.

More interesting is the possibility to send errors to the calling process. Use the ‘end error’ event and define an error code. Add a ‘catch error’ event to the ‘call activity’ and you can handle the error result of the sub-process. Important: no data is transferred from the sub-process to the main process in case of an error.

You can use my example process to explore the behavior. Initial and Step1 to Step4 form the default flow; Steps 2 and 3 are part of the sub-process. Try setting variable values throughout the process. In Step3 you can choose the ‘Error’ button to provoke an error. Use it and you will see that the data is not changed in the main process. Download!

Sub-processes are interesting for separating or re-using parts of the main process. But the benefits are limited if creation and maintenance are supposed to stay simple.

BPM Error Handling Best Practice

When creating business processes with a BPM engine (in my case Bonitasoft BPM) we had the problem of handling failures in the right way. At first we tried to catch all errors and handle them with an End/Terminate element. Looking back, it was an odd way to process the exception states.

The focus should be on maintenance and, above all, on the customers using the system. Customers don’t want to reinitialize a process every time an error occurs. They want the maintainer to fix it and let the process continue. Maintainers don’t want to spend much effort on running processes.

Let me show two very common scenarios that happen in real life:

  1. All processes are based on tasks whose operations work over the network, for example sending mails or using a database. A loss of network connectivity (maybe only of a segment) will cause a lot of tasks to fail and trigger the error handling.
  2. A user creates a process instance and enters data that is simply wrong, but the data can only be validated later in the flow, for example a wrong customer or contract id.

The first case is a technical problem. It should be fixed by the administrators, and then the processes should be restarted and do their work. It’s a technical failure.

The second case is a business problem. We got wrong information from the user. A task will fail and cannot be completed, even if it is retried. In this case a dedicated task error-handling path must be followed. And importantly: different kinds of incidents can occur.

To implement this concept we changed the definition of automatic tasks with connectors. A connector should throw an exception if something technical goes wrong. This exception forces the task to change its status to ‘failed’. In that case we can retry the task once the technical problem is solved.

Next we check the return values of the connector. If something went wrong on the business side, the returned data should contain that information, for example “returncode=-5” or empty values like “customerId=”. Small post-processor scripts can check the data and throw errors that jump into another part of the process.
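
As an illustration, such a post-processor could look like the following Groovy sketch. The variable names ‘returncode’ and ‘customerId’ are only assumptions matching the examples above, and the plain exception mirrors the script shown further down:

// Hypothetical post-processor script (Groovy); 'returncode' is assumed to be an Integer
// connector output and 'customerId' a String process variable.
// Throwing the exception triggers the error handling configured on the task.
if (returncode != null && returncode < 0)
    throw new Exception("business error: returncode=" + returncode)
if (customerId == null || customerId.isEmpty())
    throw new Exception("business error: customer id is empty")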

The following example shows the behavior. I used a manual task to insert the ‘returned data’

[Screenshot: Bonita_BPM]

and validate it with post processors

[Screenshot: Bonita_BPM_Error_post]

The processors are very simple, e.g. ‘checkStatusNo’:

// Throw an exception when the entered status is "no"
if (status.equals("no"))
  throw new Exception("status is no");

The ‘checkStatusError’ script changes the status of the task to ‘failed’. That is the case of a technical failure.

In this way processes are more robust, customers are happier and the visual representation is clearer. It’s a win-win situation 😉

Download the sample process.

New scope of mhus.de and mhus.org

In the near future I will separate the home of the open-source projects from the tech talk of this blog. The domain mhus.org is already registered and I am searching for a hosting platform for the open-source home of the de.mhus projects. The mhus.de blog will become a more open-minded and creatively driven channel.

Migrate from Karaf 3 to 4, Part 2

While migrating from Karaf 3 to 4 another funny thing happened: all my JDBC datasources, configured in the deploy folder, were gone. At first I was close to panic, because we want to migrate the production environment in the next few days. But then I realized that we had run all the test cases without any impact.

Playing around and, in the end, a deeper look into the Karaf sources showed me the solution. The new commands provided by Karaf use a more complex query and filter to find JDBC datasources. The new command jdbc:ds-list needs a property ‘dataSourceName’ to be defined on the service before a datasource shows up in the list. The datasource itself was present as before, just not shown.

First I re-implemented the old command jdbc:datasources, which lists all datasources present by their implemented interface (part of mhus-osgi-tools). Then I changed all the blueprint XML files and appended the required property

<entry key="dataSourceName" value="${name}"/>

to be compatible with the new JDBC commands of Karaf.
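
For orientation, a deploy-folder blueprint with that entry might look roughly like this (the DataSource class and all names and values are placeholders, not taken from my real configuration):

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <!-- Example DataSource bean; any javax.sql.DataSource implementation works -->
    <bean id="myDataSource" class="org.apache.commons.dbcp2.BasicDataSource">
        <property name="driverClassName" value="org.postgresql.Driver"/>
        <property name="url" value="jdbc:postgresql://localhost:5432/mydb"/>
        <property name="username" value="karaf"/>
        <property name="password" value="karaf"/>
    </bean>
    <service ref="myDataSource" interface="javax.sql.DataSource">
        <service-properties>
            <entry key="osgi.jndi.service.name" value="jdbc/mydb"/>
            <!-- The property required by the Karaf 4 jdbc:ds-list command -->
            <entry key="dataSourceName" value="mydb"/>
        </service-properties>
    </service>
</blueprint>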

Shell: Simple bundle watch list

Creating a watch list can be laborious (what a word :-/). Therefore shell scripting can help a lot.

The first sample shows how to grep a list of interesting bundles to watch. In my case it’s all mhu-lib bundles (add ‘--color never’ to avoid color escape sequences that would disturb further processing):

karaf@root()> bundle:list|grep --color never mhu-lib
 89 | Active |  80 | 3.3.0.SNAPSHOT     | mhu-lib-annotations
 90 | Active |  80 | 3.3.0.SNAPSHOT     | mhu-lib-core
 91 | Active |  80 | 3.3.0.SNAPSHOT     | mhu-lib-jms
 92 | Active |  80 | 3.3.0.SNAPSHOT     | mhu-lib-karaf
 93 | Active |  80 | 3.3.0.SNAPSHOT     | mhu-lib-persistence
karaf@root()>

I only need the bundle names, so cut the last column out of the result:

karaf@root()> bundle:list|grep --color never mhu-lib|cut -d '\|' -f 4 -t
mhu-lib-annotations
mhu-lib-core
mhu-lib-jms
mhu-lib-karaf
mhu-lib-persistence
karaf@root()>

Now we need to parse it line by line; a loop will help. The results are used to add each bundle to the bundle:watch list:

bundle:list|grep --color never mhu-lib|cut -d '\|' -f 4 -t|run -c "for b in read *;bundle:watch \$b;done"

The ‘read *’ command reads everything from the pipe, and the for loop cuts it into lines and runs the loop body for every entry. The line content is stored in ‘b’. To prevent the shell from replacing ‘$b’ immediately (it should be resolved later, inside the loop), you need to escape the ‘$’ character.

If you want a persistent bundle watch, use the mhus-osgi-tools command ‘bundle:persistentwatch’. You need to add the entries to the persistent list.

bundle:list|grep --color never mhu-lib|cut -d '\|' -f 4 -t|run -c "for b in read *;bundle:persistentwatch add \$b;done"

Print the list using ‘list’:

karaf@root()> bundle:persistentwatch list
Bundle             
-------------------
mhu-lib-annotations
mhu-lib-core       
mhu-lib-jms        
mhu-lib-karaf      
mhu-lib-persistence

 

Karaf: Scheduling GoGo Commands Via Blueprint

A new feature in mhu-lib 3.3 is the Karaf scheduling service. The service is designed to be configured via blueprint and executes gogo shell scripts. This way you can automate all regular maintenance tasks.

Use this sample blueprint to print a hello world every 2 minutes:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <bean id="cmd" 
          class="de.mhus.lib.karaf.services.ScheduleGogo" 
          init-method="init" destroy-method="destroy">
      <property name="name" value="cmd_hello"/>
      <property name="interval" value="*/2 * * * *"/>
      <property name="command" value="echo 'hello world!'"/>
      <property name="timerFactory" ref="TimerFactoryRef" />
    </bean>
    <reference
       id="TimerFactoryRef" 
       interface="de.mhus.lib.core.util.TimerFactory" />
    <service 
      interface="de.mhus.lib.karaf.services.SimpleServiceIfc" 
      ref="cmd">
        <service-properties>
            <entry key="osgi.jndi.service.name" value="cmd_hello"/>
        </service-properties>
    </service>
</blueprint>

Migrate shell commands from Karaf 3 to Karaf 4

Today the migration from Karaf 3 to version 4 brought some interesting new effects. One of them is source code full of yellow ‘blinking’ deprecation markers wherever shell commands are implemented.

It looks like all the shell interfaces from version 3 are deprecated now. The reason is that the developers no longer want commands to be defined via blueprint definition files in the OSGI-INF folder. To establish the new way, a new interface was created and is now the focus.

To use the new interface you first have to change the Maven configuration of your project. Add the following properties:

 <felix.plugin.version>3.0.1</felix.plugin.version>
 <maven.version>2.0.9</maven.version>

And add the following parts to your main pom.xml (note that the pluginManagement section belongs inside the build element):

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.felix</groupId>
      <artifactId>maven-bundle-plugin</artifactId>
      <version>${felix.plugin.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.maven</groupId>
      <artifactId>maven-plugin-api</artifactId>
      <version>${maven.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>

<pluginManagement>
  <plugins>
    <plugin>
      <groupId>org.apache.karaf.tooling</groupId>
      <artifactId>karaf-services-maven-plugin</artifactId>
      <version>${karaf.version}</version>
      <executions>
        <execution>
          <id>service-metadata-generate</id>
          <phase>process-classes</phase>
          <goals>
            <goal>service-metadata-generate</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</pluginManagement>

Now you need to add the following build instruction to every sub-project, in the build/plugins part of its pom.xml:

<plugin>
  <groupId>org.apache.karaf.tooling</groupId>
  <artifactId>karaf-services-maven-plugin</artifactId>
</plugin>

This was the basic configuration to instruct Maven to build everything right. Now you can remove the old blueprint.xml files because they are no longer in use.

To create or update a command, add the following imports:

import org.apache.karaf.shell.api.action.Action;
import org.apache.karaf.shell.api.action.Argument;
import org.apache.karaf.shell.api.action.Command;
import org.apache.karaf.shell.api.action.Option;
import org.apache.karaf.shell.api.action.lifecycle.Reference;
import org.apache.karaf.shell.api.action.lifecycle.Service;
import org.apache.karaf.shell.api.console.Session;

Mark the class as a service and implement ‘Action’:

@Command(scope = "test", name = "cmd", description = "Test Command")
@Service
public class CmdTest implements Action {

The old interface had a method ‘execute(Session)’, but the new one has only ‘execute()’. The Session parameter is gone. To get access to the session you need to add a reference field like this:

@Reference
private Session session;

After building and deploying into the Karaf engine, the command is available as usual.
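
Putting the pieces together, a complete minimal command class might look like this (class name, scope and the optional argument are only illustrative, not taken from the original project):

import org.apache.karaf.shell.api.action.Action;
import org.apache.karaf.shell.api.action.Argument;
import org.apache.karaf.shell.api.action.Command;
import org.apache.karaf.shell.api.action.lifecycle.Reference;
import org.apache.karaf.shell.api.action.lifecycle.Service;
import org.apache.karaf.shell.api.console.Session;

@Command(scope = "test", name = "cmd", description = "Test Command")
@Service
public class CmdTest implements Action {

    // Optional argument, just for illustration
    @Argument(index = 0, name = "message", description = "Text to print", required = false, multiValued = false)
    private String message;

    // The session is injected instead of being passed to execute()
    @Reference
    private Session session;

    @Override
    public Object execute() throws Exception {
        session.getConsole().println(message == null ? "hello" : message);
        return null;
    }
}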