BPMS Lesson Learned

Testing the BPMS component

In the first part of this series, we focused on designing a system that uses BPMS technology for orchestrating workflow in the organisation; in the second, we shared useful points from the developer's perspective. In this third part, we will focus on the quality aspects of the solution.
I remember a discussion with one of our QA engineers about BPMS testing that I want to share. I was asking QA for the requirements on the system, curious what methodology was being used for this component. The answer I got, and which I will probably never forget, was: BPMS is a minor part of the system, hence we are not supposed to test it at all. The motivation behind this article is simply the fact that this approach was not correct, and to provide some insight into what is going on. There is no ambition to provide a complete methodology or best practices for testing a BPMS component; that is the role of a skilled QA.
 

BPMS at the core

BPMS is a solution for orchestrating your business services in-house; simply put, it drives the workflow. BPMS is usually not a decision maker. Decision-making rules are typically required to be flexible and subject to frequent change; they should reflect business changes as quickly as possible. For that reason it is not good practice to hard-code them into processes as a "spaghetti code structure" (if-else branches nested several levels deep), which is error-prone and hard to maintain. Those are the reasons for having a separate component responsible for decision making – a BRE (business rule engine). The QA task can thus be divided into main objectives for functional testing. For given input data, verify:
  • Are all the data necessary for the decision present at the specified point? This can be difficult because of the large number of paths leading into the decision point; regardless of the execution path taken, you verify that all the required data were gathered in the system.
  • Based on the decision results, are the steps executed in the correct order? This verifies the required business process itself.
  • Are the fault recovery procedures working correctly? Switch the system into fault recovery mode and verify that all data were stored correctly and completely.
There can certainly be more aspects, but those are the main ones. The main problem is that these aspects cannot be tested in isolation. By isolation, I mean that you cannot use standard methodologies (e.g. black box, white box, …) and point somewhere in the system. BPMS is a system component that has "memory". That means you cannot simply divide the process arbitrarily into parts and test them separately. Some systems can have something like a "point of synchronization" (regardless of the execution path, the system has a defined data set at that point), but this depends on the design and hence isn't guaranteed.
Let's have a look at the possibilities. The product itself offers a feature called BUnit, an alternative to JUnit in the Java world, which facilitates process unit testing. All invoke activities within the process are mocked – the XML reply is recorded. XML manipulation expressions and data gathering within the flow (aspect 1) can be tested this way by a suitable choice of recorded data, though the tests still take place under artificial conditions; a rough JUnit analogue of the idea is sketched below. Aspect 3 – fault recovery – can be tested relatively easily with this approach, provided no awkward decision was made during the design phase. The test analyst is the key role in this process, and the need for documentation of the system itself goes without saying. Unit testing of the BRE is a completely separate chapter, not discussed here.
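BUnit itself is an ActiveVOS feature, so rather than guessing its API, here is the same recorded-reply idea expressed as a plain JUnit 4 test. The PartnerService and ProcessUnderTest types are hypothetical stand-ins for the mocked invoke activity and the process logic:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CustomerProcessTest {

    /** Hypothetical partner service normally reached through an invoke activity. */
    interface PartnerService {
        String fetchCustomerXml(String customerId);
    }

    /** Hypothetical stand-in for the process logic that gathers decision data. */
    static class ProcessUnderTest {
        private final PartnerService partner;

        ProcessUnderTest(PartnerService partner) {
            this.partner = partner;
        }

        String resolveSegment(String customerId) {
            // in the real process this is an XPath expression over a reply variable
            String reply = partner.fetchCustomerXml(customerId);
            return reply.contains("<vip>true</vip>") ? "VIP" : "STANDARD";
        }
    }

    @Test
    public void gathersDecisionDataFromRecordedReply() {
        // the "recorded" XML reply plays the role of the mocked invoke activity
        PartnerService recorded = new PartnerService() {
            public String fetchCustomerXml(String id) {
                return "<customer><id>" + id + "</id><vip>true</vip></customer>";
            }
        };
        assertEquals("VIP", new ProcessUnderTest(recorded).resolveSegment("42"));
    }
}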
Having verified the basic functionality of the blocks – processes and subprocesses – we can continue with integration testing. Systems of this kind usually have a high degree of integration, so it is really handy to have all back-end systems under your control. Reason no. 1: the system is data-driven – its behaviour depends on the data in those systems. Reason no. 2: BPMS has "memory" (it is stateful). If you want to test from a certain point in the process, you have to bring the system to that point, repeatedly and in a well-defined way. The approach used in web application testing – modifying data in the DB to bring the order, application, etc. into a certain state – is not sufficient here. Having simulators of the real back-end systems proved to be really good practice (see the sketch below): you isolate your system, and the time needed to localize an error drops significantly. This way you can conduct integration testing of bigger functional blocks, up to end-to-end testing. There is no doubt that a high level of automation is a must.
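A back-end simulator does not have to be heavyweight. A minimal sketch using the JAX-WS Endpoint API that ships with Java 6 – the CustomerLookupSimulator service and its operation are made up for the example; the point is a deterministic, locally controlled endpoint the BPMS service URL can be switched to:

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Hypothetical stand-in for a real back-end contract the BPMS calls.
@WebService
public class CustomerLookupSimulator {

    // Deterministic canned answer; a real simulator would typically switch
    // on the id to drive different, well-defined test scenarios.
    @WebMethod
    public String getCustomerStatus(String customerId) {
        return "TEST-STATUS-FOR-" + customerId;
    }

    public static void main(String[] args) {
        // publish locally; the BPMS endpoint URL is then pointed at this address
        Endpoint.publish("http://localhost:8081/sim/customerLookup",
                new CustomerLookupSimulator());
        System.out.println("Simulator running, waiting for process calls...");
    }
}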
 

K-V pairs to Java bean re-map

From time to time you need to remap key-value pairs to regular Java beans on your projects. One really nasty solution to this task is to do it manually. Yes, it works, but this approach is not flexible and, moreover, it is error-prone: each field mapping is hard-coded, and when you add, remove, or modify a field you have to correct all the hard-coded mappings, which is really awkward.

ObjectMapper

Another approach is to use a K-V to bean re-mapper, for example ObjectMapper from the Jackson JSON library; Commons BeanUtils offers some possibilities as well.
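For illustration, with Jackson the whole remapping collapses into a single convertValue call. The Customer bean is a placeholder; the import assumes Jackson 2 coordinates, while Jackson 1.x offers the same method on org.codehaus.jackson.map.ObjectMapper:

import java.util.HashMap;
import java.util.Map;

import com.fasterxml.jackson.databind.ObjectMapper;

public class JacksonRemapExample {

    /** Placeholder bean; Jackson matches map keys to field/property names. */
    public static class Customer {
        public String name;
        public int age;
    }

    public static void main(String[] args) {
        Map<String, Object> kv = new HashMap<String, Object>();
        kv.put("name", "John");
        kv.put("age", 42);

        // one call performs the whole K-V -> bean remapping
        Customer c = new ObjectMapper().convertValue(kv, Customer.class);
        System.out.println(c.name + " / " + c.age);
    }
}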
If for some reason you cannot use these libraries (e.g. a legal problem, or you simply don't find an implementation that suits your needs), then it is time for your own implementation.
The following example implementation remaps string representations to primitives and enums. Some highlights: Java 1.6 doesn't offer any way to find the wrapper class for a primitive, and wrapping every problem (Exception) into a bare RuntimeException is not a good approach – in real usage it is suggested to change this. In the context of this example I think it's fine.

import java.beans.IntrospectionException;
import java.beans.PropertyDescriptor;
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class KVBeanRemaper {

    /** Primitive -> wrapper lookup; Java 1.6 offers no built-in way to obtain it. */
    private static final Map<Class<?>, Class<?>> wrappers = new HashMap<Class<?>, Class<?>>();

    static {
        wrappers.put(byte.class, Byte.class);
        wrappers.put(short.class, Short.class);
        wrappers.put(int.class, Integer.class);
        wrappers.put(long.class, Long.class);
        wrappers.put(float.class, Float.class);
        wrappers.put(double.class, Double.class);
        wrappers.put(boolean.class, Boolean.class);
    }

    @SuppressWarnings({ "unchecked", "rawtypes" })
    public static <T> T remap(final Map<String, ?> keyValue, final Class<T> classMapTo) {

        final Set<String> dataToMap = new HashSet<String>(keyValue.keySet());
        final Field[] fields;

        T res;
        try {
            res = classMapTo.newInstance();
            fields = classMapTo.getDeclaredFields();

            for (Field f : fields) {
                final String key = f.getName();
                if (!dataToMap.contains(key)) {
                    continue;
                }
                if (f.getType().isEnum()) {
                    // enums are resolved from their (upper-cased) string representation
                    findAccessMethod(true, f, classMapTo).invoke(res,
                            Enum.valueOf((Class) f.getType(), keyValue.get(key).toString().toUpperCase()));
                    dataToMap.remove(key);
                } else if (wrappers.containsKey(f.getType()) || f.getType() == String.class) {
                    Class<?> c = f.getType();
                    if (c.isPrimitive()) {
                        c = wrappers.get(c);
                    }
                    // the map value must already have the target type; cast() fails fast otherwise
                    findAccessMethod(true, f, classMapTo).invoke(res, c.cast(keyValue.get(key)));
                    dataToMap.remove(key);
                }
            }
        } catch (Exception ex) {
            // wrapping everything into a bare RuntimeException keeps the example short;
            // real usage deserves a dedicated exception type
            throw new RuntimeException("Error while remapping", ex);
        }
        if (!dataToMap.isEmpty()) {
            throw new RuntimeException("Complete fieldset hasn't been remapped");
        }
        return res;
    }

    private static Method findAccessMethod(final boolean setter, final Field field, final Class<?> klazz)
            throws IntrospectionException {
        // relies on the bean following the standard getter/setter convention
        final PropertyDescriptor pd = new PropertyDescriptor(field.getName(), klazz);
        return setter ? pd.getWriteMethod() : pd.getReadMethod();
    }
}
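A quick usage sketch – the Person bean and Status enum are invented for the example. Note that the remapper requires a no-arg constructor and conventional getters and setters, since PropertyDescriptor looks both accessors up:

import java.util.HashMap;
import java.util.Map;

public class KVBeanRemaperExample {

    public enum Status { ACTIVE, BLOCKED }

    /** Example bean: no-arg constructor plus conventional accessors. */
    public static class Person {
        private String name;
        private int age;
        private Status status;

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public int getAge() { return age; }
        public void setAge(int age) { this.age = age; }
        public Status getStatus() { return status; }
        public void setStatus(Status status) { this.status = status; }
    }

    public static void main(String[] args) {
        Map<String, Object> kv = new HashMap<String, Object>();
        kv.put("name", "John");
        kv.put("age", 42);            // must already be an Integer for c.cast to succeed
        kv.put("status", "active");   // enums are resolved case-insensitively via toUpperCase

        Person p = KVBeanRemaper.remap(kv, Person.class);
        System.out.println(p.getName() + ", " + p.getAge() + ", " + p.getStatus());
    }
}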

Spring JAX-WS timeout

When building a reliable, predictable solution, managing response time is one of the core principles. This is accomplished by implementing a timeout policy – you don't want clients hanging on a connection forever. A very common approach to integration nowadays is to take advantage of the Spring framework.

Spring's JAX-WS web service proxies (JaxWsPortProxyFactoryBean) don't offer a direct way to set a service timeout via one of their properties. The following lines document one possibility for coping with that requirement.

Java implementation:

import org.springframework.remoting.jaxws.JaxWsPortProxyFactoryBean;

/**
 * Extends the standard proxy factory with a timeout property.
 * The custom properties end up in the JAX-WS request context of the port;
 * all values are in milliseconds.
 */
public class AbstractJaxWsPortProxyFactoryBean extends JaxWsPortProxyFactoryBean {

    public static final int CONNECT_TIMEOUT = 2500;

    public void setTimeout(final int timeout) {
        // standalone JAX-WS RI property names
        addCustomProperty("com.sun.xml.ws.connect.timeout", CONNECT_TIMEOUT);
        addCustomProperty("com.sun.xml.ws.request.timeout", timeout);
        // the JDK-bundled JAX-WS RI uses the "internal" package prefix
        addCustomProperty("com.sun.xml.internal.ws.connect.timeout", CONNECT_TIMEOUT);
        addCustomProperty("com.sun.xml.internal.ws.request.timeout", timeout);
    }
}

Spring configuration:

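The original XML snippet did not survive publishing (only the word "Client" remained), so the following bean definition is a reconstruction; all class names, URLs and values are placeholders. The relevant part is the timeout property, which Spring injects through the setTimeout method above:

<bean id="clientService" class="com.example.ws.AbstractJaxWsPortProxyFactoryBean">
    <property name="serviceInterface" value="com.example.ws.ClientService"/>
    <property name="wsdlDocumentUrl" value="http://localhost:8080/ws/client?wsdl"/>
    <property name="namespaceUri" value="http://example.com/ws/client"/>
    <property name="serviceName" value="ClientService"/>
    <property name="portName" value="ClientPort"/>
    <property name="timeout" value="5000"/>
</bean>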

BPMS Lesson Learned

Developer’s experience

In the design part, we focused on the overall architecture of a solution using BPMS; in this part, we will focus on the developer's interaction with the BPMS system, design tools, etc. It describes subjective experiences with ActiveVOS Designer version 6.1. At the time of writing this article, ActiveVOS v. 9 had been announced. I believe that a lot of the issues described in this article have been addressed and the product has made a great step forward. Here are some key points we recognized as limitations.
 

Mitigate your expectations regarding the BPMS designer IDE

The Designer is an Eclipse-based IDE, so those who are familiar with Eclipse shouldn't have a problem starting to work with it. Just expect that not all features are fully elaborated: code completion is completely missing, and so are XPath/XQuery error highlighting and code formatting (even tab indentation is missing). Message transformation can be really painful from that point of view. It is good that these problems were at least partially addressed in later releases.

Team collaboration is a bit difficult

Not because of missing integration with version control systems like SVN, CVS, etc., but simply because the generated artefacts – deployment descriptors (.pdd files), Ant scripts – all contain absolute paths to files, which simply doesn't work on a different PC. Fortunately, this problem is easy to avoid by replacing the absolute paths with relative ones (e.g. a path like C:\work\project\bpel\Process.bpel becomes ../bpel/Process.bpel).

Be prepared that some product features are not reliable or don't work

As nobody is perfect, even ActiveVOS BPMS is no exception. To name just those we had to cope with:

    • eventing – on a WebLogic application server running in a cluster, this feature was not reliable.
    • instanceof – the XML processor used by ActiveVOS (the Saxon library) doesn't support the instanceof keyword used for element identification in an inheritance hierarchy.
    • time-outs on the asynchronous callback "receive" were not reliable – once it timed out after 5 minutes (3 minutes were required), the next time after 1 hour, …

Conclusion

Which features did you find stable in the solution described? Would you suggest changing something that would have a dramatic impact on the developer experience? Share your thoughts in the comment section below.

BPMS Lesson Learned – Design

Business process management system

This blog contribution tries to summarize the experience gained when designing a BPMS solution using ActiveVOS v. 6.1 and to highlight the key design decisions. It describes the consequences of these decisions in the context of the bigger picture (impact on production, maintainability, day-to-day routines, …). As the overall architecture is a set of design decisions that depend on the business context and the objectives you are trying to achieve, there is no simple copy-paste solution applicable in every situation. Let's make a long story short and have a look at the key areas:

BPMS is not a web flow framework!

It can be tempting to use the WS-HT interface for every interaction with a client/user. Besides the fact that you somehow have to initiate an interaction with the client (create a human task) that you are not aware of in advance, it has a negative impact on DB space consumption. Every human task has several underlying processes consuming space, depending on the log level and persistence settings, yet it doesn't hold any business-relevant information. In newer versions this is handled in a better way.

Think of your error handling strategy

The implementation used offers a "suspend on fail" feature: in case of an error the process is put into a suspended state and waits for manual interaction, so the operator can replay it. Be very careful with this feature. Overusing this pattern leads to high database space consumption, because the feature requires full process logging and persistence to be enabled. Moreover, what can the operator actually do with the failing process? In a "properly tested" system a failure can happen for two main reasons: either data are missing, or there is a technical reason such as an unavailable service. In the first case, do you really think the operator will know your client's insurance number or find it somewhere in the system? Certainly not. And what about auditing in such a system – another really important question. Failing over to a human task seems like a reasonable solution. In the second case, the implementation offers a feature called "retry policy"; taking advantage of it, you can make the system immune to short outages.

Structure your processes to smaller blocks

Although I can understand why people tend to design long processes, experience proves the opposite. With one long process that realizes the whole business you may gain readability, but you lose re-usability, maintainability and, more importantly, scalability of the system. Not all processes and sub-processes have the same level of importance, from the technical perspective as well as for the realized business. You can use different levels of process/sub-process logging, persistence, and policies such as the retry policy. All this has a positive impact on lowering database space consumption and improving the stability and robustness of the system. One important thing we shouldn't leave aside is spatial scalability. Every sub-process is just another web service, so if we need to improve the throughput of the system, we are free to set up a new instance and deploy those processes there; we are absolutely free from the infrastructure point of view (load balancing, clustering, …). The only thing to keep in mind is that created human tasks run within the "BPEL engine" (not a different component), and the version we used wasn't able to read human tasks from more than one "BPEL engine".

Sort your data

In every business domain there are legal requirements such as auditing and archiving information for a number of years. A naive approach I've met is: "BPMS holds all the info." It certainly does, but how are the data structured? Are all the data stored in the BPMS business-relevant and hence worth archiving? A BPMS definitely keeps a lot of system information with no business relevance in its own internal structures, e.g. incoming messages as XML objects in variables. That means that to get to a specific piece of information, you would need to find the right variable and then locate the value within the XML object. Moreover, this information has no guaranteed time scope: if the process doesn't have full logging enabled, only the latest state is kept, with no track of changes. The best approach is to address the auditing requirements in advance: store all your audit info in a separate DB schema in a well-defined, readable structure. You save a significant amount of DB space and you don't have to build a "data mining" solution over the BPMS schema – not to mention the expense of additional disk capacity.
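As a rough illustration of what "well-defined readable structure" can mean, here is a hypothetical flat audit record sketched with JPA annotations (not part of the original solution): one row per business-relevant event, trivially queryable and archivable without touching the BPMS schema:

import java.util.Date;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;

// Hypothetical audit record stored outside the BPMS schema.
@Entity
public class AuditRecord {

    @Id
    @GeneratedValue
    private Long id;

    private String processInstanceId; // correlation back to the BPMS instance
    private String businessKey;       // e.g. order or claim number
    private String eventType;         // what happened, in business terms

    @Temporal(TemporalType.TIMESTAMP)
    private Date occurredAt;          // when it happened

    // getters and setters omitted for brevity
}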

Reason about every feature used in human tasks

Human tasks offer a lot of features supporting interaction with users, e.g. e-mail, task escalation, task notification, etc. Pay special attention to the task notification feature. Its internal implementation is equal to a human task that simply doesn't show up in the inbox as a new task, but only as a notification. We measured that one human task consumes roughly 1 MB, so overuse of this feature can have a big impact on the disk space consumed by the DB. The most dangerous thing about notifications is that the underlying system processes stay in the running state until the user confirms delivery, which makes them difficult to delete from the console. The ability to associate a notification with its human task is also missing, so there is no other way to cope with the problem than manual cancellation of the internal system processes – a really time-consuming exercise when your system generates three thousand messages overnight.

Use “proxy back-end call pattern”

Every back-end system has its own domain model and message structure. It is really tedious and time-consuming to build messages on the BPMS side, where the only tools available are XPath, XQuery, etc. Experience proved that a better and more efficient approach is to call a "proxy method" that is responsible for building up the message, sending it out, and processing the result passed back to the BPMS.
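A sketch of the intent behind the pattern (all names are illustrative): the proxy exposes one coarse-grained, BPMS-friendly operation and keeps the message assembly in Java, where it is testable, instead of in XPath/XQuery expressions inside the process:

// Hypothetical proxy for one back-end system; the BPMS calls only this contract.
public interface BillingProxy {

    /** Flat, BPMS-friendly result instead of the raw back-end message. */
    class InvoiceResult {
        public final boolean success;
        public final String invoiceId;

        public InvoiceResult(boolean success, String invoiceId) {
            this.success = success;
            this.invoiceId = invoiceId;
        }
    }

    /**
     * One coarse-grained operation: the implementation builds the back-end
     * request, performs the call and maps the reply, so the process itself
     * needs no XPath/XQuery message surgery at all.
     */
    InvoiceResult createInvoice(String customerId, double amount);
}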

Conclusion

What is your experience with business process automation? Did you try any other system to automate it? Share your experience below in the comment section.