The objectives of performance testing BOSS are:
* to understand the throughput that can be sustained
* to investigate performance behaviour for simple and complex workflows and data sizes
* to allow for basic performance tuning of various configurations
* (eventually) to serve as a basis for investigating behaviour under various failure scenarios
We should have mechanisms to:
* launch processes
* monitor participants
* monitor the engine
* analyse results
The launch system should support:
* various launch rates
* a configurable launch process and start condition
* (eventually) a replay-like capability
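A minimal sketch of such a rate-controlled launcher, assuming the actual BOSS launch call is supplied as a block (the class and its interface here are hypothetical, not existing BOSS code):

```ruby
# Launches processes at a fixed target rate; the block stands in for
# the real BOSS process-launch call.
class RateLauncher
  def initialize(rate_per_sec:, &launch)
    @interval = 1.0 / rate_per_sec
    @launch = launch
  end

  # Launch `count` processes, sleeping between launches to hold the rate.
  def run(count)
    count.times do |i|
      started = Time.now
      @launch.call(i)
      elapsed = Time.now - started
      sleep(@interval - elapsed) if elapsed < @interval
    end
  end
end

# Usage: collect the indices of launched processes at ~100/sec.
launched = []
RateLauncher.new(rate_per_sec: 100) { |i| launched << i }.run(5)
```

A configurable start condition could then be layered on top, e.g. by blocking in the block until the condition holds before calling into BOSS.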
The participant monitor logs when a participant is started and when it responds.
The engine monitor logs when processes start, when participants are called/respond and when a process ends.
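Both monitors could share a simple timestamped event log; the sketch below is an assumption about shape only (the event names and `EventLog` class are illustrative, not BOSS's actual log format):

```ruby
# Timestamped event log shared by the participant and engine monitors.
class EventLog
  Entry = Struct.new(:time, :event, :subject)

  def initialize
    @entries = []
  end

  # Record that `event` (e.g. :dispatched, :replied) happened to `subject`.
  def record(event, subject)
    @entries << Entry.new(Time.now, event, subject)
  end

  # Elapsed seconds between two events for the same subject, e.g. between
  # a participant being dispatched and it replying; nil if either is missing.
  def elapsed(subject, from:, to:)
    a = @entries.find { |e| e.subject == subject && e.event == from }
    b = @entries.find { |e| e.subject == subject && e.event == to }
    b.time - a.time if a && b
  end
end
```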
The analysis needs to answer questions related to the objectives; initially this can be very basic.
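As an example of how basic the initial analysis can be, sustained throughput falls out of the logged start/end times directly (the `[start, end]` pair input format here is an assumption):

```ruby
# Sustained throughput (processes per second) from logged process spans,
# each span being a [start_time, end_time] pair of float seconds.
def throughput(spans)
  return 0.0 if spans.empty?
  first_start = spans.map(&:first).min
  last_end    = spans.map(&:last).max
  spans.size / (last_end - first_start)
end
```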
BOSS may use native participants, AMQP participants, or a mix of the two.
Most of this should be done in Ruby and should fit into the Ruby spec system. The AMQP participants would probably be written in Python.
Progress updates are noted on the status page.