Jive Release Blog

Authored by: aron.racho

The same practices used for performance testing other enterprise applications also apply to Clearspace. Here is a 10,000 ft overview:


(1) Build a test plan.

In it, determine the hardware/software characteristics of your test platform, which should be logically scaled down from the target production environment (if it isn't the target production environment itself). Similarly, plan the size of the community you are modeling, considering factors such as the size of the existing dataset (users, blogs, documents, messages, threads) and the number of concurrently active users you are going to simulate. It is also crucial to determine the usage model for your installation: the mixture of activities these users will perform (read documents, publish documents, read messages, log in, etc.), the rate of those activities, and the amount of think-time delay between them. Finally, decide on the performance acceptability criteria (for example, all activities must complete in 5 seconds or less).
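As a sketch, the usage model described above can be captured as plain data before any tooling is chosen. All names and numbers here are illustrative placeholders, not recommendations:

```python
# Hypothetical test-plan usage model: activity mix, concurrency,
# think time, and the SLA, expressed as plain data.

USAGE_MODEL = {
    # activity name -> fraction of all simulated actions
    "read_document":    0.40,
    "read_message":     0.30,
    "login":            0.10,
    "publish_document": 0.10,
    "post_message":     0.10,
}

CONCURRENT_USERS = 200        # simulated concurrently active users
THINK_TIME_SECONDS = (5, 30)  # random delay range between activities
SLA_SECONDS = 5.0             # acceptability criterion: every activity <= 5 s

# Sanity check: the activity mix must account for all traffic.
assert abs(sum(USAGE_MODEL.values()) - 1.0) < 1e-9
```

Keeping the plan in a reviewable form like this makes it easy to vary the mix later (step 3's "try variations" tip) without rewriting test scripts.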


(2) Tune the test environment.

The target environment should be tuned to be as performant as possible. One way is to subject the environment to a steady, repeatable amount of load, tweaking the various components so that bottlenecks are discovered and mitigated. This load should model the 'average' amount of traffic you expect in production (or some scaled-down amount).


Some of the usual bottleneck suspects are the number and size of worker threads available to the web and application servers; database caches and connection parameters; database connection pool size; and interference from outside network traffic.
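For the connection pool in particular, a back-of-envelope starting point (before measuring) is Little's law: concurrent connections are roughly the request arrival rate times how long each request holds a connection. This helper is a hypothetical sketch, not a Clearspace setting:

```python
import math

def estimated_pool_size(requests_per_second, avg_db_seconds_per_request,
                        headroom=1.25):
    """Back-of-envelope pool sizing via Little's law:
    concurrent connections ~= arrival rate * time a connection is held,
    plus some headroom for bursts. Tune from this starting point."""
    return math.ceil(requests_per_second * avg_db_seconds_per_request * headroom)

# e.g. 50 requests/s, each holding a connection for 0.1 s on average:
estimated_pool_size(50, 0.1)   # -> 7
```

Treat the result as a first guess to refine under load, not a final value; real pools also need to cover background jobs and burst traffic.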


Tuning the Clearspace caches is especially important in this step. Cache efficiency is the key here -- no caches should be marked inefficient during your tuning efforts. Increase the default cache sizes until no cache is marked inefficient; otherwise your database will be hammered, and overall performance will suffer.
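The underlying idea is the cache hit ratio: a cache whose hit ratio falls below some threshold sends too much traffic to the database and is a candidate for a larger size. A minimal sketch (the 0.90 threshold is an assumed example, not a Clearspace-defined value):

```python
def cache_hit_ratio(hits, misses):
    """Fraction of lookups served from the cache."""
    total = hits + misses
    return hits / total if total else 0.0

def is_inefficient(hits, misses, threshold=0.90):
    """A cache below the threshold is a tuning candidate:
    grow its size until the hit ratio recovers."""
    return cache_hit_ratio(hits, misses) < threshold

is_inefficient(9_000, 1_000)  # 0.90 hit ratio -> False (efficient)
is_inefficient(7_000, 3_000)  # 0.70 hit ratio -> True  (grow this cache)
```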


(3) Subject the environment to increasing amounts of load.

Once your environment is tuned to average load, systematically subject it to larger and larger amounts of load, measuring the impacts on user experience and environment resources. Typically, the impact of increasing amounts of load would be measured against a single 'benchmark' user; the other 'load generating' users may or may not be measured.
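A systematic ramp like the one above is easiest to reason about as a step schedule: hold each load level long enough to collect stable benchmark-user timings, then add more virtual users. A hypothetical sketch, with placeholder numbers:

```python
def step_load_schedule(start_users, step, steps, hold_seconds):
    """Yield (virtual_users, hold_seconds) pairs for a ramped load test:
    start at the tuned 'average' load, then add users step by step while
    a single benchmark user's response times are recorded separately."""
    return [(start_users + i * step, hold_seconds) for i in range(steps)]

# e.g. start at 200 users, add 50 per step, hold each level for 10 minutes:
step_load_schedule(200, 50, 4, 600)
# -> [(200, 600), (250, 600), (300, 600), (350, 600)]
```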


A general rule of thumb is that if any physical server exceeds 80% utilization, or memory starts swapping, or any of the performance acceptability criteria (aka SLAs) fail, the system has reached saturation. By the end of the exercise, you should have a good idea of how far beyond the 'average' amount of load your environment can go under the version of Clearspace you are running.
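The rule of thumb above can be sketched as a single check applied at each load step (the metric names are illustrative; feed it whatever your monitoring actually reports):

```python
def saturated(cpu_utilization, swapping, response_times, sla_seconds=5.0):
    """True when the system has reached saturation per the rule of thumb:
    any server over 80% utilization, memory swapping, or any measured
    activity breaking the SLA."""
    return (cpu_utilization > 0.80
            or swapping
            or any(t > sla_seconds for t in response_times))

saturated(0.75, False, [1.2, 4.9])   # -> False (headroom remains)
saturated(0.75, False, [1.2, 6.1])   # -> True  (SLA breach)
saturated(0.85, False, [1.2, 2.0])   # -> True  (CPU over 80%)
```

The last load level at which this returns False is your measured capacity margin over average load.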


Some miscellaneous tips:

  • Try variations of your test, and compare against the baseline. Once you have a baseline for how your environment behaves under one usage model, see if performance changes under different usage models (e.g. make your tests more read-heavy if they are write-heavy); vary the database, to see if the type of database makes a difference; vary the clustering, to see if adding more nodes improves performance at your levels of load.

  • Make sure your load-generating applications do not themselves artificially inflate reports of performance degradation: some load-testing tools lose reporting accuracy as load increases. Take a distributed approach to load generation rather than relying on a single monolithic load generator, especially at high loads.

  • You should typically have the tester and target environment on the same subnet.


Happy testing!
