
Alarm Flooding Control with Event Clustering Using Spark Streaming | Mawazo

You show up at work in the morning and open your email to find 100 alarm emails in your inbox, all for the same error from an application running on some server, within a short time window of one minute. You are off to a bad start, struggling to find your other emails. This unpleasant experience motivated me to come up with a solution to stop the deluge of identical alarm emails in a small time window.

When there is a burst of events, it is essentially a cluster in the temporal dimension. If we can identify the clusters from the real-time stream of events, then we can send only one or a few alarms per cluster, instead of one alarm per event. If the cluster extends over a long period, we could send multiple alarms.

I have implemented the solution in Spark Streaming, and it is available in my OSS project ruscello on GitHub. The core stream processing algorithms are implemented in plain Java in another OSS project of mine called hoidla, so the solution can easily be ported to any other streaming platform, e.g. Storm or Flink.

Event Burst is a Temporal Cluster

As I alluded to earlier, a burst of events is essentially a cluster in the temporal dimension. A cluster is characterized by the following two properties.

  1. Data points within a cluster are maximally cohesive, i.e. they are close to each other. For time series data, cohesiveness is defined by the gap between successive data points.
  2. Data points within a cluster are maximally separated from data points in other clusters. Cluster separation is not going to be of much concern for us.

The time gap between successive events is an important parameter. We will detect clusters using various statistics based on the time gap. The algorithms are listed below. All computations are done over a window with a specified time span.

  1. Number of occurrences: the number of occurrences in the window is above a threshold.
  2. Average interval: the average time interval between events is below a threshold.
  3. Maximum interval: the maximum time interval between events is below a threshold.

The first two conditions are essentially the same, expressed in different ways. The conditions can be joined together conjunctively (AND) or disjunctively (OR). A score is returned based on the evaluation of the cluster conditions.
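As an illustration, the window statistics and the joined conditions can be sketched in Python. This is a minimal reconstruction of the idea, not hoidla's actual API; the threshold parameter names here are my own.

```python
from statistics import mean

def cluster_score(timestamps, min_count, max_avg_gap, max_max_gap, conjunctive=True):
    """Evaluate the three cluster conditions over one window of event timestamps.

    timestamps: sorted event times (in seconds) within the window.
    The thresholds (min_count, max_avg_gap, max_max_gap) are illustrative
    names, not the actual configuration parameters of ruscello.
    """
    if len(timestamps) < 2:
        return 0.0
    # time gaps between successive events
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    conditions = [
        len(timestamps) >= min_count,   # occurrence count above threshold
        mean(gaps) <= max_avg_gap,      # average gap below threshold
        max(gaps) <= max_max_gap,       # maximum gap below threshold
    ]
    met = all(conditions) if conjunctive else any(conditions)
    # simple score: fraction of conditions met, zero if the join fails
    return sum(conditions) / len(conditions) if met else 0.0
```

For example, five events one second apart easily satisfy all three conditions, while three events tens of seconds apart satisfy none.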

The third condition implicitly places an upper bound on the variance of the time gap. It would be better to use the variance or standard deviation directly. I will make that enhancement when I get a chance.

When the conditions are joined conjunctively, a weight can optionally be specified for each condition, and a weighted score can be calculated.
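The weighted score is just a weight-normalized combination of the per-condition results; a small sketch, assuming each condition yields a score in [0, 1]:

```python
def weighted_cluster_score(condition_results, weights):
    """Weighted score for conjunctively joined cluster conditions.

    condition_results: per-condition scores in [0, 1] (1.0 = condition met).
    weights: relative importance of each condition; normalized here.
    This is an illustrative formula, not ruscello's exact scoring code.
    """
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, condition_results)) / total
```

So with weights (2, 1, 1) and only the first two conditions met, the score is (2 + 1) / 4 = 0.75.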

The decision to generate an alarm is based on the score. An alarm is always generated at the onset of an event cluster. How many additional alarms are generated for an event cluster depends on two parameters, according to whether the event stream is in cluster mode or not.

At any time, an event stream is in one of the two modes below. It is in cluster mode when the cluster conditions are met.

  1. Cluster
  2. Non cluster

In cluster mode, alarm generation is controlled by the window sliding step. The time-bound window has a sliding step specified through the parameter eventCluster.window.timeStep. Computation is performed when the window is full, i.e. every sliding step interval. For example, if we have a window spanning 20 sec with a sliding interval of 10 sec, the cluster score computation will be performed every 10 sec.

In non-cluster mode, alarm generation is controlled through the parameter eventCluster.window.minEventTimeInterval. The time gap between successive alarms will not be less than the specified time interval.
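The two-mode throttling logic can be sketched as follows. The parameter names mirror the configuration described above, but the control flow is my own illustrative reconstruction, not the ruscello implementation:

```python
class AlarmController:
    """Throttles alarms for one event stream (one instance per key)."""

    def __init__(self, window_time_step, min_alarm_interval):
        self.window_time_step = window_time_step      # eventCluster.window.timeStep
        self.min_alarm_interval = min_alarm_interval  # eventCluster.window.minEventTimeInterval
        self.last_alarm_time = None
        self.in_cluster = False

    def should_alarm(self, now, cluster_score, score_threshold=0.9):
        in_cluster = cluster_score >= score_threshold
        if in_cluster and not self.in_cluster:
            fire = True                                  # always alarm at cluster onset
        elif in_cluster:
            # at most one alarm per window slide step while the cluster lasts
            fire = now - self.last_alarm_time >= self.window_time_step
        else:
            # ambient mode: enforce the minimum gap between successive alarms
            fire = (self.last_alarm_time is None or
                    now - self.last_alarm_time >= self.min_alarm_interval)
        self.in_cluster = in_cluster
        if fire:
            self.last_alarm_time = now
        return fire
```

With a 5 sec slide step and a 60 sec minimum interval, a cluster produces alarms about 5 sec apart, while ambient errors produce at most one alarm per minute.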

Through these two parameters, alarm flooding can be controlled by significantly reducing the number of alarms.

Application Events

We are considering events based on the server logs of some application. It is assumed that the server logs are preprocessed to generate records with the following content. The error code and time stamp are generally found in logs; logs may also include the host name.

  • Host ID: host name or IP address of the server
  • Application ID: unique ID for the application running on the server
  • Error code: error code from the server log
  • Time stamp: time stamp corresponding to the error event

The first three fields constitute the key in the Spark processing. For each unique combination of (host ID, application ID, error code) we want to process the event stream and detect event clusters.
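Splitting a preprocessed record into a Spark-style (key, value) pair might look like this. The comma-separated field order is an assumption; adjust it to your preprocessor's actual output:

```python
def to_key_value(record):
    """Split a preprocessed log record into a (key, value) pair.

    Assumes a comma-separated record: hostID,appID,errorCode,timestamp.
    One window is maintained per unique key in the streaming state.
    """
    host_id, app_id, error_code, timestamp = record.strip().split(",")
    key = (host_id, app_id, error_code)
    return key, int(timestamp)
```

Using the key that appears in the test output below, a record like "50ZQTJZM,JZ7CD5,60035,1466208000" maps to the key (50ZQTJZM,JZ7CD5,60035) with the time stamp as the value.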

Spark Streaming

The implementation is in the Scala object EventCluster. It can take its input stream from the following sources. The choice is made through a configuration parameter.

  1. Kafka
  2. HDFS
  3. Socket

I have used the text socket source for my testing. The implementation relies on Spark Streaming state management and uses the mapWithState() function. For each unique (host ID, application ID, error code) there is a time-bound window that is part of the Spark Streaming state. HDFS is used for state checkpointing.
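Conceptually, the state that mapWithState() maintains is a time-bound window per key. A plain-Python stand-in for that per-key state (not the hoidla window class, just the idea):

```python
from collections import defaultdict, deque

class PerKeyWindows:
    """Per-key time-bound event windows, mimicking what the Spark
    Streaming state holds for each (host ID, application ID, error code).
    """

    def __init__(self, time_span):
        self.time_span = time_span
        self.windows = defaultdict(deque)   # key -> event timestamps

    def add(self, key, timestamp):
        win = self.windows[key]
        win.append(timestamp)
        # expire events older than the window time span
        while win and timestamp - win[0] > self.time_span:
            win.popleft()
        return list(win)
```

Each arriving event updates only its own key's window, which is exactly why the key choice determines the granularity of alarm control.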

A natural question that might come up is why the window functions offered by Spark are not being used. They are just too simplistic and do not meet the requirements for this use case. Instead, I have used the time-bound window from hoidla, which is a Java library for real-time streaming applications. It provides the following features.

  • Various types of windows and associated processing algorithms
  • Various approximate aggregation algorithms

Here is some alarm output. This test was done with a stream of 500 events, consisting of multiple event clusters. This output fragment shows a cluster involving the host, application and error combination (50ZQTJZM,JZ7CD5,60035). Alarms are generated about 5 sec apart, because the window slide step is configured to be 5 sec.


Here is an output fragment from the non-clustered, ambient state of event generation. The alarms are further apart, as determined by the minimum alarm time interval parameter.


The number of alarms from this run is less than 50, which translates to a 90% reduction in the number of alarms. Without the alarm-controlling logic, there would have been one alarm per event. The number of alarms can be reduced further by appropriately tuning the two configuration parameters I alluded to earlier.

It is conceivable that you may want to control the alarm generation rate depending on the application and error code. One way to do that would be to make these configuration parameters application and error code dependent, instead of global.
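Such per-application overrides could be as simple as a lookup table with a fall-back to the global defaults. This is purely illustrative; the actual configuration is global in the current implementation:

```python
def alarm_params(app_id, error_code, overrides, defaults):
    """Return the alarm-control parameters for an (application, error code)
    pair, falling back to the global defaults when no override exists.
    The parameter names in the dictionaries are hypothetical.
    """
    return overrides.get((app_id, error_code), defaults)
```

A critical error code could then get, say, a tighter slide step than the global setting.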

Although I have used error code as part of the key, you could just use host ID and application ID as the key if you don’t care about the error codes.

Event Stream Generation

I have used a Python script to generate events. It also serves as a socket server, serving the events to Spark Streaming. Here is some console output from the script.

connected with
connected with
connected with
num of messages sent: 1
connected with
starting cluster
connected with
num of messages sent: 4
connected with
num of messages sent: 8
connected with
num of messages sent: 7
connected with
num of messages sent: 8
connected with
num of messages sent: 8
connected with
num of messages sent: 8
connected with
num of messages sent: 7

It shows the number of messages sent to Spark each time the Spark receiver calls. In this output we see an event cluster forming, as the number of messages sent increases sharply.
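The core of such a generator is simple: ambient events arrive with large random gaps, while an embedded burst arrives with small ones. A sketch of that generation logic (the real script additionally acts as a socket server feeding Spark Streaming; all parameter names here are mine):

```python
import random

def generate_events(num_events, ambient_gap=10.0, cluster_gap=0.5,
                    cluster_start=100, cluster_size=50, seed=42):
    """Generate (timestamp, record) events with one embedded burst.

    Ambient events arrive roughly ambient_gap seconds apart; events with
    index cluster_start..cluster_start+cluster_size arrive roughly
    cluster_gap seconds apart, forming a temporal cluster.
    """
    rng = random.Random(seed)
    now = 0.0
    events = []
    for i in range(num_events):
        in_cluster = cluster_start <= i < cluster_start + cluster_size
        gap = cluster_gap if in_cluster else ambient_gap
        now += rng.uniform(0.5 * gap, 1.5 * gap)
        events.append((now, "50ZQTJZM,JZ7CD5,60035,%d" % int(now)))
    return events
```

Feeding such a stream to the Spark job should trigger the cluster-mode alarms only inside the burst region.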

Other Use Cases

This solution can be used for many other similar use cases. For example, you may have an IoT application where you want to detect outliers in the sensor data and trigger alarms when outliers are detected.

You are up against the same problem and could adopt the same solution. If the outlier events consist of (sensor ID, timestamp), you could use the sensor ID as the key. A separate window will be created and managed for each sensor.

Realtime Streaming Library

The following blogs of mine provide more details on the capabilities of hoidla. I have implemented various Storm and Spark Streaming based real-time use cases, leveraging the stream processing library in hoidla.

Since it’s a plain java library, it can be used from any application written in JVM language, whether it’s running under Spark Streaming, Flink or Storm.

Wrapping Up

We have gone through a simple and intuitive solution for controlling alarm flooding using Spark Streaming. Here is the tutorial document to run this use case. Here is the configuration file used for this use case.

