A security analyst can configure and leverage detection rules that infer the normal statistical behaviour of their information system in order to detect abnormal behaviours.
We have an experimental Anomaly Detection feature that is now available in production behind a feature flag.
The first real-world experiments with this detection engine have been disappointing, mainly because the predictions were not good enough. Regular levels are therefore often flagged as anomalies, which results in too many false-positive alerts from our demanding operational perspective. For this reason, it was decided not to release the feature for now.
The proposed solution is to replace the current prediction library with a new implementation that provides better results. The new implementation should make sure that usual aggregation levels cannot be flagged as anomalies.
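As a rough illustration only (the `detector` callable and the sample data are hypothetical, not the actual library interface), the requirement amounts to a check along these lines:

```python
import numpy as np

def raises_no_alert_on_usual_levels(detector, history: np.ndarray,
                                    usual_points: np.ndarray) -> bool:
    """`detector(history, point) -> bool` stands in for the hypothetical
    prediction API, returning True when `point` is considered anomalous.
    Points sampled from usual aggregation levels must never raise an alert."""
    return not any(detector(history, point) for point in usual_points)
```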
georges_bossert
We have successfully updated the prediction engine of the anomaly detection feature.
The fixed issues are:
Signal preprocessing so that the engine no longer relies only on the raw points (resampling, handling of missing data, ...); see the preprocessing sketch after this list
Drastic reduction in the number of false positives (better prediction quality plus an adjustable prediction sensitivity)
Seasonality that is consistent with weekday versus weekend activity
An improved test to determine whether the selected signal is predictable
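To give an idea of what the preprocessing step could look like, here is a minimal sketch using pandas; the `preprocess` name, the 5-minute frequency and the time-based interpolation are illustrative assumptions, not the engine's actual parameters:

```python
import pandas as pd

def preprocess(raw: pd.Series, freq: str = "5min") -> pd.Series:
    """Resample irregular raw points onto a fixed time grid and fill the
    gaps left by missing data, so the predictor never sees holes."""
    regular = raw.resample(freq).mean()        # align onto a regular grid
    return regular.interpolate(method="time")  # fill missing buckets

# Usage: raw points indexed by timestamp, irregular and with gaps.
raw = pd.Series(
    [120.0, 118.0, 131.0],
    index=pd.to_datetime(["2023-01-01 00:00", "2023-01-01 00:07", "2023-01-01 00:20"]),
)
clean = preprocess(raw)
```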
The current prediction engine is based on two predictive models:
a model to predict the most likely value for a given datetime
a model to predict the normal scatter around the most likely value for a given datetime (see the sketch below for how the two combine into an alert decision).
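As a minimal sketch of how such a pair of models can be turned into an alert decision, assuming per-bucket mean and standard deviation as stand-ins for the two real models and weekday/weekend plus hour-of-day buckets for the seasonality:

```python
import pandas as pd

def fit_seasonal_models(history: pd.Series):
    """Learn, for each (weekend, hour-of-day) bucket, the most likely value
    and the normal scatter around it -- stand-ins for the two real models."""
    grouped = history.groupby([history.index.dayofweek >= 5, history.index.hour])
    return grouped.mean(), grouped.std()

def is_anomaly(expected: pd.Series, scatter: pd.Series, when: pd.Timestamp,
               observed: float, sensitivity: float = 3.0) -> bool:
    """Flag a point only when it falls outside the predicted normal band."""
    bucket = (when.dayofweek >= 5, when.hour)
    return abs(observed - expected[bucket]) > sensitivity * scatter[bucket]
```

A point is flagged only when it leaves the predicted normal band, which is what keeps usual aggregation levels from raising false-positive alerts; the `sensitivity` factor is the adjustable knob mentioned above.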
In the coming days, the QA team will fix the last remaining small UI/UX issues in this feature.
We expect this final cleanup to be completed in the coming weeks.
Find below a few examples produced by our new engine.
Very interesting feature.