HowTo: Create your own FrequencyDetector
The following tutorial assumes that you are familiar with the overall principles of aminer component development, which includes cloning and adapting the sources and re-installing the aminer. If this is the first time you develop aminer code, you may want to start with the HowTo: Create your own SequenceDetector wiki page that contains detailed explanations on the steps that are required to set up a new detector.
Log events usually occur within certain patterns. In many types of systems, especially those where mostly autonomous processes and few human actors are present, log events are expected to occur with more or less steady regularity. This means that we can count how often certain events appear in a specific time interval, and expect that in any interval of the same size a similar number of log lines corresponding to that event type will occur. If at some point the number of these log lines increases or decreases drastically, we have to assume that something in the system has changed - an anomaly has occurred. The FrequencyDetector outlined in this wiki is capable of performing this detection task automatically.
Consider the following Apache Access logs.
192.168.10.190 - - [29/Feb/2020:15:00:00 +0000] "GET /services/portal/ HTTP/1.1" 200 7499 "-" "-"
192.168.10.190 - - [29/Feb/2020:15:00:25 +0000] "GET /nag/ HTTP/1.1" 200 8891 "-" "-"
192.168.10.190 - - [29/Feb/2020:15:00:55 +0000] "POST /nag/task/save.php HTTP/1.1" 200 5220 "-" "-"
192.168.10.190 - - [29/Feb/2020:15:01:12 +0000] "GET /services/portal/ HTTP/1.1" 200 1211 "-" "-"
192.168.10.190 - - [29/Feb/2020:15:01:36 +0000] "GET /kronolith/ HTTP/1.1" 200 24703 "-" "-"
192.168.10.190 - - [29/Feb/2020:15:01:43 +0000] "GET /services/portal/ HTTP/1.1" 200 3456 "-" "-"
192.168.10.190 - - [29/Feb/2020:15:02:29 +0000] "GET /services/portal/ HTTP/1.1" 200 4526 "-" "-"
192.168.10.4 - - [29/Feb/2020:15:02:45 +0000] "GET /services/portal/ HTTP/1.1" 200 5326 "-" "-"
192.168.10.190 - - [29/Feb/2020:15:03:01 +0000] "GET /nag/ HTTP/1.1" 200 1131 "-" "-"
192.168.10.190 - - [29/Feb/2020:15:03:10 +0000] "POST /nag/task/save.php HTTP/1.1" 200 8754 "-" "-"
192.168.10.190 - - [29/Feb/2020:15:03:48 +0000] "GET /services/portal/ HTTP/1.1" 200 1211 "-" "-"
192.168.10.4 - - [29/Feb/2020:15:03:50 +0000] "GET /nag/ HTTP/1.1" 200 6616 "-" "-"
192.168.10.190 - - [29/Feb/2020:15:03:53 +0000] "GET /kronolith/ HTTP/1.1" 200 3452 "-" "-"
192.168.10.190 - - [29/Feb/2020:15:05:17 +0000] "GET /nag/ HTTP/1.1" 200 25623 "-" "-"
Most of the logs originate from host 192.168.10.190. Looking at the timestamps, it becomes apparent that in the first minute - 15:00:00 to 15:01:00 - three events occur. The same number of events occurs in the time span 15:01:00 to 15:02:00. Then, only a single line from that host and one more line from host 192.168.10.4 occur between 15:02:00 and 15:03:00. Next, 192.168.10.190 generates four lines between 15:03:00 and 15:04:00, followed by one more line afterwards. This is obviously a highly simplified example, but assuming that the steady behavior in the first two minutes with three lines per minute is considered normal behavior, we could expect this frequency to continue and be surprised to see only a single line or four lines per minute afterwards. We will use this data to test our detector, so store them in the file /home/ubuntu/apache.log.
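To make the counting idea concrete before turning to any aminer code, consider the following minimal standalone sketch (plain Python for illustration only, not part of the aminer code base). It counts all events per 60-second window, with the timestamps of the example logs expressed as seconds relative to 15:00:00; the actual detector will additionally distinguish events by host.
# Standalone illustration of per-window event counting (not aminer code).
# Timestamps are seconds relative to 15:00:00, taken from the example logs above.
timestamps = [0, 25, 55, 72, 96, 103, 149, 165, 181, 190, 228, 230, 233, 317]
window_size = 60
counts = {}
for t in timestamps:
    window = t // window_size  # index of the 60-second window this event falls into
    counts[window] = counts.get(window, 0) + 1
print(counts)  # {0: 3, 1: 3, 2: 2, 3: 5, 5: 1} - window 4 (15:04:00-15:05:00) is empty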
Just as for the SequenceDetector, we start with an empty template that we store in the file /home/ubuntu/aminer/source/root/usr/lib/logdata-anomaly-miner/aminer/analysis/FrequencyDetector.py which looks as follows.
import time
import os
import logging
from aminer import AminerConfig
from aminer.AminerConfig import STAT_LEVEL, STAT_LOG_NAME
from aminer.AnalysisChild import AnalysisContext
from aminer.events.EventInterfaces import EventSourceInterface
from aminer.input.InputInterfaces import AtomHandlerInterface
from aminer.util.TimeTriggeredComponentInterface import TimeTriggeredComponentInterface
from aminer.util import PersistenceUtil


class FrequencyDetector(AtomHandlerInterface, TimeTriggeredComponentInterface, EventSourceInterface):
    """This class creates events when the frequencies of log events deviate from previously observed frequencies."""

    time_trigger_class = AnalysisContext.TIME_TRIGGER_CLASS_REALTIME

    def __init__(self, aminer_config, target_path_list, anomaly_event_handlers, window_size=600, confidence_factor=0.5, persistence_id='Default',
                 learn_mode=False, output_logline=True):
        """Initialize the detector."""
        self.target_path_list = target_path_list
        self.anomaly_event_handlers = anomaly_event_handlers
        self.learn_mode = learn_mode
        self.next_persist_time = None
        self.output_logline = output_logline
        self.aminer_config = aminer_config
        self.window_size = window_size
        self.confidence_factor = confidence_factor
        self.persistence_id = persistence_id
        self.persistence_file_name = AminerConfig.build_persistence_file_name(aminer_config, self.__class__.__name__, persistence_id)
        PersistenceUtil.add_persistable_component(self)

    def receive_atom(self, log_atom):
        """Receive a log atom from a source."""
        print('The log atom timestamp is ' + str(log_atom.atom_time))

    def get_time_trigger_class(self):
        """
        Get the trigger class this component should be registered for.
        This trigger is used only for persistence, so real-time triggering is needed.
        """
        return AnalysisContext.TIME_TRIGGER_CLASS_REALTIME

    def do_timer(self, trigger_time):
        """Check if the current ruleset should be persisted."""
        if self.next_persist_time is None:
            return 600
        delta = self.next_persist_time - trigger_time
        if delta < 0:
            self.do_persist()
            delta = 600
        return delta

    def do_persist(self):
        """Immediately write persistence data to storage."""
        pass

    def allowlist_event(self):
        """Allowlist an event."""
        pass
Note that we placed a single print statement in the receive_atom method, which processes every log line and outputs its timestamp.
Add the parameter definitions in the AnalysisNormalisationSchema.py file. We have two new parameters: (1) window_size, which defines the length of the time window in which we want to count the events, and (2) confidence_factor, which we will use to determine whether a deviation from the expected event frequency is an anomaly that needs to be reported or a random fluctuation that is too weak to be of interest. Note that in later versions of the AMiner (>= 2.3.1), these parameters already exist - just make sure that they are there and do not add them a second time.
'window_size': {'type': ['integer', 'float'], 'default': 600},
'confidence_factor': {'type': 'float', 'default': 0.33},
In the AnalysisValidationSchema.py, add the block of parameters for this detector as follows.
{
    'id': {'type': 'string', 'nullable': True},
    'type': {'type': 'string', 'allowed': ['FrequencyDetector'], 'required': True},
    'paths': {'type': 'list', 'schema': {'type': 'string'}, 'nullable': True},
    'window_size': {'type': ['integer', 'float'], 'min': 0.001},
    'confidence_factor': {'type': 'float', 'min': 0, 'max': 1},
    'persistence_id': {'type': 'string'},
    'learn_mode': {'type': 'boolean'},
    'output_logline': {'type': 'boolean'},
    'output_event_handlers': {'type': 'list', 'nullable': True},
},
In the YamlConfig.py, the following code must be inserted in the section where all analysis components are placed:
elif item['type'].name == 'FrequencyDetector':
    tmp_analyser = func(analysis_context.aminer_config, item['paths'], anomaly_event_handlers, persistence_id=item['persistence_id'],
                        window_size=item['window_size'], confidence_factor=item['confidence_factor'], learn_mode=learn,
                        output_logline=item['output_logline'])
We are then able to add the detector to our standard config.yml, which should look like this:
LearnMode: True # optional
LogResourceList:
  - 'file:///home/ubuntu/apache.log'
Parser:
  - id: 'START'
    start: True
    type: ApacheAccessModel
    name: 'apache'
Input:
  timestamp_paths: ["/accesslog/time"]
Analysis:
  - type: "FrequencyDetector"
    paths: ["/accesslog/host"]
    window_size: 60
    confidence_factor: 0.5
    learn_mode: True
    output_logline: True
EventHandlers:
  - id: "stpe"
    json: true
    type: "StreamPrinterEventHandler"
Compared to the SequenceDetector configuration, we use our newly created input test file /home/ubuntu/apache.log. Moreover, we define the FrequencyDetector component, set the observed path to /accesslog/host so that we count the occurrences of each event per IP address as mentioned in the example above, set the window_size to 60 seconds, and set the confidence_factor to 0.5. Note that it is important that the timestamp_paths parameter of the Input section contains the path to the DateTimeModelElement of the parser, otherwise the time of the log atoms is not set correctly.
As with the SequenceDetector, we install the aminer with the changed source code and then run it using the following commands (do not forget that this is required every time you want your changes to take effect):
root@user-1:/home/ubuntu/aminer# sudo ansible-playbook /home/ubuntu/aminer/playbook.yml
root@user-1:/home/ubuntu/aminer# aminer -C -f
The aminer will output the following:
The log atom timestamp is 1582988400
The log atom timestamp is 1582988425
The log atom timestamp is 1582988455
The log atom timestamp is 1582988472
The log atom timestamp is 1582988496
The log atom timestamp is 1582988503
The log atom timestamp is 1582988549
The log atom timestamp is 1582988565
The log atom timestamp is 1582988581
The log atom timestamp is 1582988590
The log atom timestamp is 1582988628
The log atom timestamp is 1582988630
The log atom timestamp is 1582988633
The log atom timestamp is 1582988717
As expected, we receive the timestamps of all log lines in Unix format.
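To verify that these Unix timestamps match the times in the log file, you can convert one of them manually, for example in a Python shell (standard library only):
from datetime import datetime, timezone

# 1582988400 is the timestamp of the first log line.
print(datetime.fromtimestamp(1582988400, tz=timezone.utc))
# Output: 2020-02-29 15:00:00+00:00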
We can now make use of these timestamps to determine when the time interval window_size has passed. Note that it is recommended to use the timestamps extracted from the log lines rather than the real time (which could be obtained through the do_timer method). The reason for this is that we want to analyze the logs in hindsight as if they were generated in live operation. Obviously, the aminer processes the lines much faster, i.e., the sample logs that span multiple minutes are parsed in less than a second. Accordingly, forensic log analysis requires that log time rather than real time is employed.
For the implementation, we need a variable that keeps track of when the time specified by window_size has passed. Create the following variable in the __init__ method:
self.next_check_time = None
Then update the code in the receive_atom method as follows:
def receive_atom(self, log_atom):
    print('The log atom timestamp is ' + str(log_atom.atom_time))
    if self.next_check_time is None:
        # First processed log atom, initialize next check time.
        self.next_check_time = log_atom.atom_time + self.window_size
    elif log_atom.atom_time >= self.next_check_time:
        # Log atom exceeded next check time; time window is complete.
        self.next_check_time = self.next_check_time + self.window_size
        if log_atom.atom_time >= self.next_check_time:
            print('Skipped at least one time window.')
            self.next_check_time = log_atom.atom_time + self.window_size
        else:
            print(str(self.window_size) + ' seconds have passed...')
Initially, the variable self.next_check_time is None and is thus set to the timestamp of the first log atom plus the size of the time window, i.e., self.next_check_time marks the end of the monitored interval in which we want to count events. For each subsequent log atom, we check whether its timestamp exceeds the end of that interval. If this is the case, we define the new interval by moving its end one window_size ahead. However, if this new ending time is still lower than the timestamp of the currently processed log atom, we know that we missed at least one interval; in this case, we set next_check_time to window_size ahead of the timestamp of the currently processed log atom. Run the aminer to obtain the following output:
The log atom timestamp is 1582988400
The log atom timestamp is 1582988425
The log atom timestamp is 1582988455
The log atom timestamp is 1582988472
60 seconds have passed...
The log atom timestamp is 1582988496
The log atom timestamp is 1582988503
The log atom timestamp is 1582988549
60 seconds have passed...
The log atom timestamp is 1582988565
The log atom timestamp is 1582988581
60 seconds have passed...
The log atom timestamp is 1582988590
The log atom timestamp is 1582988628
The log atom timestamp is 1582988630
The log atom timestamp is 1582988633
The log atom timestamp is 1582988717
Skipped at least one time window.
As expected, we get the output that 60 seconds have passed after the first log event that exceeds the interval length. At the end, we receive the message that at least one time window was skipped, which is reasonable since no log events occur between 15:04:00 and 15:05:00.
Now that we know our code works, let's remove the print statements and extend the code for counting events. The first part of the following code is identical to the code used in the SequenceDetector to extract the values occurring in the paths of the target_path_list as tuples, which define our log events. At the end of the code, we see a simple counter where the value in a dictionary is increased for the corresponding log event key. For this, create the variable self.counts = {} in the __init__ method.
def receive_atom(self, log_atom):
    values = []
    all_values_none = True
    for path in self.target_path_list:
        match = log_atom.parser_match.get_match_dictionary().get(path)
        if match is None:
            continue
        if isinstance(match.match_object, bytes):
            value = match.match_object.decode()
        else:
            value = str(match.match_object)
        if value is not None:
            all_values_none = False
        values.append(value)
    if all_values_none is True:
        return
    log_event = tuple(values)
    if self.next_check_time is None:
        # First processed log atom, initialize next check time.
        self.next_check_time = log_atom.atom_time + self.window_size
    elif log_atom.atom_time >= self.next_check_time:
        # Log atom exceeded next check time; time window is complete.
        self.next_check_time = self.next_check_time + self.window_size
        if log_atom.atom_time >= self.next_check_time:
            self.next_check_time = log_atom.atom_time + self.window_size
    if log_event in self.counts:
        self.counts[log_event] += 1
    else:
        self.counts[log_event] = 1
    print(self.counts)
Running the aminer with this code yields the following output:
{('192.168.10.190',): 1}
{('192.168.10.190',): 2}
{('192.168.10.190',): 3}
{('192.168.10.190',): 4}
{('192.168.10.190',): 5}
{('192.168.10.190',): 6}
{('192.168.10.190',): 7}
{('192.168.10.190',): 7, ('192.168.10.4',): 1}
{('192.168.10.190',): 8, ('192.168.10.4',): 1}
{('192.168.10.190',): 9, ('192.168.10.4',): 1}
{('192.168.10.190',): 10, ('192.168.10.4',): 1}
{('192.168.10.190',): 10, ('192.168.10.4',): 2}
{('192.168.10.190',): 11, ('192.168.10.4',): 2}
{('192.168.10.190',): 12, ('192.168.10.4',): 2}
Each log event increases the respective counter. Cool! Now all we have to do is reset this counter after each completed time window. Also, we have to save the counts of the previous interval to be able to compare them with the current results. For this, create the variable self.counts_prev = {} in the __init__ method. The code that needs to be changed concerns the following section:
def receive_atom(self, log_atom):
    ...
    if self.next_check_time is None:
        # First processed log atom, initialize next check time.
        self.next_check_time = log_atom.atom_time + self.window_size
    elif log_atom.atom_time >= self.next_check_time:
        # Log atom exceeded next check time; time window is complete.
        self.next_check_time = self.next_check_time + self.window_size
        if log_atom.atom_time >= self.next_check_time:
            print('Skipped at least one time window.')
            self.next_check_time = log_atom.atom_time + self.window_size
        for log_ev in self.counts_prev:
            # Get log event frequency in previous time window
            occurrences = 0
            if log_ev in self.counts:
                occurrences = self.counts[log_ev]
            # Compare log event frequency of previous and current time window
            if occurrences < self.counts_prev[log_ev] * self.confidence_factor or occurrences > self.counts_prev[log_ev] / self.confidence_factor:
                print(str(log_ev) + ' occurred ' + str(occurrences) + ' instead of ' + str(self.counts_prev[log_ev]) + ' times in the last ' + str(self.window_size) + ' seconds!')
        if self.learn_mode is True:
            self.counts_prev = self.counts
        self.counts = {}
    ...
The code now goes over all log event counts of the previous time interval and compares them with the current count dictionary. If one of the current counts lies below or above the previous count of the same log event scaled by the confidence_factor, we print a message stating the exact number of occurrences. In other words, the range in which the measured frequency of a log event must lie in order to be considered non-anomalous is specified as [previous_frequency * confidence_factor, previous_frequency / confidence_factor]. We used 0.5 as a default value for confidence_factor, meaning that a ground truth frequency of 3 would result in a range of [1.5, 6] for expected frequency measurements. Higher values for confidence_factor narrow down this range and make it easier to detect anomalies, but also increase the number of false positives, and vice versa.
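The following small helper reproduces this range check in isolation (the function is made up for demonstration purposes and not part of the detector, which performs the same comparison inline):
# Illustrative helper mirroring the detector's range check (demonstration only).
def is_anomalous(occurrences, previous, confidence_factor):
    return occurrences < previous * confidence_factor or occurrences > previous / confidence_factor

print(is_anomalous(1, 3, 0.5))  # True: 1 lies below 3 * 0.5 = 1.5
print(is_anomalous(4, 3, 0.5))  # False: 4 lies within [1.5, 6.0]
print(is_anomalous(7, 3, 0.5))  # True: 7 exceeds 3 / 0.5 = 6.0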
We also print a message when one or more time windows were skipped, since a count of 0 is never within the allowed range. Frequent skipped time window messages indicate that the window_size is too short and should be adjusted by the user, since no stable behavior can be learned. Finally, if we want to continuously adjust the count dictionary that is used for comparison, i.e., self.learn_mode is set to True, we replace self.counts_prev with the current counts. In any case, we reset the count dictionary for the next iteration. Running the aminer with this code yields the following output.
{('192.168.10.190',): 1}
{('192.168.10.190',): 2}
{('192.168.10.190',): 3}
{('192.168.10.190',): 1}
{('192.168.10.190',): 2}
{('192.168.10.190',): 3}
{('192.168.10.190',): 1}
{('192.168.10.190',): 1, ('192.168.10.4',): 1}
('192.168.10.190',) occurred 1 instead of 3 times in the last 60 seconds!
{('192.168.10.190',): 1}
{('192.168.10.190',): 2}
{('192.168.10.190',): 3}
{('192.168.10.190',): 3, ('192.168.10.4',): 1}
{('192.168.10.190',): 4, ('192.168.10.4',): 1}
Skipped at least one time window.
('192.168.10.190',) occurred 4 instead of 1 times in the last 60 seconds!
{('192.168.10.190',): 1}
As expected, the self.counts dictionary is reset after each completed time window. First, the number of events from host 192.168.10.190 drops from 3 to 1 per minute between 15:02:00 and 15:03:00, causing an anomaly. Another anomaly occurs when no events appear in the period 15:04:00 - 15:05:00. Only when the last event occurs is the period 15:03:00 - 15:04:00 evaluated, and another anomaly informs the user that host 192.168.10.190 produced 4 events instead of 1, which was the last known event count from the period 15:02:00 to 15:03:00.
Note that this code assumes that log lines are processed chronologically, i.e., it is not possible that a log event is received with a timestamp that lies before one of the previously received log atoms. Accordingly, this detector only has limited applicability for log files where timestamp order is not guaranteed. One possibility to resolve this issue is to keep a backlog of log lines stored in the detector and only process them with a certain time delay, as shown in the sketch below. For simplicity, we do not consider this case further in this wiki and assume that log lines arrive chronologically.
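Such a backlog could look roughly like the following sketch, which buffers log atoms in a min-heap and only releases those that are older than a configurable maximum delay. All names (DelayBuffer, max_delay) are hypothetical and not part of the aminer code base.
import heapq
import itertools

class DelayBuffer:
    """Hypothetical buffer that releases log atoms in timestamp order after a delay."""

    def __init__(self, max_delay=60):
        self.max_delay = max_delay
        self.backlog = []  # min-heap of (atom_time, sequence_number, log_atom)
        self.counter = itertools.count()  # tie-breaker for atoms with equal timestamps

    def add(self, atom_time, log_atom):
        heapq.heappush(self.backlog, (atom_time, next(self.counter), log_atom))

    def release(self, current_time):
        # Yield all buffered atoms older than max_delay, oldest first.
        while self.backlog and self.backlog[0][0] <= current_time - self.max_delay:
            yield heapq.heappop(self.backlog)[2]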
Another downside of the current implementation is that anomalies are only produced when a log event is received that exceeds the end of the current time interval. This mainly causes a small delay in detection, but it also means that no anomalies are produced if no log events are received anymore at some point, e.g., because the logging infrastructure was stopped as part of an attack. We could resolve this issue by doing regular checks in the do_timer method (sketched below); however, for the purpose of this wiki we assume that there are other detectors in place that are capable of detecting such severe intrusions.
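For completeness, such a real-time check could be sketched in the do_timer method roughly as follows, assuming a hypothetical evaluate_window helper that wraps the comparison logic from receive_atom. This is only an outline under the assumption of live operation, where log time and real time roughly coincide, and not part of the detector developed in this wiki.
def do_timer(self, trigger_time):
    """Sketch: close time windows based on real time, not only on incoming log atoms."""
    if self.next_check_time is not None and trigger_time >= self.next_check_time:
        # No log atom arrived to close the current window; evaluate and advance it here.
        self.evaluate_window()  # hypothetical helper wrapping the frequency comparison
        self.next_check_time += self.window_size
    return 10  # request the next trigger in 10 seconds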
We already know from the SequenceDetector that proper aminer output is generated by passing event data to the anomaly handlers. The following code passes anomaly event information in both cases: when frequency deviations are too strong and when time windows are skipped. Note that the two cases provide different amounts of information: while we only have the affected paths available when time windows are skipped, we can provide the expected and actual frequencies in the json object when frequency anomalies are detected. In addition, we compute a confidence score that informs users about the severity of the deviation, i.e., stronger deviations yield a larger confidence score.
def receive_atom(self, log_atom):
    ...
    if self.next_check_time is None:
        # First processed log atom, initialize next check time.
        self.next_check_time = log_atom.atom_time + self.window_size
    elif log_atom.atom_time >= self.next_check_time:
        # Log atom exceeded next check time; time window is complete.
        self.next_check_time = self.next_check_time + self.window_size
        if log_atom.atom_time >= self.next_check_time:
            self.next_check_time = log_atom.atom_time + self.window_size
            analysis_component = {'AffectedLogAtomPaths': self.target_path_list}
            event_data = {'AnalysisComponent': analysis_component}
            for listener in self.anomaly_event_handlers:
                listener.receive_event('Analysis.%s' % self.__class__.__name__, 'No log events received in time window',
                                       [''], event_data, log_atom, self)
        for log_ev in self.counts_prev:
            # Get log event frequency in previous time window
            occurrences = 0
            if log_ev in self.counts:
                occurrences = self.counts[log_ev]
            # Compare log event frequency of previous and current time window
            if occurrences < self.counts_prev[log_ev] * self.confidence_factor or occurrences > self.counts_prev[log_ev] / self.confidence_factor:
                if self.output_logline:
                    sorted_log_lines = [log_atom.parser_match.match_element.annotate_match('')]
                else:
                    sorted_log_lines = [log_atom.raw_data]
                confidence = 1 - min(occurrences, self.counts_prev[log_ev]) / max(occurrences, self.counts_prev[log_ev])
                analysis_component = {'AffectedLogAtomPaths': self.target_path_list, 'AffectedLogAtomValues': list(log_ev)}
                frequency_info = {'ExpectedLogAtomValuesFrequency': self.counts_prev[log_ev], 'LogAtomValuesFrequency': occurrences,
                                  'ConfidenceFactor': self.confidence_factor,
                                  'Confidence': confidence}
                event_data = {'AnalysisComponent': analysis_component, 'FrequencyData': frequency_info}
                for listener in self.anomaly_event_handlers:
                    listener.receive_event('Analysis.%s' % self.__class__.__name__, 'Frequency anomaly detected', sorted_log_lines, event_data, log_atom, self)
        if self.learn_mode is True:
            self.counts_prev = self.counts
        self.counts = {}
    ...
Running the aminer yields proper json-formatted anomalies. The following example shows one of them, where the frequency of a particular log event increased from 1 to 4, corresponding to the example discussed above.
{
  "AnalysisComponent": {
    "AnalysisComponentIdentifier": 2,
    "AnalysisComponentType": "FrequencyDetector",
    "AnalysisComponentName": "FrequencyDetector2",
    "Message": "Frequency anomaly detected",
    "PersistenceFileName": "Default",
    "TrainingMode": true,
    "AffectedLogAtomPaths": [
      "/accesslog/host"
    ],
    "AffectedLogAtomValues": [
      "192.168.10.190"
    ],
    "LogResource": "file:///tmp/access.log"
  },
  "FrequencyData": {
    "ExpectedLogAtomValuesFrequency": 1,
    "LogAtomValuesFrequency": 4,
    "ConfidenceFactor": 0.5,
    "Confidence": 0.75
  },
  "LogData": {
    "RawLogData": [
      "192.168.10.190 - - [29/Feb/2020:15:05:17 +0000] \"GET /nag/ HTTP/1.1\" 200 25623 \"-\" \"-\""
    ],
    "Timestamps": [
      1582988717
    ],
    "DetectionTimestamp": 1668207249.71,
    "LogLinesCount": 1,
    "AnnotatedMatchElement": {
      "/accesslog": "192.168.10.190 - - [29/Feb/2020:15:05:17 +0000] \"GET /nag/ HTTP/1.1\" 200 25623 \"-\" \"-\"",
      "/accesslog/host": "192.168.10.190",
      "/accesslog/sp0": " ",
      "/accesslog/ident": "-",
      "/accesslog/sp1": " ",
      "/accesslog/user": "-",
      "/accesslog/sp2": " ",
      "/accesslog/time": "1582988717",
      "/accesslog/sp3": "] \"",
      "/accesslog/fm/request": "GET /nag/ HTTP/1.1",
      "/accesslog/sp6": "\" ",
      "/accesslog/status": "200",
      "/accesslog/sp7": " ",
      "/accesslog/size": "25623",
      "/accesslog/combined": " \"-\" \"-\"",
      "/accesslog/combined/combined": " \"-\" \"-\"",
      "/accesslog/combined/combined/sp9": " \"",
      "/accesslog/combined/combined/referer": "-",
      "/accesslog/combined/combined/sp10": "\" \"",
      "/accesslog/combined/combined/user_agent": "-",
      "/accesslog/combined/combined/sp11": "\"",
      "/accesslog/fm/request/method": "0",
      "/accesslog/fm/request/sp5": " ",
      "/accesslog/fm/request/request": "/nag/",
      "/accesslog/fm/request/sp6": " ",
      "/accesslog/fm/request/version": "HTTP/1.1"
    }
  }
}
Just like the SequenceDetector, the FrequencyDetector is designed to work continuously, i.e., it will learn the frequencies of log events on the fly and use them for detection. However, for stable systems it would be convenient to set a ground truth that is known to represent the expected number of events, to avoid that incorrect frequencies are learned and used for comparison. We already prepared this using self.learn_mode; however, we still have to implement the code that stores the ground truth to disk and loads it again. We first extend the do_persist method as follows:
def do_persist(self):
    """Immediately write persistence data to storage."""
    persist_data = []
    for log_ev, freq in self.counts_prev.items():
        persist_data.append((log_ev, freq))
    PersistenceUtil.store_json(self.persistence_file_name, persist_data)
    self.next_persist_time = None
This code iterates through the log events and their respective frequencies and converts them into lists rather than dictionaries, which is required for storing them in the persistence file. We use the self.counts_prev dictionary to ensure that we use the counts of a complete time window, since self.counts is constantly being updated and it is not guaranteed that the aminer calls the do_persist method exactly at the end of a time window. Running the aminer with this code stores the last known frequencies to disk. You can review the persistence file as follows:
root@user-1:/home/ubuntu/aminer# sudo cat /var/lib/aminer/FrequencyDetector/Default
[[["string:192.168.10.190"], 4], [["string:192.168.10.4"], 1]]
As expected, we obtain the list of log events defined by the target_path_list together with their frequencies. Now we only have to load them from disk in the __init__ method as follows.
persistence_data = PersistenceUtil.load_json(self.persistence_file_name)
if persistence_data is not None:
    for entry in persistence_data:
        log_event = entry[0]
        frequency = entry[1]
        self.counts_prev[tuple(log_event)] = frequency
Try to run the aminer again and check the output. Since the persisted data defines that one event of host 192.168.10.4 should occur per minute, an additional anomaly should be generated in the first minute, since no event of that host occurs there. Note that you must not use the -C flag when starting the AMiner if you want to load the data from the persistence, since -C clears all learned data before starting the AMiner!
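For example, after the first run has persisted the learned frequencies, restart the aminer in foreground mode without clearing the persistence:
root@user-1:/home/ubuntu/aminer# aminer -f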
A component called EventFrequencyDetector that largely implements the features described in this wiki is planned for release with aminer version 2.3.0. Check it out to compare your solution with our detector! If you made some improvements or came up with your own detector that is useful for a common type of log data, the aminer team would be happy to hear from you - just contact us at aecid@ait.ac.at.