Network Anomaly Detection is the act of detecting problems in the network. Accurately detecting problems is very challenging for network operators in production networks. Good results require significant expertise and knowledge of both the involved network technologies and the connectivity services provided to customers, in addition to a proper monitoring infrastructure. In order to facilitate network anomaly detection, novel techniques are being introduced, including programmatic, rule-based, and AI-based ones, with the promise of improving scalability while keeping high detection accuracy. To guarantee acceptable results, the process needs to be properly designed, adopting well-defined stages to accurately collect evidence of anomalies, validate their relevancy, and improve the detection systems over time, iteratively.¶
This document describes an approach to managing the lifecycle process of a network anomaly detection system, spanning the recording of its output and its iterative refinement, in order to make it easier for network engineers to interact with the network anomaly detection system, enable the "human-in-the-loop" paradigm, and refine the detection abilities over time. The major contributions of this document are: the definition of three key stages of the lifecycle process, the definition of a state machine for each anomaly annotation on the system, and the definition of YANG data models describing a comprehensive format for the anomaly labels, allowing a well-structured exchange of those labels between all the interested actors.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 7 May 2025.¶
Copyright (c) 2024 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
This note is to be removed before publishing as an RFC.¶
Discussion of this document takes place on the Network Management and Operations Area Working Group (NMOP) mailing list ([email protected]), which is archived at https://mailarchive.ietf.org/arch/browse/nmop/.¶
Source for this draft and an issue tracker can be found at https://github.com/network-analytics/draft-netana-nmop-network-anomaly-lifecycle.¶
This document is experimental. The main goal of this document is to propose an iterative lifecycle process for network anomaly detection, by proposing a data model for the metadata to be exchanged at the different lifecycle stages.¶
The experiment consists of verifying whether the approach is usable in real use case scenarios to support proper refinement and adjustment of network anomaly detection algorithms. The experiment can be deemed successful if the approach is validated with at least one open-source implementation successfully applied to real networks.¶
In [I-D.ietf-nmop-terminology] a network anomaly is defined as "an unusual or unexpected event or pattern in network data in the forwarding plane, control plane, or management plane that deviates from the normal, expected behavior".¶
A network problem is defined as "a state regarded as undesirable and may require remedial action" (see [I-D.ietf-nmop-terminology]).¶
The main objective of a network anomaly detection system is to identify Relevant States of the network (defined as states that have relevancy for network operators, according to [I-D.ietf-nmop-terminology]), as those are states that could lead to problems or might be clear indications of problems already happening.¶
It is still remarkably difficult to gain a full understanding and a complete perspective of "if" and "how" a relevant state is actually an indication of a problem, or whether it is just unexpected but has no impact on services and end users. Providers of solutions for network anomaly detection should aim at increasing accuracy by minimizing false positives and false negatives. Moreover, the behaviour of the network naturally changes over time, as more connectivity services are deployed, more customers are onboarded, and devices are upgraded or replaced; it is therefore almost impossible to identify anomaly detection techniques that can keep working accurately over time without changing the detection criteria (or methodologies) over time.¶
This leads to the necessity of further validating notified relevant states to check whether a detected symptom is actually impacting connectivity services: this might require different actors (both human and algorithmic) to act during the process and refine their understanding across the network anomaly lifecycle.¶
Finally, once validation has happened, this might lead to refinements of the logic used by the detection system, so that the process can improve detection accuracy over time.¶
Performing network anomaly detection is a process that requires continuous learning and continuous improvement. Relevant states are detected by aggregating and understanding Symptoms, then validated, confirming that the Symptoms actually impacted connectivity services, and eventually analyzed further by performing postmortem analysis to identify any potential adjustment that improves the detection capability. Each of these steps represents an opportunity to learn and refine the process, and since implementations of these steps might be provided by different parties and/or products, this document also contributes a formal data model to capture and exchange Symptom information across the lifecycle.¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.¶
This document makes use of the terms defined in [I-D.ietf-nmop-terminology].¶
The following terms are used as defined in [RFC9417].¶
The following terms are defined in this document.¶
Annotator: A human or an algorithm that produces metadata by describing anomalies with Symptoms.¶
False Positive: A detected anomaly that has been identified during the postmortem as not anomalous.¶
False Negative: An anomalous event that has not been identified by the anomaly detection system.¶
The above definitions of network problem provide the scope for what to look for when detecting network anomalies. Concepts like "desirable state" and "required state" are introduced. This draws attention to a significant problem that network operators have to face: the definition of what is to be considered "desirable" or "undesirable". It is not always easy to detect whether a network is operating in an undesired state at a given point in time. To approach this, network operators can rely on different methodologies, more or less deterministic and more or less sensitive: on the one side, the definition of intents (including Service Level Objectives and Service Level Agreements), which approaches the problem top-down; on the other side, the definition of Symptoms, by means of solutions like SAIN [RFC9417], [RFC9418] and [I-D.ietf-nmop-network-anomaly-architecture], which approaches the problem bottom-up. At the center of these approaches are the so-called Symptoms, explaining what is not working as expected in the network, sometimes also providing hints towards issues and their causes.¶
One of the more deterministic approaches is to rely on Symptoms based on measurable service-based KPIs, for example by using Service Level Indicators, Objectives and Agreements ([RFC9543]). This is the case when rules on SLOs and SLIs are manually defined once and then used afterwards for detection at runtime.¶
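As a minimal illustration of this deterministic approach, the following Python sketch evaluates an SLI measurement against a statically defined SLO and emits a Symptom-like record when the objective is breached. All names, fields and thresholds are hypothetical and do not belong to any model defined in this document.¶

   # Hypothetical sketch of a static SLO rule used for detection at
   # runtime: an SLI measurement is compared against a fixed objective.
   from dataclasses import dataclass
   from typing import Optional

   @dataclass
   class Slo:
       sli_name: str      # e.g. "one-way-delay-ms"
       objective: float   # maximum tolerated value for the SLI

   def check_slo(slo: Slo, sli_value: float) -> Optional[dict]:
       """Return a Symptom-like record when the SLO is breached."""
       if sli_value <= slo.objective:
           return None  # desired state, nothing to report
       return {
           "symptom": "SLO breach on " + slo.sli_name,
           "measured": sli_value,
           "objective": slo.objective,
       }

   delay_slo = Slo("one-way-delay-ms", 50.0)
   print(check_slo(delay_slo, 72.3))  # breached -> Symptom record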
However, defining SLOs in a "static way" can bring some challenges as well, related to the dynamic nature of networks and services.¶
Alternative methodologies rely on a more "relaxed" approach to detect symptoms and their impact on services, as a way to generate analytical data out of operational data. For instance:¶
In general, defining boundaries between desirable vs. undesirable in an accurate fashion requires continuous iterations and improvements coming from all the stages of the network anomaly detection lifecycle, by which network engineers can transfer what they learn through the process into new Symptom definitions and, ultimately, into refinements of the detection algorithms.¶
The lifecycle of a network anomaly can be articulated in three phases, structured as a loop: Detection, Validation, Refinement.¶
Each of these phases can be performed by a network expert, by an algorithm, or by the two complementing each other.¶
The network anomaly metadata is generated by an annotator, which can be either a human expert or an algorithm. The annotator can produce metadata for a network anomaly at each stage of the cycle, and even multiple versions for the same stage. In each version of the network anomaly metadata, the annotator indicates the list of Symptoms that are part of the network anomaly under consideration. The iterative process is about the identification of the right set of Symptoms, as illustrated by the sketch below.¶
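The following sketch shows this versioning, assuming a simple in-memory store and illustrative field names (this is not the YANG model defined later in this document): each annotation of the same anomaly is appended as a new version carrying the Symptom set considered at that point.¶

   # Hypothetical sketch: each revision of a network anomaly's metadata
   # is kept as a new version rather than overwriting the previous one.
   from datetime import datetime, timezone

   anomaly_versions = []  # in practice this would live in a Label Store

   def annotate(annotator: str, stage: str, symptoms: list) -> dict:
       """Append a new version of the metadata for the same anomaly."""
       version = {
           "version": len(anomaly_versions) + 1,
           "annotator": annotator,   # human expert or algorithm
           "stage": stage,           # detection / validation / refinement
           "symptoms": symptoms,     # the Symptom set considered so far
           "timestamp": datetime.now(timezone.utc).isoformat(),
       }
       anomaly_versions.append(version)
       return version

   annotate("detector-algorithm", "detection", ["packet-loss-spike"])
   annotate("network-expert", "validation",
            ["packet-loss-spike", "bgp-session-flap"])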
The Network Anomaly Detection stage is about the continuous monitoring of the network through Network Telemetry [RFC9232] and the identification of Symptoms. One of the main requirements that operators have on network anomaly detection systems is high accuracy. This means having a small number of false negatives (Symptoms causing connectivity service impact are not missed) and false positives (Symptoms that are actually innocuous are not picked up).¶
As the detection stage is becoming more and more automated for production networks, the identified Symptoms might point towards three potential kinds of behaviors:¶
i. those that surely correspond to an impact on connectivity services (e.g., the breach of an SLO),¶
ii. those that will cause problems in the future (e.g., a rising trend on a time series metric heading towards saturation),¶
iii. those for which the impact on connectivity services cannot be confirmed (e.g., a sudden increase/decrease of time series metrics, anomalous amounts of log entries, etc.).¶
The first category requires immediate intervention (a.k.a. the problem is "confirmed"), the second one provides pointers towards early signs of a problem potentially happening in the near future (a.k.a. the problem is "forecasted"), and the third one requires some analysis to confirm whether the detected Symptom requires any attention or immediate intervention (a.k.a. the problem is "potential"). As part of the iterative improvement required in this stage, a very relevant one is the gradual conversion of the third category into one of the first two, which would make the network anomaly detection system more deterministic. The main objective is to reduce uncertainty around the raised alarms by refining the detection algorithms. This can be achieved by generating new Symptom definitions, adjusting the weights of automated algorithms, or other similar approaches.¶
The key objective of the validation stage is to decide whether the detected Symptoms are signaling a real problem (a.k.a. requiring action) or whether they are to be treated as false positives (a.k.a. suppressing the alarm). For those Symptoms surely having an impact on connectivity services, 100% confidence that a network problem is happening can be assumed. For the other two categories, "forecasted" and "potential", further analysis and validation is required.¶
After validation of a problem, the service provider performs troubleshooting and resolution of the problem. Although the network might be back in a desired state at this point, network operators can perform detailed postmortem analysis of network problems with the objective of identifying useful adjustments to the prevention and detection mechanisms (for instance, improving or extending the definition of SLIs and SLOs, refining concern/impact scores, etc.) and improving the accuracy of the validation stage (e.g., automating parts of the validation, implementing automated root cause analysis and automation for remediation actions). In this stage of the lifecycle, it is assumed that the problem is under analysis.¶
After the adjustments are performed to the network anomaly detection methods, the cycle starts again by "replaying" the network anomaly and checking whether there is any measurable improvement in the ability to detect problems using the updated method.¶
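A minimal sketch of how such a measurable improvement could be assessed, assuming that validated labels retrieved from the Label Store serve as ground truth (the metrics and anomaly identifiers below are purely illustrative):¶

   # Hypothetical sketch: "replay" recorded telemetry through the
   # updated detector and compare its output against validated labels.
   def evaluate(detected: set, validated: set) -> dict:
       """Compare detector output against validated ground-truth labels."""
       true_pos = detected & validated
       false_pos = detected - validated   # innocuous Symptoms picked up
       false_neg = validated - detected   # impacting Symptoms missed
       precision = len(true_pos) / len(detected) if detected else 1.0
       recall = len(true_pos) / len(validated) if validated else 1.0
       return {"precision": precision, "recall": recall,
               "false-positives": len(false_pos),
               "false-negatives": len(false_neg)}

   before = evaluate(detected={"a1", "a2", "a5"}, validated={"a1", "a3"})
   after = evaluate(detected={"a1", "a3"}, validated={"a1", "a3"})
   assert after["recall"] >= before["recall"]  # measurable improvement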
The information that is produced at each stage needs to be persisted and retrieved in order to perform the network anomaly lifecycle. The lifecycle begins with the detector notifying anomalies to the "Alarm and Problem Management System" and to the "Post-mortem System" (see [I-D.ietf-nmop-network-anomaly-architecture]). In this case, the Post-mortem System is identified as the Label Store. Once the notification arrives at the Label Store, the anomaly label is persisted. In the following stages (i.e., validation and refinement), the information about the labels is retrieved, reviewed, modified and persisted again, generating each time a new version of the same annotation, or tagging the annotation as irrelevant if it needs to be removed.¶
In the following sections, the following are defined:¶
* a state machine for a label,¶
* a YANG data model for the notification sent by the Detector to the Label Store,¶
* a YANG data model to define the interrogation (and retrieval) of the labels from the Label Store.¶
In the context of this document, from a network anomaly detection point of view, a network problem is defined as a collection of interrelated Symptoms, as specified in [I-D.netana-nmop-network-anomaly-semantics].¶
The understanding of a network problem can change over time. Moreover, multiple actors are involved in the process of refining this understanding in the different phases.¶
From this perspective, a problem can be refined according to the following states (Figure 2).¶
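As an illustrative sketch of such a state machine, using the state names mentioned in this document ("forecasted", "potential", "confirmed", a "discarded" state for suppressed false positives, and an "under-analysis" state for the postmortem); the normative set of states and transitions is the one given in Figure 2:¶

   # Illustrative state machine for a problem; the states and
   # transitions below are assumptions, Figure 2 is normative.
   TRANSITIONS = {
       "forecasted":     {"potential", "confirmed", "discarded"},
       "potential":      {"confirmed", "discarded"},
       "confirmed":      {"under-analysis"},  # validated, needs action
       "discarded":      set(),               # false positive, suppressed
       "under-analysis": set(),               # postmortem in progress
   }

   def move(state: str, new_state: str) -> str:
       """Return the new state if the transition is allowed."""
       if new_state not in TRANSITIONS[state]:
           raise ValueError(f"illegal transition {state} -> {new_state}")
       return new_state

   state = "potential"
   state = move(state, "confirmed")        # validation confirms a problem
   state = move(state, "under-analysis")   # postmortem analysis begins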
The knowledge gained at each stage is codified as a list of anomaly labels that can be stored in a Label Store (see Section 10.1 for a reference).¶
The data model provides support for "human-in-the-loop", allowing for network experts to validate and adjust network anomaly labels and detection systems. An example of human-in-the-loop has been demonstrated with Antagonist [Antagonist], by building a User Interface that interacts with an API based on this data model.¶
The base for the modules is the relevant-state data model. Relevant state is at the root of the data model, with its parameters (ID, description, start-time, end-time) and a collection of anomalies. This allows the relevant state to be considered as a container of anomalies.¶
Each anomaly is characterized by some intrinsic fields (such as id, version, state, description, start-time, end-time, confidence score and pattern). In particular, the confidence score is a measure of how confident the detector was in considering the given anomaly as an anomalous behaviour.¶
Each anomaly also includes the symptom and the service containers. These containers are placeholders representing the information about the symptom (what exactly is happening as anomalous behaviour) and the connectivity service (what entity is affected by the anomaly). In particular, for what concerns the symptom, a concern score is defined as a necessary field, expressing how much the anomaly is impacting connectivity services. In case additional information related to the symptom and to the service needs to be provided, augmentation is the intended mechanism to do so. An example of this is provided in [I-D.netana-nmop-network-anomaly-semantics], where an augmentation of both symptom and service is provided for the specific case of anomaly labels related to connectivity services.¶
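To illustrate the shape of such a label, the following sketch shows a hypothetical instance of a relevant state carrying one anomaly, mirroring the fields described above (names and values are indicative only; the normative structure is defined by the YANG modules):¶

   # Hypothetical instance mirroring the fields described above; the
   # normative structure is given by the YANG modules in this document.
   relevant_state = {
       "id": "rs-0001",
       "description": "Anomalous packet loss",
       "start-time": "2024-11-04T10:00:00Z",
       "end-time": "2024-11-04T10:20:00Z",
       "anomalies": [{
           "id": "an-0001",
           "version": 2,
           "state": "confirmed",
           "description": "Loss spike between PE1 and PE2",
           "start-time": "2024-11-04T10:02:00Z",
           "end-time": "2024-11-04T10:15:00Z",
           "confidence-score": 0.87,  # detector's confidence
           "pattern": "spike",
           "symptom": {"concern-score": 0.9},    # impact on services
           "service": {"name": "example-l3vpn"}  # affected entity
       }],
   }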
A list of the various actors that can be involved in the process is presented as follows:¶
The data model that has been defined is used in two YANG modules: relevant-state-notification and ietf-relevant-state. The notification is primarily used by the Network Anomaly Detector to notify the "Alarm and Problem Management System" and the "Post-mortem System" (see [I-D.ietf-nmop-network-anomaly-architecture]); the container, instead, is used inside the Post-mortem System to exchange anomaly detection labels between the anomaly detection stages defined above (detection, validation, refinement).¶
This section provides pointers to existing open source implementations of this draft. Note to the RFC-editor: Please remove this before publishing.¶
An open-source implementation of this draft is called AnTagOnIst (Anomaly Tagging On hIstorical data), and it has been implemented in order to validate the application of the YANG model defined in this draft. AnTagOnIst provides visual support for two important use cases in the scope of this document:¶
The open-source code can be found here: [Antagonist]¶
As part of the experiment that was conducted with AnTagOnIst, the following main use case scenarios have been validated so far:¶
The security considerations will have to be updated according to "https://wiki.ietf.org/group/ops/yang-security-guidelines".¶
The authors would like to thank xxx for their review and valuable comments.¶