Internet-Draft | Network Slice Service YANG Model | August 2024 |
Wu, et al. | Expires 1 March 2025
This document defines a YANG data model for the RFC 9543 Network Slice Service. The model can be used in the Network Slice Service interface between a customer and a provider that offers RFC 9543 Network Slice Services.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 1 March 2025.¶
Copyright (c) 2024 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
[RFC9543] describes a framework and an interface for Network Slices built using IETF technologies. These Network Slice Services may be referred to as RFC 9543 Network Slice Services; in this document, we simply use the term "Network Slice Service" to refer to this concept.¶
This document defines a YANG [RFC7950] data model for the [RFC9543] Network Slice Service. The Network Slice Service Model (NSSM) can be used in the Network Slice Service Interface exposed by a provider to its customers (including for the provider's internal use) in order to manage (e.g., subscribe to, delete, or change) Network Slice Services. The agreed service will then trigger the appropriate Network Slice operation, such as instantiating, modifying, or deleting a Network Slice.¶
The NSSM focuses on the requirements of a Network Slice Service from the point of view of the customer, not how it is implemented within a provider network. As discussed in [RFC9543], the mapping between a Network Slice Service and its realization is implementation and deployment specific.¶
The NSSM is classified as a customer service model (Section 2 of [RFC8309]).¶
The NSSM conforms to the Network Management Datastore Architecture (NMDA) [RFC8342].¶
Editorial Note: (To be removed by RFC Editor)¶
This document contains several placeholder values that need to be replaced with finalized values at the time of publication. Please apply the following replacements:¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.¶
The following terms are defined in [RFC6241] and are used in this specification:¶
This document makes use of the terms defined in [RFC7950].¶
The tree diagrams used in this document follow the notation defined in [RFC8340].¶
This document also makes use of the terms defined in [RFC9543]:¶
Connectivity Construct: See Sections 3.2 and 4.2.1 of [RFC9543].¶
Customer Higher-level Operation System: See Section 6.3.1 of [RFC9543].¶
Service Demarcation Point (SDP): See Sections 3.2 and 5.2 of [RFC9543].¶
In addition, this document defines the following term:¶
Connection Group: Refers to one or more connectivity constructs that are grouped for administrative purposes, such as the following:¶
Combine multiple connectivity constructs to support a set of well-known connectivity service types, such as bidirectional unicast service, multipoint-to-point (MP2P) service, or hub-and-spoke service.¶
Assign the same Service Level Objectives (SLOs)/Service Level Expectations (SLEs) policies to multiple connectivity constructs unless the SLOs/SLEs policy is explicitly overridden at the individual connectivity construct level.¶
Share specific SLO limits across multiple connectivity constructs.¶
The following acronyms are used in the document:¶
As defined in Section 3.2 of [RFC9543], a Network Slice Service is specified in terms of a set of Service Demarcation Points (SDPs), a set of one or more connectivity constructs between subsets of these SDPs, and a set of Service Level Objectives (SLOs) and Service Level Expectations (SLEs) for each SDP sending to each connectivity construct. A communication type (point-to-point (P2P), point-to-multipoint (P2MP), or any-to-any (A2A)) is specified for each connectivity construct.¶
The SDPs serve as the Network Slice Service ingress/egress points. An SDP is identified by a unique identifier in the context of a Network Slice Service.¶
Examples of Network Slice Services that contain only one connectivity construct are shown in Figure 1.¶
An example of Network Slice Services that contains multiple connectivity constructs is shown in Figure 2.¶
As shown in Figure 2, Network Slice Service 4 contains two P2P connectivity constructs between the set of SDPs. Network Slice Service 5 is a bidirectional unicast service between SDP14 and SDP15 that consists of two unidirectional P2P connectivity constructs.¶
The NSSM can be used by a provider to expose its Network Slice Services and by a customer to manage its Network Slice Services (e.g., request, delete, or modify). The details of how service requests are handled by a provider (specifically, a controller), including which network operations are triggered, are internal to the provider. The details of Network Slice realization are hidden from customers.¶
The Network Slices are applicable to use cases, such as (but not limited to) 5G, network wholesale services, network infrastructure sharing among operators, Network Function Virtualization (NFV) connectivity, and Data Center interconnect. [I-D.ietf-teas-ietf-network-slice-use-cases] provides a more detailed description of the use cases for Network Slices.¶
A Network Slice Controller (NSC) is an entity that exposes the Network Slice Service Interface to customers to manage Network Slice Services. Typically, an NSC receives requests on its customer-facing interface (e.g., from a management system). During service creation, this interface can convey data objects that the Network Slice Service customer provides, describing the needed Network Slice Service in terms of SDPs, the associated connectivity constructs, and the service objectives that the customer wishes to be fulfilled. Provided that the requirements and authorization checks are successfully met, these service requirements are then translated into technology-specific actions that are implemented in the underlying network(s) using a network-facing interface. The details of how the Network Slices are put into effect are out of scope for this document.¶
As shown in Figure 3, the NSSM is used by the Customer Higher-level Operation System to communicate with an NSC for the lifecycle management of Network Slice Services, including both enablement and monitoring. For example, in the 5G end-to-end network slicing use case, the 5G network slice orchestrator acts as the higher-level system that manages the Network Slice Services. The interface is used to support dynamic Network Slice management to facilitate end-to-end 5G network slice services.¶
Note: The NSSM can be used recursively (hierarchical mode), i.e., an NSS can map to child NSSes. As described in Section A.5 of [RFC9543], the Network Slice Service can support a recursive composite architecture that allows one layer of Network Slice Services to be used by other layers.¶
The NSSM, "ietf-network-slice-service", includes two main data nodes: "slo-sle-templates" and "slice-service" ( Figure 4).¶
The "slo-sle-templates" container is used by an NSC to maintain a set of common Network Slice SLO and SLE templates that apply to one or several Network Slice Services. Refer to Section 5.1 for further details on the properties of NSS templates.¶
The "slice-service" list includes the set of Network Slice Services that are maintained by a provider for a given customer. "slice-service" is the data structure that abstracts the Network Slice Service. Under the "slice-service", the "sdp" list is used to abstract the SDPs. The "connection-group" is used to abstract connectivity constructs between SDPs. Refer to Section 5.2 for further details on the properties of an NSS.¶
The "slo-sle-templates" container (Figure 5) is used by a Network Slice Service provider to define and maintain a set of common Network Slice Service templates that apply to one or several Network Slice Services. The templates are assumed to be known to both the customers and the provider. The exact definition of the templates is deployment specific.¶
The NSSM provides the SLO and SLE template identifiers and the templates themselves; the common attributes of the templates are defined in Section 5.1 of [RFC9543]. Both standard templates provided by the provider and a custom "service-slo-sle-policy" are defined, because many attributes are defined and some of them (e.g., bandwidth or latency) may vary with service requirements. A customer may choose either a standard template provided by the provider or a customized "service-slo-sle-policy".¶
Standard template: The exact definition of the templates is deployment specific. Configuring attributes in a standard template is optional. When attributes are specified, a standard template can use "template-ref" to inherit attributes from a predefined standard template and override specific ones, as shown in the sketch after this list.¶
Custom "service-slo-sle-policy": More description is provided in Section 5.2.3.¶
Figure 6 shows an example where two standard network slice templates are retrieved by the customers.¶
Figure 6 uses folding as defined in [RFC8792] for long lines.¶
The "slice-service" is the data structure that abstracts a Network Slice Service. Each "slice-service" is uniquely identified within an NSC by "id".¶
A Network Slice Service has the following main data nodes; an illustrative sketch follows the list:¶
"description": Provides a textual description of a Network Slice Service.¶
"service-tags": Indicates a management tag (e.g., "customer name" ) that is used to correlate the operational information of Customer Higher-level Operation System and Network Slices. It might be used by a Network Slice Service provider to provide additional information to an NSC during the operation of the Network Slices. For example, adding tags with "customer name" when multiple actual customers use the same Network Slice Service. Another use case for "service-tag" might be for a provider to provide additional attributes to an NSC which might be used during the realization of Network Slice Services such as type of services (e.g., use Layer 2 or Layer 3 technology for the realization). These additional attributes can also be used by an NSC for various purposes such as monitoring and assurance of the Network Slice Services where the NSC can issue notifications to the customer system. All these attributes are optional.¶
"slo-sle-policy": Defines SLO and SLE policies for the "slice-service". More details are provided in Section 5.2.3.¶
"compute-only": Is used to check the feasibility of the service before instantiating a Network Slice Service in a network. More details are provided in Section 5.2.6.¶
"status": Indicates both the operational and administrative status of a Network Slice Service. Mismatches between the admin/oper status can be used as an indicator to detect Network Slice Service anomalies.¶
"sdps": Represents a set of SDPs that are involved in the Network Slice Service. More details are provided in Section 5.2.1.¶
"connection-groups": Abstracts the connections to the set of SDPs of the Network Slice Service.¶
"custom-topology": Represents custom topology constraints for the Network Slice Service. More details are provided in Section 5.2.5¶
A Network Slice Service involves two or more SDPs. A Network Slice Service can be modified by adding new "sdp" entries.¶
Section 5.2 of [RFC9543] describes four possible ways in which an SDP may be placed:¶
Although there are four options, they fall into two categories: CE-based and PE-based.¶
In the four options, the Attachment Circuit (AC) may be part of the Network Slice Service or may be external to it. Based on the AC definition in Section 5.2 of [RFC9543], the customer and provider may agree on a per {Network Slice Service, connectivity construct, and SLOs/SLEs} basis to police or shape traffic on the AC in both the ingress (CE to PE) direction and egress (PE to CE) direction, which ensures that the traffic is within the capacity profile that is agreed in a Network Slice Service. Excess traffic is dropped by default, unless specific out-of-profile policies are agreed between the customer and the provider.¶
To abstract the SDP options and the SLOs/SLEs profiles, an SDP has the following characteristics; an illustrative "sdp" sketch follows the list:¶
"id": Uniquely identifies the SDP within an NSC. The identifier is a string that allows any encoding for the local administration of the Network Slice Service.¶
"geo-location": Indicates SDP location information, which helps the NSC to identify an SDP.¶
"node-id": A reference to the node that hosts the SDP, which helps the NSC to identify an SDP. This document assumes that higher-level systems can obtain the node information, PE and CE, prior to the service requests. For example, Service Attachment Points (SAPs) [RFC9408] can obtain PE-related node information. The implementation details are left to the NSC provider.¶
"sdp-ip-address": The SDP IP address, which helps the NSC to identify an SDP.¶
"tp-ref": A reference to a Termination Point (TP) in the custom topology defined in Section 5.2.5.¶
"service-match-criteria": Defines matching policies for the Network Slice Service traffic to apply on a given SDP.¶
"incoming-qos-policy" and "outgoing-qos-policy": Sets the incoming and outgoing QoS policies to apply on a given SDP, including QoS policy and specific ingress and egress traffic limits to ensure access security. When applied in the incoming direction, the policy is applicable to the traffic that passes through the AC from the customer network or from another provider's network to the Network Slice. When applied in the outgoing direction, the policy is applied to the traffic from the Network Slice towards the customer network or towards another provider's network. If an SDP has multiple ACs, the "rate-limits" of "attachment-circuit" can be set to an AC specific value, but the rate cannot exceed the "rate-limits" of the SDP. If an SDP only contains a single AC, then the "rate-limits" of "attachment-circuit" is the same with the SDP. The definition of AC refers to Section 5.2 [RFC9543].¶
"sdp-peering": Specifies the peers and peering protocols for an SDP to exchange control-plane information, e.g., Layer 1 signaling protocol or Layer 3 routing protocols, etc. As shown in Figure 8¶
"peer-sap-id": Indicates the references to the remote endpoints of attachment circuits. This information can be used for correlation purposes, such as identifying Service Attachment Points (SAPs) defined in [RFC9408], which defines a model of an abstract view of the provider network topology that contains the points from which the services can be attached.¶
"protocols": Serves as an augmentation target. Appendix A shows an example where BGP and static routing are augmented to the model.¶
"ac-svc-ref": Refers to the ACs that have been created, which is defined in Section 5.2 of [I-D.ietf-opsawg-teas-attachment-circuit]. When both "ac-svc-ref" and the attributes of "attachment-circuits" are defined, the "ac-svc-ref" may take precedence or act as the parent AC depending on the use cases.¶
"ce-mode": A flag node that marks the SDP as CE type.¶
"attachment-circuits": Specifies the list of ACs by which the service traffic is received. This is an optional SDP attribute. When an SDP has multiple ACs and some AC specific attributes are needed, each "attachment-circuit" can specify attributes, such as interface specific IP addresses, service MTU, etc.¶
"status": Enables the control of the administrative status and reporting of the operational status of the SDP. These status values can be used as indicators to detect SDP anomalies.¶
"sdp-monitoring": Provides SDP bandwidth statistics.¶
Depending on the requirements of different use cases, "service-match-criteria" can be used for the following purposes:¶
Specify the AC type: physical or logical connection.¶
Distinguish the SDP traffic if the SDP is located in the CE or PE.¶
Distinguish the traffic of different connection groups (CGs) or connectivity constructs (CCs) when multiple CGs/CCs with different SLOs/SLEs are set up between the same pair of SDPs, as illustrated in Figure 9. Traffic needs to be explicitly mapped into the Network Slice's specific connectivity construct. The "service-match-criteria" policies are based on the values of a combination of Layer 2 and Layer 3 header and payload fields within a packet, which identify the {Network Slice Service, connectivity construct, SLOs/SLEs} to which that packet is assigned, e.g., VLAN ([IEEE802.1Q]), C-VLAN/S-VLAN ([IEEE802.1ad]), or IP addresses.¶
Define specific out-of-profile policies: The customer may choose to use an explicit "service-match-criteria" to map all of an SDP's traffic or a subset of it to a specific connection group or connectivity construct. If a subset of traffic is matched (e.g., on "dscp" and/or IP addresses) and mapped to a connectivity construct, the customer may add a subsequent "match-any" criterion to explicitly map the remaining SDP traffic to a separate connectivity construct. If the customer instead leaves the remaining traffic implicitly mapped and there are no additional connectivity constructs that specify the SDP as a source, that traffic will be dropped.¶
If an SDP is placed at the port of a CE or PE and there is only one connectivity construct with a source at the SDP, traffic can be implicitly mapped to this connectivity construct, since the AC information (e.g., VLAN tag) can be used to unambiguously identify the traffic and the SDP is the only source of the connectivity construct. Appendix B.1 shows an example of both the implicit and explicit approaches. While explicit matching is optional in some use cases, it provides a clearer and more readable configuration; the choice is left to the operator.¶
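As a sketch of the explicit approach, an SDP could map VLAN 100 traffic to connection group "cg-1" and connectivity construct "cc-1" as follows; the identifiers and the match-type identity name are hypothetical:¶

   {
     "service-match-criteria": {
       "match-criterion": [
         {
           "index": 1,
           "match-type": [
             {
               "type": "example-ns:vlan-match",
               "value": ["100"]
             }
           ],
           "target-connection-group-id": "cg-1",
           "target-connectivity-construct-id": "cc-1"
         }
       ]
     }
   }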
Figure 10 and Figure 11 provide examples that illustrate the use of SDP options. How an NSC realizes the mapping is out of scope for this document.¶
SDPs at customer-facing ports on the PEs: As shown in Figure 10, a customer of the Network Slice Service would like to connect two SDPs to satisfy specific service needs, e.g., network wholesale services. In this case, the Network Slice SDPs are mapped to customer-facing ports of PE nodes. The NSC uses "node-id" (PE device ID), "attachment-circuits", or "ac-svc-ref" to map SDPs to the customer-facing ports on the PEs.¶
SDPs within CEs: As shown in Figure 11, a customer of the Network Slice Service would like to connect two SDPs to provide connectivity between the transport portion of the 5G RAN and the 5G Core network functions. In this scenario, the NSC uses "node-id" (CE device ID), "geo-location", "sdp-ip-address" (the IP address of the SDP for management), "service-match-criteria" (VLAN tag), and "attachment-circuits" or "ac-svc-ref" (CE ACs) to map SDPs to the CE. The NSC can use these CE parameters (and optionally other information that uniquely identifies a CE within an NSC, such as "peer-sap-id" [RFC9408]) to retrieve the corresponding PE device, interface, and AC mapping details to complete the Network Slice Service provisioning.¶
Section 4.2.1 of [RFC9543] defines the basic connectivity construct (CC) and CC types of a Network Slice Service, including P2P, P2MP, and A2A.¶
A Network Slice Service involves one or more connectivity constructs. The "connection-groups" container is used to abstract CCs, CC groups, and their SLO/SLE policies; the structure is shown in Figure 12.¶
The "connection-groups" data nodes are described as follows; an illustrative sketch follows the list:¶
"connection-group": Represents a group of CCs. In the case of hub and spoke connectivity of the Slice Service, it may be inefficient when there are a large number of SDPs with multiple CCs. As illustrated in Appendix B.3, "connectivity-type" of "ietf-vpn-common:hub-spoke" and "connection-group-sdp-role" of "ietf-vpn-common:hub-role" or "ietf-vpn-common:spoke-role" can be specified [RFC9181]. Another use is for optimizing "slo-sle-policy" configurations, treating CCs with the same SLO and SLE characteristics as a connection group such that the connectivity construct can inherit the SLO/SLE from the group if not explicitly defined.¶
"connectivity-type": Indicates the type of the connection group, extending "vpn-common:vpn-topology" specified [RFC9181] with the NS connectivity type, e.g., P2P and P2MP.¶
"connectivity-construct": Represents single connectivity construct, and "slo-sle-policy" under it represents the per-connectivity construct SLO and SLE requirements.¶
"slo-sle-policy" and "service-slo-sle-policy-override": The details of "slo-sle-policy" are defined in Section 5.2.3. In addition to "slo-sle-policy" nodes of "connection-group" and "connectivity-construct", a leaf node "service-slo-sle-policy-override" is provided for scenarios with complex SLO-SLE requirements to completely override all or part of a "slo-sle-policy" with new values. For example, if a particular "connection-group" or a "connectivity-construct" has a unique bandwidth or latency setting, that are different from those defined in the Slice Service, a new set of SLOs/SLEs with full or partial override can be applied. In the case of partial override, only the newly specified parameters are replaced from the original template, while maintaining on pre-existing parameters not specified. While a full override removes all pre-existing parameters, and in essence starts a new set of SLOs/SLEs which are specified.¶
As defined in Section 5 of [RFC9543], the SLO and SLE policy of the Network Slice Services define some common attributes.¶
"slo-sle-policy" is used to represent these SLO and SLE policies. During the creation of a Network Slice Service, the policy can be specified either by a standard SLO and SLE template or a customized SLO and SLE policy.¶
The policy can apply per Network Slice Service, per connection group ("connection-group"), or per connectivity construct ("connectivity-construct"). Since there are multiple mechanisms for assigning a policy to a single connectivity construct, the override precedence order among them is as follows:¶
Connectivity-construct at an individual sending SDP¶
Connectivity-construct¶
Connection-group¶
Slice-level¶
That is, the policy assigned through the sending SDP has the highest precedence, and the policy assigned at the slice level has the lowest precedence. For example, the policy assigned through the sending SDP takes precedence over the policy assigned through the "connectivity-construct" entry. Appendix B.5 gives an example of this precedence, showing a Slice Service with A2A connectivity as the default and several connections with specific SLOs.¶
The SLO attributes include performance metric attributes, availability, and MTU. The SLO structure is shown in Figure 13. Figure 26 shows an example "slice5" with a custom network slice "slo-policy".¶
The list "metric-bound" supports the generic performance metric variations and the combinations and each "metric-bound" could specify a particular "metric-type". "metric-type" is defined with YANG identity and supports the following options:¶
"availability": Specifies service availability defined as the ratio of uptime to the sum of uptime and downtime, where uptime is the time the Network Slice is available in accordance with the SLOs associated with it.¶
"mtu": Specifies the maximum length of Layer 2 data packets of the Slice Service, in bytes. If the customer sends packets that are longer than the requested service MTU, the network may discard them (or for IPv4, fragment them). This service MTU takes precedence over the MTUs of all ACs. The value needs to be smaller than or equal to the minimum MTU value of all ACs in the SDPs.¶
As shown in Figure 14, the following SLEs data nodes are defined.¶
The operation and performance status of Network Slice Services is also a key component of the NSSM. The model provides SLO monitoring information with the following granularity:¶
Per SDP: The incoming and outgoing bandwidths of an SDP are specified in "sdp-monitoring" under the "sdp".¶
Per connectivity construct: The delay, delay variation, and packet loss status are specified in "connectivity-construct-monitoring" under the "connectivity-construct".¶
Per connection group: The delay, delay variation, and packet loss status are specified in "connection-group-monitoring" under the "connection-group".¶
[RFC8639] and [RFC8641] define a subscription mechanism and a push mechanism for YANG datastores. These mechanisms currently allow the user to subscribe to notifications on a per-client basis and to specify either periodic or on-demand notifications. By specifying subtree filters or XPath filters on "sdp", "connectivity-construct", or "connection-group", only the contents of interest will be sent. The example in Figure 23 shows how a customer subscribes to the monitoring information for a particular Network Slice Service.¶
Additionally, a customer can use the NSSM to obtain a snapshot of the Network Slice Service performance status through the RESTCONF [RFC8040] or NETCONF [RFC6241] interfaces. For example, per-connectivity-construct data can be retrieved by specifying "connectivity-construct" as the filter in a RESTCONF GET request, as illustrated below.¶
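For instance, a RESTCONF GET on a hypothetical resource path such as ".../slice-service=slice3/connection-groups/connection-group=cg1/connectivity-construct=cc1/connectivity-construct-monitoring" could return a body along the following lines (values are illustrative; "gauge64" and "decimal64" values are encoded as JSON strings):¶

   {
     "ietf-network-slice-service:connectivity-construct-monitoring": {
       "one-way-min-delay": "4",
       "one-way-max-delay": "9",
       "one-way-delay-variation": "2",
       "one-way-packet-loss": "0.001"
     }
   }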
A Slice Service customer might request some level of control over the topology or resource constraints. "custom-topology" is defined as an augmentation target that references the context topology. The leaf "network-ref" under this container is used to reference a predefined topology as a customized topology constraint for a Network Slice Service. Section 1 of [RFC8345] defines a general abstract topology concept to accommodate both the provider's resource capability and the customer's preferences. An abstract topology is a topology that contains abstract topological elements (nodes, links, and termination points).¶
This document defines only the minimum attributes of a custom topology, which can be extended based on the implementation requirements.¶
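For example, a customer could constrain a Slice Service to a previously exposed abstract topology by referencing its network identifier; the identifier below is hypothetical:¶

   {
     "custom-topology": {
       "network-ref": "abstract-topology-1"
     }
   }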
The following nodes are defined for the custom topology:¶
A Network Slice Service customer may request to check the feasibility of a request before instantiating or modifying a Network Slice Service, e.g., network resources such as service access points for service delivery. In such a case, the Network Slice Service is configured in "compute-only" mode to distinguish it from the default behavior.¶
A "compute-only" Network Slice Service is configured as usual with the associated per slice SLOs/SLEs. The NSC computes the feasible connectivity constructs to the configured SLOs/SLEs. This computation does not create the Network Slice or reserve any resources in the provider's network, it simply computes the resulting Network Slice based on the request. The Network Slice "admin-status" and the connection groups or connectivity construct list are used to convey the result. For example, "admin-compute-only" can be used to show the status. Customers can query the "compute-only" connectivity constructs attributes, or can subscribe to be notified when the connectivity constructs status change.¶
The "compute-only" applies only if the data model is used with a protocol that does not natively support such operation, e.g., [RFC8040]. When using NETCONF, <edit-config> operation (Section 7.2 of [RFC6241]), "test-only" of the <test-option> parameter also applies.¶
The "ietf-network-slice-service" module uses types defined in [RFC6991], [RFC8345], [RFC8519], [RFC9179], [RFC9181], [I-D.ietf-opsawg-teas-attachment-circuit], [I-D.ietf-opsawg-teas-common-ac], and [I-D.ietf-teas-rfc8776-update].¶
The YANG module specified in this document defines a schema for data that is designed to be accessed via network management protocols such as NETCONF [RFC6241] or RESTCONF [RFC8040]. The lowest NETCONF layer is the secure transport layer, and the mandatory-to-implement secure transport is Secure Shell (SSH) [RFC6242]. The lowest RESTCONF layer is HTTPS, and the mandatory-to-implement secure transport is TLS [RFC8446].¶
The Network Configuration Access Control Model (NACM) [RFC8341] provides the means to restrict access for particular NETCONF or RESTCONF users to a preconfigured subset of all available NETCONF or RESTCONF protocol operations and content.¶
There are a number of data nodes defined in these YANG modules that are writable/creatable/deletable (i.e., config true, which is the default). These data nodes may be considered sensitive or vulnerable in some network environments. Write operations (e.g., edit-config) and delete operations to these data nodes without proper protection or authentication can have a negative effect on network operations. These are the subtrees and data nodes and their sensitivity/vulnerability in the "ietf-network-slice-service" module:¶
* /ietf-network-slice-service/network-slice-services/slo-sle-templates¶
This subtree specifies the Network Slice Service SLO and SLE templates. Modifying the configuration in this subtree will change the related Network Slice Service configurations in the future. Through such modifications, a malicious attacker may degrade Slice Service functions at some point in the future.¶
* /ietf-network-slice-service/network-slice-services/slice-service¶
The entries in this list include the whole network configuration corresponding to the Network Slice Services requested by the higher-level management system, and they indirectly create or modify PE or P device configurations. Unexpected changes to these entries could lead to service disruption and/or network misbehavior.¶
Some of the readable data nodes in these YANG modules may be considered sensitive or vulnerable in some network environments. It is thus important to control read access (e.g., via get, get-config, or notification) to these data nodes. These are the subtrees and data nodes and their sensitivity/vulnerability in the "ietf-network-slice-service" module:¶
* /ietf-network-slice-service/network-slice-services/slo-sle-templates¶
Unauthorized access to the subtree may disclose the SLO and SLE templates of the Network Slice Service.¶
* /ietf-network-slice-service/network-slice-services/slice-service¶
Unauthorized access to the subtree may disclose the operation status information of the Network Slice Service.¶
* /ietf-network-slice-service/network-slice-services/slice-service/service-tags¶
Unauthorized access to the subtree may disclose privacy data such as customer names of the Network Slice Service.¶
This document requests to register the following URI in the IETF XML registry [RFC3688]:¶
URI: urn:ietf:params:xml:ns:yang:ietf-network-slice-service
Registrant Contact: The IESG.
XML: N/A; the requested URI is an XML namespace.¶
This document requests to register the following YANG module in the YANG Module Names registry [RFC6020].¶
Name: ietf-network-slice-service
Namespace: urn:ietf:params:xml:ns:yang:ietf-network-slice-service
Prefix: ietf-nss
Maintained by IANA? N
Reference: RFC AAAA¶
The authors wish to thank Mohamed Boucadair, Kenichi Ogaki, Sergio Belotti, Qin Wu, Yao Zhao, Susan Hares, Eric Grey, Daniele Ceccarelli, Ryan Hoffman, Adrian Farrel, Aihua Guo, Italo Busi, and many others for their helpful comments and suggestions.¶
Thanks to Ladislav Lhotka for the YANG Doctors review.¶
The following authors contributed significantly to this document:¶
Luis M. Contreras
Telefonica
Spain
Email: [email protected]

Liuyan Han
China Mobile
Email: [email protected]¶
The NSSM defines the minimum attributes of Slice Services. In some scenarios, further extensions are required, e.g., the definition of AC-technology-specific attributes or of "isolation" SLE characteristics.¶
For AC-technology-specific attributes, the customer and provider may need to agree, through configuration, on technology parameter values such as the protocol types and protocol parameters between the PE and the CE. The following shows an example where BGP and static routing are augmented to the Network Slice Service model. The protocol types and definitions can reference [I-D.ietf-opsawg-teas-common-ac].¶
In some scenarios, for example, when multiple Slice Services share one or more ACs, independent AC services, defined in [I-D.ietf-opsawg-teas-attachment-circuit], can be used.¶
For "isolation" SLE characteristics, the following identities can be defined.¶
Figure 19 shows an example of two Network Slice Service instances where the SDPs are the customer-facing ports on the PE:¶
Network Slice 1 on SDP1, SDP11a, and SDP4, with an A2A connectivity type. This is an L3 Slice Service that uses the uniform low latency "slo-sle-template" policy between all SDPs. These SDPs will also have AC eBGP peering sessions with unmanaged CE elements (not shown) using an AC augmentation model such as the one shown above.¶
Network Slice 2 on SDP2, SDP11b, with A2A connectivity type. This is an L3 Slice Service that uses the uniform high bandwidth "slo-sle-template" policy between all SDPs.¶
Slice 1 uses the explicit match approach for mapping SDP traffic to a "connectivity-construct", while slice 2 uses the implicit approach. Both approaches are supported. The "slo-sle-templates" templates are known to the customer.¶
Note: These two slices both use service-tags of "L3". This "service-tag" is operator defined and has no specific meaning in the YANG model other than to give a hint to the NSC on the service expectation being L3 forwarding. In other examples, we may choose to eliminate it. The usage of this tag is arbitrary and depends on the needs of the operator and the NSC.¶
Figure 20 shows an example YANG JSON data for the body of the Network Slice Service instances request.¶
Figure 21 shows an example of two Network Slice Service instances where the SDPs are the customer-facing ports on the PE:¶
Network Slice 3 on SDP5 and SDP7a with P2P connectivity type. This is an L2 Slice Service that uses the uniform low-latency "slo-sle-template" policies between the SDPs. A connection-group-level "slo-policy" has been applied with a delay-based metric bound of 10 ms, which applies to both connectivity constructs.¶
Network Slice 4 on SDP6 and SDP7b, with P2P connectivity type. This is an L2 Slice Service that uses the high bandwidth "slo-sle-template" policies between the SDPs. Traffic from SDP6 to SDP7b requests a bandwidth of 1000 Mbps, while in the reverse direction, from SDP7b to SDP6, 5000 Mbps is requested.¶
Slice 3 uses the explicit match approach for mapping SDP traffic to a "connection-group", while slice 4 uses the implicit approach. Both approaches are supported.¶
Note: These two slices both use service-tags of "L2". This "service-tag" is operator defined and has no specific meaning in the YANG model other than to give a hint to the NSC on the service expectation being L2 forwarding. In other examples, we may choose to omit it. The usage of this tag is arbitrary and depends on the needs of the operator and the NSC.¶
Figure 22 shows an example YANG JSON data for the body of the Network Slice Service instances request.¶
The example shown in Figure 23 illustrates how a customer subscribes to the monitoring information of "slice3". The customer is interested in the operational and performance status of SDPs and connectivity constructs.¶
The example in Figure 24 shows a snapshot of YANG JSON data for the body of the operational and performance status of the Network Slice Service "slice3".¶
Figure 25 shows an example of one Network Slice Service instance where the SDPs are the customer-facing ports on the PE:¶
Figure 26 shows an example YANG JSON data for the body of the hub-spoke Network Slice Service instances request.¶
Figure 27 shows an example of a Network slice instance where the SDPs are the customer-facing ports on the PE:¶
Figure 28 shows an example YANG JSON data for the body of the Network Slice Service instances request.¶
Figure 29 shows an example of "service-match-criteria" with a combination of both DSCP and IP Address for the Slice Service traffic matching.¶
Figure 30 shows an example of a Network slice instance "slice-7" with four SDPs: SDP1, SDP2, SDP3 and SDP4 with A2A connectivity type. All SDPs are designated as customer-facing ports on the PE.¶
The service is realized using a single A2A connectivity construct. A low-bandwidth "slo-sle-template" policy is applied to SDP3 and SDP4, while a high-bandwidth "slo-sle-template" policy is applied to SDP1 and SDP2. Notice that the "slo-sle-template" at the connectivity construct level takes precedence over the one specified at the group level.¶
Figure 31 shows an example YANG JSON data for the body of the Network Slice Service instances request.¶
Figure 32 shows an example of one Network slice instance where the SDPs are located at the PE-facing ports on the CE:¶
Network Slice 8 with SDP31 on CE Device1, SDP33 (with two ACs) on Device 3 and SDP34 on Device 4, with an A2A connectivity type. This is an L3 Slice Service that uses the uniform low-latency slo-sle-template policy between all SDPs.¶
This example also introduces the optional attribute "sdp-ip". In this example, it could be a loopback on the device. How this "sdp-ip" is used by the NSC is out of scope here, but one example is that it is the management interface address of the device. The SDP and AC details are given from the perspective of the CE in this example. How the CE ACs are mapped to the PE ACs is up to the NSC implementation and out of scope for this example.¶
SDP31 ac-id=ac31, node-id=Device1, interface: GigabitEthernet0 vlan 100¶
SDP33 ac-id=ac33a, node-id=Device3, interface: GigabitEthernet0 vlan 101¶
SDP33 ac-id=ac33b, node-id=Device3, interface: GigabitEthernet1 vlan 201¶
SDP34 ac-id=ac34, node-id=Device4, interface: GigabitEthernet3 vlan 100¶
Figure 33 shows an example YANG JSON data for the body of the Network Slice Service instances request.¶
Figure 34 shows an example of one Network slice instance where the SDPs are located at the PE-facing ports on the CE.¶
In this example, it is assumed that the NSC already has circuit binding details between the CE and PE which were previously assigned (method is out-of-scope) or the NSC has mechanisms to determine this mapping. While the NSC capabilities are out-of-scope of this document, the NSC may use the CE device name, "sdp-id", "sdp-ip", "ac-id" or the "peer-sap-id" to complete this AC circuit binding.¶
This example introduces "peer-sap-id", which in this case is an operator-provided identifier that the slice requester can use so that the NSC can identify the Service Attachment Point (SAP) in an abstracted way. How the NSC uses the "peer-sap-id" is out of scope of this document, but a possible implementation would be that the NSC was previously provisioned with a "peer-sap-id" to PE device/interface/VLAN mapping table. Alternatively, the NSC can request this mapping from an external database.¶
Network Slice 9 with SDP31 on CPE Device1, SDP33 (with two ACs) on Device 3 and SDP34 on Device 4, with an A2A connectivity type. This is an L3 Slice Service that uses the uniform low-latency slo-sle-template policy between all SDPs.¶
SDP31 ac-id=ac31, node-id=Device1, peer-sap-id= foo.com-circuitID-12345¶
SDP33 ac-id=ac33a, node-id=Device3, peer-sap-id=foo.com-circuitID-67890¶
SDP33 ac-id=ac33b, node-id=Device3, peer-sap-id=foo.com-circuitID-54321ABC¶
SDP34 ac-id=ac34, node-id=Device4, peer-sap-id=foo.com-circuitID-9876¶
Figure 35 shows an example YANG JSON data for the body of the Network Slice Service instances request.¶
module: ietf-network-slice-service
  +--rw network-slice-services
     +--rw slo-sle-templates
     |  +--rw slo-sle-template* [id]
     |     +--rw id                string
     |     +--rw description?      string
     |     +--rw template-ref?     slice-template-ref
     |     +--rw slo-policy
     |     |  +--rw metric-bound* [metric-type]
     |     |  |  +--rw metric-type          identityref
     |     |  |  +--rw metric-unit          string
     |     |  |  +--rw value-description?   string
     |     |  |  +--rw percentile-value?    percentile
     |     |  |  +--rw bound?               uint64
     |     |  +--rw availability?   identityref
     |     |  +--rw mtu?            uint32
     |     +--rw sle-policy
     |        +--rw security*              identityref
     |        +--rw isolation*             identityref
     |        +--rw max-occupancy-level?   uint8
     |        +--rw path-constraints
     |           +--rw service-functions
     |           +--rw diversity
     |              +--rw diversity-type?   te-types:te-path-disjointness
     +--rw slice-service* [id]
        +--rw id               string
        +--rw description?     string
        +--rw service-tags
        |  +--rw tag-type* [tag-type]
        |     +--rw tag-type    identityref
        |     +--rw value*      string
        +--rw (slo-sle-policy)?
        |  +--:(standard)
        |  |  +--rw slo-sle-template?   slice-template-ref
        |  +--:(custom)
        |     +--rw service-slo-sle-policy
        |        +--rw description?   string
        |        +--rw slo-policy
        |        |  +--rw metric-bound* [metric-type]
        |        |  |  +--rw metric-type          identityref
        |        |  |  +--rw metric-unit          string
        |        |  |  +--rw value-description?   string
        |        |  |  +--rw percentile-value?    percentile
        |        |  |  +--rw bound?               uint64
        |        |  +--rw availability?   identityref
        |        |  +--rw mtu?            uint32
        |        +--rw sle-policy
        |           +--rw security*              identityref
        |           +--rw isolation*             identityref
        |           +--rw max-occupancy-level?   uint8
        |           +--rw path-constraints
        |              +--rw service-functions
        |              +--rw diversity
        |                 +--rw diversity-type?   te-types:te-path-disjointness
        +--rw compute-only?    empty
        +--rw status
        |  +--rw admin-status
        |  |  +--rw status?        identityref
        |  |  +--ro last-change?   yang:date-and-time
        |  +--ro oper-status
        |     +--ro status?        identityref
        |     +--ro last-change?   yang:date-and-time
        +--rw sdps
        |  +--rw sdp* [id]
        |     +--rw id                        string
        |     +--rw description?              string
        |     +--rw geo-location
        |     |  +--rw reference-frame
        |     |  |  +--rw alternate-system?    string {alternate-systems}?
        |     |  |  +--rw astronomical-body?   string
        |     |  |  +--rw geodetic-system
        |     |  |     +--rw geodetic-datum?    string
        |     |  |     +--rw coord-accuracy?    decimal64
        |     |  |     +--rw height-accuracy?   decimal64
        |     |  +--rw (location)?
        |     |  |  +--:(ellipsoid)
        |     |  |  |  +--rw latitude?    decimal64
        |     |  |  |  +--rw longitude?   decimal64
        |     |  |  |  +--rw height?      decimal64
        |     |  |  +--:(cartesian)
        |     |  |     +--rw x?   decimal64
        |     |  |     +--rw y?   decimal64
        |     |  |     +--rw z?   decimal64
        |     |  +--rw velocity
        |     |  |  +--rw v-north?   decimal64
        |     |  |  +--rw v-east?    decimal64
        |     |  |  +--rw v-up?      decimal64
        |     |  +--rw timestamp?     yang:date-and-time
        |     |  +--rw valid-until?   yang:date-and-time
        |     +--rw node-id?                  string
        |     +--rw sdp-ip-address*           inet:ip-address
        |     +--rw tp-ref?                   leafref
        |     +--rw service-match-criteria
        |     |  +--rw match-criterion* [index]
        |     |     +--rw index                               uint32
        |     |     +--rw match-type* [type]
        |     |     |  +--rw type     identityref
        |     |     |  +--rw value*   string
        |     |     +--rw target-connection-group-id          leafref
        |     |     +--rw connection-group-sdp-role?          identityref
        |     |     +--rw target-connectivity-construct-id?   leafref
        |     +--rw incoming-qos-policy
        |     |  +--rw qos-policy-name?   string
        |     |  +--rw rate-limits
        |     |     +--rw cir?      uint64
        |     |     +--rw cbs?      uint64
        |     |     +--rw eir?      uint64
        |     |     +--rw ebs?      uint64
        |     |     +--rw pir?      uint64
        |     |     +--rw pbs?      uint64
        |     |     +--rw classes
        |     |        +--rw cos* [cos-id]
        |     |           +--rw cos-id    uint8
        |     |           +--rw cir?      uint64
        |     |           +--rw cbs?      uint64
        |     |           +--rw eir?      uint64
        |     |           +--rw ebs?      uint64
        |     |           +--rw pir?      uint64
        |     |           +--rw pbs?      uint64
        |     +--rw outgoing-qos-policy
        |     |  +--rw qos-policy-name?   string
        |     |  +--rw rate-limits
        |     |     +--rw cir?      uint64
        |     |     +--rw cbs?      uint64
        |     |     +--rw eir?      uint64
        |     |     +--rw ebs?      uint64
        |     |     +--rw pir?      uint64
        |     |     +--rw pbs?      uint64
        |     |     +--rw classes
        |     |        +--rw cos* [cos-id]
        |     |           +--rw cos-id    uint8
        |     |           +--rw cir?      uint64
        |     |           +--rw cbs?      uint64
        |     |           +--rw eir?      uint64
        |     |           +--rw ebs?      uint64
        |     |           +--rw pir?      uint64
        |     |           +--rw pbs?      uint64
        |     +--rw sdp-peering
        |     |  +--rw peer-sap-id*   string
        |     |  +--rw protocols
        |     +--rw ac-svc-ref*               ac-svc:attachment-circuit-reference
        |     +--rw ce-mode?                  boolean
        |     +--rw attachment-circuits
        |     |  +--rw attachment-circuit* [id]
        |     |     +--rw id                       string
        |     |     +--rw description?             string
        |     |     +--rw ac-svc-ref?              ac-svc:attachment-circuit-reference
        |     |     +--rw ac-node-id?              string
        |     |     +--rw ac-tp-id?                string
        |     |     +--rw ac-ipv4-address?         inet:ipv4-address
        |     |     +--rw ac-ipv4-prefix-length?   uint8
        |     |     +--rw ac-ipv6-address?         inet:ipv6-address
        |     |     +--rw ac-ipv6-prefix-length?   uint8
        |     |     +--rw mtu?                     uint32
        |     |     +--rw ac-tags
        |     |     |  +--rw ac-tag* [tag-type]
        |     |     |     +--rw tag-type    identityref
        |     |     |     +--rw value*      string
        |     |     +--rw incoming-qos-policy
        |     |     |  +--rw qos-policy-name?   string
        |     |     |  +--rw rate-limits
        |     |     |     +--rw cir?      uint64
        |     |     |     +--rw cbs?      uint64
        |     |     |     +--rw eir?      uint64
        |     |     |     +--rw ebs?      uint64
        |     |     |     +--rw pir?      uint64
        |     |     |     +--rw pbs?      uint64
        |     |     |     +--rw classes
        |     |     |        +--rw cos* [cos-id]
        |     |     |           +--rw cos-id    uint8
        |     |     |           +--rw cir?      uint64
        |     |     |           +--rw cbs?      uint64
        |     |     |           +--rw eir?      uint64
        |     |     |           +--rw ebs?      uint64
        |     |     |           +--rw pir?      uint64
        |     |     |           +--rw pbs?      uint64
        |     |     +--rw outgoing-qos-policy
        |     |     |  +--rw qos-policy-name?   string
        |     |     |  +--rw rate-limits
        |     |     |     +--rw cir?      uint64
        |     |     |     +--rw cbs?      uint64
        |     |     |     +--rw eir?      uint64
        |     |     |     +--rw ebs?      uint64
        |     |     |     +--rw pir?      uint64
        |     |     |     +--rw pbs?      uint64
        |     |     |     +--rw classes
        |     |     |        +--rw cos* [cos-id]
        |     |     |           +--rw cos-id    uint8
        |     |     |           +--rw cir?      uint64
        |     |     |           +--rw cbs?      uint64
        |     |     |           +--rw eir?      uint64
        |     |     |           +--rw ebs?      uint64
        |     |     |           +--rw pir?      uint64
        |     |     |           +--rw pbs?      uint64
        |     |     +--rw sdp-peering
        |     |     |  +--rw peer-sap-id?   string
        |     |     |  +--rw protocols
        |     |     +--rw status
        |     |        +--rw admin-status
        |     |        |  +--rw status?        identityref
        |     |        |  +--ro last-change?   yang:date-and-time
        |     |        +--ro oper-status
        |     |           +--ro status?        identityref
        |     |           +--ro last-change?   yang:date-and-time
        |     +--rw status
        |     |  +--rw admin-status
        |     |  |  +--rw status?        identityref
        |     |  |  +--ro last-change?   yang:date-and-time
        |     |  +--ro oper-status
        |     |     +--ro status?        identityref
        |     |     +--ro last-change?   yang:date-and-time
        |     +--ro sdp-monitoring
        |        +--ro incoming-bw-value?     yang:gauge64
        |        +--ro incoming-bw-percent?   percentage
        |        +--ro outgoing-bw-value?     yang:gauge64
        |        +--ro outgoing-bw-percent?   percentage
        +--rw connection-groups
        |  +--rw connection-group* [id]
        |     +--rw id                                 string
        |     +--rw connectivity-type?                 identityref
        |     +--rw (slo-sle-policy)?
        |     |  +--:(standard)
        |     |  |  +--rw slo-sle-template?   slice-template-ref
        |     |  +--:(custom)
        |     |     +--rw service-slo-sle-policy
        |     |        +--rw description?   string
        |     |        +--rw slo-policy
        |     |        |  +--rw metric-bound* [metric-type]
        |     |        |  |  +--rw metric-type          identityref
        |     |        |  |  +--rw metric-unit          string
        |     |        |  |  +--rw value-description?   string
        |     |        |  |  +--rw percentile-value?    percentile
        |     |        |  |  +--rw bound?               uint64
        |     |        |  +--rw availability?   identityref
        |     |        |  +--rw mtu?            uint32
        |     |        +--rw sle-policy
        |     |           +--rw security*              identityref
        |     |           +--rw isolation*             identityref
        |     |           +--rw max-occupancy-level?   uint8
        |     |           +--rw path-constraints
        |     |              +--rw service-functions
        |     |              +--rw diversity
        |     |                 +--rw diversity-type?   te-types:te-path-disjointness
        |     +--rw service-slo-sle-policy-override?   identityref
        |     +--rw connectivity-construct* [id]
        |     |  +--rw id                                 string
        |     |  +--rw (type)?
        |     |  |  +--:(p2p)
        |     |  |  |  +--rw p2p-sender-sdp?     -> ../../../../sdps/sdp/id
        |     |  |  |  +--rw p2p-receiver-sdp?   -> ../../../../sdps/sdp/id
        |     |  |  +--:(p2mp)
        |     |  |  |  +--rw p2mp-sender-sdp?     -> ../../../../sdps/sdp/id
        |     |  |  |  +--rw p2mp-receiver-sdp*   -> ../../../../sdps/sdp/id
        |     |  |  +--:(a2a)
        |     |  |     +--rw a2a-sdp* [sdp-id]
        |     |  |        +--rw sdp-id    -> ../../../../../sdps/sdp/id
        |     |  |        +--rw (slo-sle-policy)?
        |     |  |           +--:(standard)
        |     |  |           |  +--rw slo-sle-template?   slice-template-ref
        |     |  |           +--:(custom)
        |     |  |              +--rw service-slo-sle-policy
        |     |  |                 +--rw description?   string
        |     |  |                 +--rw slo-policy
        |     |  |                 |  +--rw metric-bound* [metric-type]
        |     |  |                 |  |  +--rw metric-type          identityref
        |     |  |                 |  |  +--rw metric-unit          string
        |     |  |                 |  |  +--rw value-description?   string
        |     |  |                 |  |  +--rw percentile-value?    percentile
        |     |  |                 |  |  +--rw bound?               uint64
        |     |  |                 |  +--rw availability?   identityref
        |     |  |                 |  +--rw mtu?            uint32
        |     |  |                 +--rw sle-policy
        |     |  |                    +--rw security*              identityref
        |     |  |                    +--rw isolation*             identityref
        |     |  |                    +--rw max-occupancy-level?   uint8
        |     |  |                    +--rw path-constraints
        |     |  |                       +--rw service-functions
        |     |  |                       +--rw diversity
        |     |  |                          +--rw diversity-type?   te-types:te-path-disjointness
        |     |  +--rw (slo-sle-policy)?
        |     |  |  +--:(standard)
        |     |  |  |  +--rw slo-sle-template?   slice-template-ref
        |     |  |  +--:(custom)
        |     |  |     +--rw service-slo-sle-policy
        |     |  |        +--rw description?   string
        |     |  |        +--rw slo-policy
        |     |  |        |  +--rw metric-bound* [metric-type]
        |     |  |        |  |  +--rw metric-type          identityref
        |     |  |        |  |  +--rw metric-unit          string
        |     |  |        |  |  +--rw value-description?   string
        |     |  |        |  |  +--rw percentile-value?    percentile
        |     |  |        |  |  +--rw bound?               uint64
        |     |  |        |  +--rw availability?   identityref
        |     |  |        |  +--rw mtu?            uint32
        |     |  |        +--rw sle-policy
        |     |  |           +--rw security*              identityref
        |     |  |           +--rw isolation*             identityref
        |     |  |           +--rw max-occupancy-level?   uint8
        |     |  |           +--rw path-constraints
        |     |  |              +--rw service-functions
        |     |  |              +--rw diversity
        |     |  |                 +--rw diversity-type?   te-types:te-path-disjointness
        |     |  +--rw service-slo-sle-policy-override?   identityref
        |     |  +--rw status
        |     |  |  +--rw admin-status
        |     |  |  |  +--rw status?        identityref
        |     |  |  |  +--ro last-change?   yang:date-and-time
        |     |  |  +--ro oper-status
        |     |  |     +--ro status?        identityref
        |     |  |     +--ro last-change?   yang:date-and-time
        |     |  +--ro connectivity-construct-monitoring
        |     |     +--ro one-way-min-delay?         yang:gauge64
        |     |     +--ro one-way-max-delay?         yang:gauge64
        |     |     +--ro one-way-delay-variation?   yang:gauge64
        |     |     +--ro one-way-packet-loss?       decimal64
        |     |     +--ro two-way-min-delay?         yang:gauge64
        |     |     +--ro two-way-max-delay?         yang:gauge64
        |     |     +--ro two-way-delay-variation?   yang:gauge64
        |     |     +--ro two-way-packet-loss?       decimal64
        |     +--ro connection-group-monitoring
        |        +--ro one-way-min-delay?         yang:gauge64
        |        +--ro one-way-max-delay?         yang:gauge64
        |        +--ro one-way-delay-variation?   yang:gauge64
        |        +--ro one-way-packet-loss?       decimal64
        |        +--ro two-way-min-delay?         yang:gauge64
        |        +--ro two-way-max-delay?         yang:gauge64
        |        +--ro two-way-delay-variation?   yang:gauge64
        |        +--ro two-way-packet-loss?       decimal64
        +--rw custom-topology
           +--rw network-ref?   -> /nw:networks/network/network-id¶
The difference between the ACTN VN model and the Network Slice Service requirements is that the Network Slice Service interface is a technology-agnostic interface, whereas the VN model is bound to TE Topologies. The realization of a Network Slice does not necessarily require the underlying network to support TE technology.¶
The ACTN Virtual Network (VN) model introduced in [I-D.ietf-teas-actn-vn-yang] is the abstract customer view of the TE network. Its YANG structure includes four components:¶
VN: A Virtual Network (VN) is a network provided by a service provider to a customer for use; two types of VN have been defined. The Type 1 VN can be seen as a set of edge-to-edge abstract links. Each link is an abstraction of the underlying network, which can encompass edge points of the customer's network, access links, intra-domain paths, and inter-domain links.¶
AP: An Access Point (AP) is a logical identifier used to identify the access link, which is shared between the customer and the IETF-scoped network.¶
VN-AP: A VN-AP is a logical binding between an AP and a given VN.¶
VN-member: A VN-member is an abstract edge-to-edge link between any two APs or VN-APs. Each link is formed as an E2E tunnel across the underlying networks.¶
The Type 1 VN can be used to describe the Network Slice Service connection requirements. However, the Network Slice SLOs and the Network Slice SDPs are not clearly defined there, and there is no direct equivalent. For example, the SLO requirements of a VN are defined through the TE Topologies YANG model, but that model is tied to a specific implementation technology. Also, the VN-AP does not define "service-match-criteria" to specify that a specific SDP belongs to a Network Slice Service.¶