Workgroup: Network Working Group
Internet-Draft: draft-ietf-rats-posture-assessment-01
Published: October 2024
Intended Status: Standards Track
Expires: 24 April 2025
Authors:
   K. M. Moriarty, Transforming Information Security LLC
   M. Wiseman, Beyond Identity
   A.J. Stein, NIST
   C. Nelogal, Dell

Remote Posture Assessment for Systems, Containers, and Applications at Scale

Abstract

This document establishes an architectural pattern whereby a remote attestation could be issued for a complete set of benchmarks or controls that are defined and grouped by an external entity, eliminating the need to send over individual attestations for each item within a benchmark or control framework. This document establishes a pattern to list sets of benchmarks and controls within CWT and JWT formats for use as an Entity Attestation Token (EAT). While the discussion below pertains mostly to TPM, other Roots of Trust such as TCG DICE, and non-TCG defined components will also be included.

About This Document

This note is to be removed before publishing as an RFC.

The latest revision of this draft can be found at https://kme.github.io/draft-moriarty-attestationsets/draft-moriarty-rats-posture-assessment.html. Status information for this document may be found at https://datatracker.ietf.org/doc/draft-moriarty-rats-posture-assessment/.

Discussion of this document takes place on the Remote ATtestation ProcedureS (rats) Working Group mailing list (mailto:[email protected]), which is archived at https://mailarchive.ietf.org/arch/browse/rats/. Subscribe at https://www.ietf.org/mailman/listinfo/rats/.

Source for this draft and an issue tracker can be found at https://github.com/kme/draft-moriarty-attestationsets.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on 24 April 2025.

1. Introduction

Posture assessment has long been desired but has been difficult to achieve due to the complexity of customization requirements at each organization. By using policy and measurement sets that may be offered at various assurance levels, local assessment of evidence can be performed to continuously assess compliance.

For example, the Trusted Computing Group's Trusted Platform Module (TPM) format and assessment method can provide this kind of compliance. This and other methods employ a secured log for transparency on the results of the assessed evidence against expected values.

In order to support continuous monitoring of posture assessment and integrity in an enterprise or large data center, local assessment and remediation are useful to reduce load on the network and remote resources. This is already done today in measured boot mechanisms.

It is useful to be able to share these results in order to gain a big picture view of the governance, risk, and compliance posture for a network.

As such, communicating a summary result as evidence, together with a link to the supporting logs, in a remote attestation defined in an Entity Attestation Token (EAT) profile [I-D.ietf-rats-eat] provides a way to accomplish that goal.

This level of integration, which includes the ability to remediate, makes posture assessment through remote attestation achievable for organizations of all sizes. This is enabled through integration with existing toolsets and systems, built as an intrinsic capability.

The measurement and policy grouping results summarized in an EAT profile may be provided by the vendor or by a neutral third party to enable ease of use and consistent implementations.

The local system or server host performs the assessment of posture and remediation. This provides simpler options to enable posture assessment at selected levels by organizations without the need to have in-house expertise.

The measurement and policy sets may also be customized, but customization is not necessary to achieve posture assessment against predefined options.

This document describes a method to use existing remote attestation formats and protocols. The method described allows for defined profiles of policies, benchmarks, and measurements for specific assurance levels. This provides transparency on posture assessment results summarized with remote attestations.

2. Conventions and Definitions

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.

3. Posture Assessment Scenarios

By way of example, the Center for Internet Security (CIS) hosts recommended configuration settings to secure operating systems, applications, and devices in the CIS Benchmarks, developed with industry experts. Attestations aligned to the CIS Benchmarks, or to another configuration guide such as a DISA STIG, could be used to assert that the configuration meets expectations. This has already been done for multiple platforms to demonstrate assurance for firmware according to NIST SP 800-193, Platform Firmware Resiliency Guidelines [FIRMWARE]. To scale remote attestation, a single attestation that a set of benchmarks or policies has been met, with a link to the verification logs from the local assessments, is the evidence that may be sent to the Verifier and then the Relying Party. On traditional servers, assurance to NIST SP 800-193 is provable through attestation from a Root of Trust (RoT), using the Trusted Computing Group (TCG) Trusted Platform Module (TPM) chip and attestation formats. However, that verification remains local: one only knows the policies and measurements have been met because other functions that rely on the assurance are running.

At boot, collected evidence is verified against a set of "golden policies" to confirm that policy and measurement expectations meet expected values. Device identity and measurements can also be attested at runtime. The attestations on evidence (e.g., a hash of a boot element) and the verification of those attestations are typically contained within a system and are limited to the control plane for management. The policy and measurement sets used for comparison are protected to assure the result of the attestation verification process for each boot element. Event logs and PCR values may be exposed to provide transparency into the verified attestations. The remote attestation defined in this document provides, as evidence, a summary of a local assessment of posture for managed systems and across the various layers (operating system, application, containers) of each of these systems in a managed environment. The Relying Party uses the verified evidence to understand the posture of interconnected operating systems, applications, and systems as communicated in the summary results.

There is a balance between exposure and the evidence needed to assess posture when providing assurance of controls and system state. Currently, if using the TPM, logs and TPM PCR values may be passed to provide assurance that attestation evidence has been verified as meeting the set requirements. Providing the set of evidence as assurance against a policy set can be accomplished with a remote attestation format such as the Entity Attestation Token (EAT) [I-D.ietf-rats-eat] and a RESTful interface such as ROLIE [RFC8322] or Redfish [REDFISH]. Policy definition blocks may be scoped to control measurement sets, where the EAT profile asserts compliance to the specified policy or measurement block and may include claims with the log and PCR value evidence. Measurement and policy sets referenced in an EAT profile may be published and maintained by separate entities (e.g., CIS Benchmarks, DISA STIGs). The policy and measurement sets should be maintained separately even if associated with the same benchmark or control set. This avoids the need to move verification of individual policies and measurements to a remote system; they are performed locally to allow more immediate remediation as well as other functions.

Examples of measurement and policy sets that could be defined in EAT profiles include, but are not limited to:

Scale, ease of use, full automation, and consistency for customer consumption of a remote attestation function or service are essential toward the goal of consistently securing systems against known threats and vulnerabilities. Mitigations may be baked into policy. Claim sets of measurements and policies verified as meeting, or not meeting, Endorsed values [I-D.ietf-rats-eat] are conveyed in an Entity Attestation Token and made available through a RESTful interface, in aggregate for the managed systems, as evidence for the remote attestation. The Measurement or Policy Set may be registered in the IANA registry created in this document (Section 9), detailing the specific configuration policies and measurements required to adhere, or to prove compliance, to the associated document, thereby enabling interoperability. Levels (e.g., high, medium, low, 1, 2, 3), or vendor-specific instances of the policy defined in the code required to verify the policy and measurements, would be registered using a name for the policy set; that name would also be used in the reporting EAT, which includes the MPS along with other artifacts to prove compliance.
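
A non-normative sketch of such an aggregate evidence record follows. The Python fragment below is illustrative only: the claim names follow the proposed mps, lem, and pcr claims (Section 4), while the MPS name, host identifier, and URIs are hypothetical placeholders, and the actual encoding would be governed by the EAT profile and the RESTful interface (e.g., ROLIE or Redfish) in use.

   import json

   # Illustrative summary evidence for one managed system: a single record
   # asserting that a registered policy/measurement set was verified
   # locally, with links to the supporting logs and PCR values rather than
   # individual attestations for each item.
   summary_evidence = {
       "mps": "Ubuntu-CIS-L1",                         # hypothetical registered MPS name
       "lem": "https://attester.example/logs/host42",  # hypothetical log evidence URI
       "pcr": "https://attester.example/pcrs/host42",  # hypothetical TPM PCR values URI
   }

   # Aggregate records across the managed systems for a Verifier or GRC
   # system to retrieve from a RESTful collection endpoint.
   aggregate = {"systems": {"host42": summary_evidence}}
   print(json.dumps(aggregate, indent=2))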

4. Policy and Measurement Set Definitions

This document defines EAT claims in the JWT [RFC7519] and CWT [RFC8392] registries to provide attestation to a set of verified claims within a defined grouping. Trustworthiness is conveyed both for the original verified evidence and for the attestation on the grouping. The claims provide the additional information needed for an EAT to convey compliance to a defined policy or measurement set to a system or application collecting evidence on policy and measurement assurance, for instance a Governance, Risk, and Compliance (GRC) system.

Table 1

   +-------+----------------------------+---------------------------------+--------+
   | Claim | Long Name                  | Claim Description               | Format |
   +-------+----------------------------+---------------------------------+--------+
   | mps   | Measurement or Policy Set  | Name for the MPS                |        |
   | lem   | Log Evidence of MPS        | Log File or URI                 |        |
   | pcr   | TPM PCR Values             | URI                             |        |
   | fma   | Format of MPS Attestations | Format of included attestations |        |
   | hsh   | Hash Value/Message Digest  | Hash value of claim-set         |        |
   +-------+----------------------------+---------------------------------+--------+
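
As a non-normative sketch, the claims above could be carried in a JWT-format EAT roughly as follows. This assumes the PyJWT library and uses a symmetric HS256 key purely to keep the example self-contained; a real Attester would sign with its attestation key under the applicable EAT profile, and all claim values shown (issuer, MPS name, URIs, digest) are hypothetical.

   import jwt  # PyJWT, assumed available for this sketch

   claims = {
       "iss": "https://attester.example",              # hypothetical issuer
       "mps": "Ubuntu-CIS-L1",                         # Measurement or Policy Set name
       "lem": "https://attester.example/logs/host42",  # log evidence (URI)
       "pcr": "https://attester.example/pcrs/host42",  # TPM PCR values (URI)
       "fma": "EAT/JWT",                               # format of included attestations (illustrative)
       "hsh": "<sha-256 over the claim set>",          # placeholder message digest
   }

   # HS256 with a shared secret is used only for illustration; it is not a
   # recommendation for attestation signing.
   token = jwt.encode(claims, "not-a-real-key", algorithm="HS256")
   print(token)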

5. Supportability and Re-Attestation

The remote attestation framework shall include provisions for a Verifier Owner and Relying Party Owner to declare an Appraisal Policy for Attestation Results and Evidence that allows for modification of the Target Environment (e.g. a product, system, or service).

Over its lifecycle, the Target Environment may experience modification due to maintenance, failures, upgrades, expansion, moves, etc.

The Relying Party Owner managing the Target Environment (e.g., the customer using the product) can choose to:

In the case of Re-Attestation:

6. Configuration Sets

In some cases, it may be difficult to attest to configuration settings for the initial or subsequent attestation and verification processes. An expected hash value for the configuration settings can be used for comparison against the attested configuration set. In this case, the creator of the attestation verification measurements would define a set of values over which a message digest would be created and then signed by the Attester. The expected measurements would include the expected hash value for comparison.
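
A minimal sketch of this comparison, assuming the configuration set is canonicalized as sorted JSON before hashing, is shown below. The setting names and the expected digest are hypothetical; the creator of the verification measurements would publish (and sign) the expected value.

   import hashlib
   import json

   # Hypothetical configuration settings covered by the configuration set.
   configuration_set = {
       "ssh_permit_root_login": "no",
       "password_max_age_days": 90,
       "auditd_enabled": True,
   }

   # Canonicalize deterministically so the Attester and Verifier hash
   # identical bytes.
   canonical = json.dumps(configuration_set, sort_keys=True,
                          separators=(",", ":")).encode()
   digest = hashlib.sha256(canonical).hexdigest()

   expected_digest = "<published expected hash value>"  # from the expected measurements
   print("match" if digest == expected_digest else "configuration drift detected")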

The configuration set could be the full attestation set to a Benchmark or a defined subset. These configuration sets can be registered for general use to reduce the need to replicate the policy and measurement assessments by others aiming to assure at the same level for a benchmark or hardening guide.

This document creates an IANA registry for this purpose, creating consistency between automated policy and measurement set levels and the systems used to collect and report aggregate views for an organization across systems and applications, such as a GRC platform.

7. Remediation

If the policy and configuration settings or measurements attested do not meet expected values, remediation is desirable. Automated remediation performed in alignment with zero trust architecture principles would require that the remediation be performed prior to any relying component executing. The relying component would verify before continuing in a zero trust architecture.

Ideally, remediation would occur on the system as part of the process of attesting to a set of claims, similar to how attestation is performed for firmware in the boot process. If automated remediation is not possible, an alert should be generated to provide notification of the variance from expected values.
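
The ordering described above, verify and remediate before any relying component executes, can be sketched as follows. The assess, remediate, and alert helpers are hypothetical placeholders for local tooling; this is not a prescribed interface.

   # Hedged sketch of the verify-then-remediate flow under the assumptions
   # stated above.
   def ensure_compliant(assess, remediate, alert) -> bool:
       result = assess()               # local assessment against the policy/measurement set
       if result.compliant:
           return True                 # relying components may proceed
       if result.remediable:
           remediate(result.findings)  # remediate before anything relies on the state
           return assess().compliant   # re-assess after remediation
       alert(result.findings)          # no automated fix: notify of the variance
       return False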

8. Security Considerations

This document establishes a pattern to list sets of benchmarks and controls within CWT and JWT formats. The contents of the benchmarks and controls are out of scope for this document. This establishes an architectural pattern whereby a remote attestation could be issued for a complete set of benchmarks or controls as defined and grouped by external entities, eliminating the need to send individual attestations for each item within a benchmark or control framework. This document does not add security considerations beyond those described in the EAT, JWT, and CWT specifications.

9. IANA Considerations

Draft section - authors know more work is needed to properly define the registry and claims. This section is here now to assist in understanding the concepts.

This document requests the creation of a Measurement and Policy Set (MPS) registry. The MPS registry will contain the names of benchmarks, policy sets, DISA STIGs, controls, or other groupings, as a policy and measurement set, that MAY correlate to standards documents containing assurance guidelines, compliance requirements, or other defined claim sets for verification of posture assessment to that MPS. The MPS registry will include the policy definition for specific levels of MPS assurance to enable interoperability between assertions of compliance (or lack thereof) and reporting systems.

Table 2

   +---------------+-----------------------------------------+----------------------------+
   | MPS Name      | MPS Description                         | File with MPS definition   |
   +---------------+-----------------------------------------+----------------------------+
   | Ubuntu-CIS-L1 | Ubuntu CIS Benchmark, level 1 assurance | http:// /Ubuntu-CIS-L1.txt |
   +---------------+-----------------------------------------+----------------------------+

The MPS name includes versions or level information, allowing for distinct policy or measurement sets and definitions of those sets (including the supporting formats used to write the definitions).

9.1. Additions to the JWT and CWT registries requested

This document requests registration of the following JWT claims in the JSON Web Token (JWT) claims registry, per the registration requirements defined in RFC7519.

Table 3

   +-------+----------------------------+---------------------------------+
   | Claim | Long Name                  | Claim Description               |
   +-------+----------------------------+---------------------------------+
   | MPS   | Measurement or Policy Set  | Name for the MPS                |
   | LEM   | Log Evidence of MPS        | Log File or URI                 |
   | PCR   | TPM PCR Values             | URI                             |
   | FMA   | Format of MPS Attestations | Format of included attestations |
   | HSH   | Hash Value/Message Digest  | Hash value of claim-set         |
   +-------+----------------------------+---------------------------------+

9.2. MPS (Measurement or Policy Set) Claim

The MPS (Measurement or Policy Set) claim identifies the policy and measurement set being reported. The MPS MAY be registered in the MPS IANA registry. The MPS may be specified at specific levels of assurance for hardening or loosening guides or benchmarks to provide interoperability in reporting. The processing of this claim is generally application specific. The MPS value is a case-sensitive string containing a StringOrURI value. Use of this claim is OPTIONAL.
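
For illustration only, the two forms of the MPS value, a registered name or a URI, might look like the following; both values are hypothetical.

   # The MPS claim is a case-sensitive StringOrURI; either form may be used.
   mps_as_registered_name = "Ubuntu-CIS-L1"              # name from the MPS registry
   mps_as_uri = "https://example.org/mps/Ubuntu-CIS-L1"  # URI identifying the MPS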

This document requests registration of the following CWT claims in the CBOR Web Token (CWT) claims registry, per the registration requirements defined in RFC8392.

Table 4

   +-------+----------------------------+---------------------------------+----------------+
   | Claim | Long Name                  | Claim Description               | JWT Claim Name |
   +-------+----------------------------+---------------------------------+----------------+
   | MPS   | Measurement or Policy Set  | Name for the MPS                | MPS            |
   | LEM   | Log Evidence of MPS        | Log File or URI                 | LEM            |
   | PCR   | TPM PCR Values             | URI                             | PCR            |
   | FMA   | Format of MPS Attestations | Format of included attestations | FMA            |
   | HSH   | Hash Value/Message Digest  | Hash value of claim-set         | HSH            |
   +-------+----------------------------+---------------------------------+----------------+

10. References

10.1. Normative References

[I-D.ietf-rats-eat]
Lundblade, L., Mandyam, G., O'Donoghue, J., and C. Wallace, "The Entity Attestation Token (EAT)", Work in Progress, Internet-Draft, draft-ietf-rats-eat-31, <https://datatracker.ietf.org/doc/html/draft-ietf-rats-eat-31>.
[RFC2119]
Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997, <https://www.rfc-editor.org/rfc/rfc2119>.
[RFC7519]
Jones, M., Bradley, J., and N. Sakimura, "JSON Web Token (JWT)", RFC 7519, DOI 10.17487/RFC7519, May 2015, <https://www.rfc-editor.org/rfc/rfc7519>.
[RFC8174]
Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, May 2017, <https://www.rfc-editor.org/rfc/rfc8174>.
[RFC8392]
Jones, M., Wahlstroem, E., Erdtman, S., and H. Tschofenig, "CBOR Web Token (CWT)", RFC 8392, DOI 10.17487/RFC8392, May 2018, <https://www.rfc-editor.org/rfc/rfc8392>.
[RFC9334]
Birkholz, H., Thaler, D., Richardson, M., Smith, N., and W. Pan, "Remote ATtestation procedureS (RATS) Architecture", RFC 9334, DOI 10.17487/RFC9334, January 2023, <https://www.rfc-editor.org/rfc/rfc9334>.

10.2. Informative References

[FIRMWARE]
Regenscheid, A., "Platform Firmware Resiliency Guidelines", NIST SP 800-193, National Institute of Standards and Technology, DOI 10.6028/nist.sp.800-193, May 2018, <https://doi.org/10.6028/nist.sp.800-193>.
[REDFISH]
"Redfish Specification Version 1.20.0", n.d., <https://www.dmtf.org/sites/default/files/standards/documents/DSP0266_1.20.0.pdf>.
[RFC8322]
Field, J., Banghart, S., and D. Waltermire, "Resource-Oriented Lightweight Information Exchange (ROLIE)", RFC 8322, DOI 10.17487/RFC8322, February 2018, <https://www.rfc-editor.org/rfc/rfc8322>.

Contributors

Thank you to the reviewers and contributors who helped to improve this document. Thank you to Nick Grobelney, Dell Technologies, for your review and contribution to separate out the policy and measurement sets. Thank you, Samant Kakarla and Huijun Xie from Dell Technologies, for your detailed review and corrections on boot process details. Section 3 was also contributed to by Rudy Bauer from Dell, and an author will be added in the next revision. The IANA section was added in version 7 by Kathleen Moriarty, expanding the claims registered and adding a proposed registry to define policy and measurement sets. Thank you to Henk Birkholz for his review and edits. Thanks to Thomas Fossati, Michael Richardson, and Eric Voit for their detailed reviews on the mailing list. Thank you to A.J. Stein for converting the XMLMind workflow to Markdown and GitHub, for editorial contributions, and for restructuring the document.

Authors' Addresses

Kathleen M. Moriarty
Transforming Information Security LLC
MA
United States of America
Monty Wiseman
Beyond Identity
3 Park Avenue
NY, NY 10016
United States of America
A.J. Stein
National Institute of Standards and Technology
100 Bureau Drive
Gaithersburg, MD 20899
United States of America
Chandra Nelogal
Dell Technologies
176 South Street
MA 01748
United States of America