A system is sometimes called ‘cryptographically secure’: cryptography benefits from a methodology for rigorous analysis. But if, in isolation, cryptography is just a ‘means of transferring trust from where it resides to where it is needed’ (Anderson, Security Engineering), how do we find or generate sources of trust?
Attestation is a general framework for this question of ‘just because something was signed, doesn’t mean it’s trusted’. It does not solve the problem of the origination of trust, but it makes explicit a mechanism, via measurement, by which statements can be introduced into cryptographic systems, towards definite security goals. What was measured, what did the measuring, and under what context all become considerations. A single recent FaceID unlock ‘measurement’ may be enough for a banking app to authorize a small funds transfer, while multiple such biometric unlocks spread out over time may be needed to protect against a stolen unlocked phone.
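As a sketch of how a relying party might weigh such measurements — the Measurement record, the ‘faceid_unlock’ kind, and the thresholds below are all hypothetical, chosen purely for illustration:

```go
package policy

import "time"

// Measurement is a hypothetical record of a single biometric unlock,
// as might be reported by an attester on the device.
type Measurement struct {
	Kind string    // e.g. "faceid_unlock"
	At   time.Time // when the unlock happened
}

// AllowSmallTransfer: one sufficiently recent unlock is enough.
func AllowSmallTransfer(ms []Measurement, now time.Time) bool {
	for _, m := range ms {
		if m.Kind == "faceid_unlock" && now.Sub(m.At) < 2*time.Minute {
			return true
		}
	}
	return false
}

// AllowLargeTransfer: require several unlocks spread out over the day,
// which a thief holding an already-unlocked phone is unlikely to produce.
func AllowLargeTransfer(ms []Measurement, now time.Time) bool {
	count := 0
	for _, m := range ms {
		if m.Kind == "faceid_unlock" && now.Sub(m.At) < 24*time.Hour {
			count++
		}
	}
	return count >= 3
}
```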
Typically, signed statements like ‘this is alice’ or ‘alice is allowed to do abc’ would be called authentication and authorization, with attestation reserved for proving that a system is trustworthy (‘alice has properties that pass my policy to deem trustworthy’), and most often brought up in the context of confidential computing. The framework, though, is general.
A basic attestation flow is: one or more trusted components take measurements of and make statements about a target system, which a relying party can then use to meet its security goals.
The component that does the measuring is the attester; the verifier appraises the attester’s measurements and produces the statements that the relying party consumes.
The verifier is sometimes a remote service (‘remote attestation’), so as to take advantage of compute and network resources unavailable to the TCB in which the attester resides. For example, the attester may reside in secure hardware. The verifier, on the other hand, would like to make policy assertions on the measurements received from the attester. Being remote, it can be secured independently of the target system.
                   +----------+
     measurements  | Verifier |  attestation result
          +------->|          |--------+
          |        +----------+        |
          |                            v
+----------+  measures   +--------+  +---------------+
| Attester |------------>| Target |  | Relying Party |
+----------+             +--------+  +---------------+
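A minimal sketch of this flow, assuming Ed25519 keys for the attester and the verifier and an allow-list policy; the function names, message formats, and policy are all illustrative:

```go
package rats

import (
	"crypto/ed25519"
	"errors"
)

// Evidence is what the attester emits: a measurement of the target plus a
// signature made with a key the verifier already trusts.
type Evidence struct {
	Measurement []byte // e.g. a hash of the target's firmware or code
	Sig         []byte
}

// Attest: the attester measures the target and signs the measurement.
func Attest(attesterKey ed25519.PrivateKey, measurement []byte) Evidence {
	return Evidence{Measurement: measurement, Sig: ed25519.Sign(attesterKey, measurement)}
}

// Verify: the verifier checks the evidence came from a trusted attester,
// appraises the measurement against its policy (here, an allow-list of
// known-good measurements), and signs an attestation result for relying
// parties.
func Verify(attesterPub ed25519.PublicKey, verifierKey ed25519.PrivateKey,
	ev Evidence, knownGood map[string]bool) (result, sig []byte, err error) {
	if !ed25519.Verify(attesterPub, ev.Measurement, ev.Sig) {
		return nil, nil, errors.New("evidence not signed by a trusted attester")
	}
	if !knownGood[string(ev.Measurement)] {
		return nil, nil, errors.New("measurement fails policy")
	}
	result = []byte("target passed policy") // the attestation result
	return result, ed25519.Sign(verifierKey, result), nil
}

// Accept: the relying party only needs to trust the verifier's key, not the
// attester's, which is what lets the verifier be secured independently.
func Accept(verifierPub ed25519.PublicKey, result, sig []byte) bool {
	return ed25519.Verify(verifierPub, result, sig)
}
```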
The canonical example is the certificate authority (CA) in public-key infrastructure. CAs issue certificates in hierarchical fashion that bind (*) a public key to a domain. The CA is attesting that FooBank holds the private key corresponding to the public key a browser relies on when establishing online banking sessions secured by TLS.
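What the browser-side check amounts to can be sketched with Go’s crypto/x509 package; the function below is a simplification, with the trust anchors and DER-encoded certificates supplied by the caller:

```go
package pki

import (
	"crypto/x509"
	"fmt"
)

// verifyChain checks that a leaf certificate chains to a trusted root and
// is valid for the given domain -- roughly the check a TLS client performs
// on the certificates a CA hierarchy has issued.
func verifyChain(leafDER []byte, intermediatesDER [][]byte, roots *x509.CertPool, domain string) error {
	leaf, err := x509.ParseCertificate(leafDER)
	if err != nil {
		return fmt.Errorf("parse leaf: %w", err)
	}
	inters := x509.NewCertPool()
	for _, der := range intermediatesDER {
		c, err := x509.ParseCertificate(der)
		if err != nil {
			return fmt.Errorf("parse intermediate: %w", err)
		}
		inters.AddCert(c)
	}
	_, err = leaf.Verify(x509.VerifyOptions{
		DNSName:       domain, // the binding being attested: key <-> domain
		Roots:         roots,  // trust anchors, e.g. the system root store
		Intermediates: inters,
	})
	return err
}
```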
Questioning the trustworthiness of a CA begets an attestation mechanism for the CA itself. This is the idea behind Certificate Transparency (CT), a public append-only audit log for certificates.
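Concretely, anyone can check that a certificate (a log entry) is included in the log by recomputing the Merkle tree head from an inclusion proof, along the lines of RFC 6962/9162. The sketch below assumes SHA-256 and omits verification of the log’s signature over the root:

```go
package ct

import (
	"bytes"
	"crypto/sha256"
)

// RFC 6962 hashing: a leaf is SHA-256(0x00 || entry), an interior node is
// SHA-256(0x01 || left || right).
func leafHash(entry []byte) []byte {
	h := sha256.Sum256(append([]byte{0x00}, entry...))
	return h[:]
}

func nodeHash(left, right []byte) []byte {
	data := append([]byte{0x01}, left...)
	data = append(data, right...)
	h := sha256.Sum256(data)
	return h[:]
}

// verifyInclusion recomputes the tree head for an entry at a given index
// from its audit path and compares it to the published root.
func verifyInclusion(entry []byte, index, treeSize uint64, proof [][]byte, root []byte) bool {
	if index >= treeSize {
		return false
	}
	fn, sn := index, treeSize-1
	r := leafHash(entry)
	for _, p := range proof {
		if sn == 0 {
			return false
		}
		if fn%2 == 1 || fn == sn {
			r = nodeHash(p, r)
			// Skip past the levels where this node was a right child.
			for fn%2 == 0 && fn != 0 {
				fn >>= 1
				sn >>= 1
			}
		} else {
			r = nodeHash(r, p)
		}
		fn >>= 1
		sn >>= 1
	}
	return sn == 0 && bytes.Equal(r, root)
}
```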
The certificate transparency approach described above also has a place in the Go Checksum Database, which applies a transparent log to the analogous problem of software dependency attestations.
The broader need for software dependency attestation is the agenda of the SLSA software supply chain security project. SLSA defines software attestations as:
‘A software attestation, not to be confused with a remote attestation in the trusted computing world, is an authenticated statement (metadata) about a software artifact or collection of software artifacts. Software attestations are a generalization of raw artifact/code signing…A single attestation can express an arbitrary amount of information, including things that are not possible with raw signing. For example, an attestation might state exactly how an artifact was produced, including the build command that was run and all of its dependencies.’
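The shape of such a statement, in the in-toto style that SLSA builds on, might look like the following. The envelope fields follow the in-toto Statement layout; the provenance fields here are a simplified stand-in, not the full SLSA schema:

```go
package attest

// Statement is a minimal sketch of an in-toto-style attestation statement.
type Statement struct {
	Type          string     `json:"_type"`         // e.g. "https://in-toto.io/Statement/v0.1"
	Subject       []Subject  `json:"subject"`       // the artifacts being attested to
	PredicateType string     `json:"predicateType"` // e.g. a SLSA provenance type URI
	Predicate     Provenance `json:"predicate"`
}

type Subject struct {
	Name   string            `json:"name"`
	Digest map[string]string `json:"digest"` // e.g. {"sha256": "..."}
}

// Provenance is an illustrative stand-in for a SLSA provenance predicate:
// how the artifact was produced, by what builder, from what materials.
type Provenance struct {
	Builder   string   `json:"builder"`
	BuildCmd  []string `json:"buildCmd"`
	Materials []string `json:"materials"` // dependencies that went into the build
}
```

In practice the statement is serialized, signed (for instance wrapped in a DSSE envelope), and distributed alongside the artifact it describes.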
For example, an OAuth authorization server issues and cryptographically signs bearer tokens used by a client. In addition to API security threats like bearer token theft, it would make sense to consider the security posture of the various third-party software artifacts constituting this OAuth workflow, in the client as well as the server. Are Snyk CVE reports enough? Or attestations of build artifacts, maybe with additional context? Or is current, live information warranted, via instrumentation?
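For orientation, the signed-token piece of that workflow can be reduced to a check like the one below. A real deployment would typically use asymmetric keys, a vetted JWT library, and claim validation, each of which is itself a dependency whose provenance could be attested:

```go
package oauth

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"errors"
	"strings"
)

// verifyHS256 checks the signature on a JWT-style bearer token signed with
// a shared HMAC-SHA256 key. It only shows the signing step the rest of the
// workflow (and its dependency tree) hangs off of.
func verifyHS256(token string, key []byte) error {
	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		return errors.New("malformed token")
	}
	signingInput := parts[0] + "." + parts[1] // header.payload
	sig, err := base64.RawURLEncoding.DecodeString(parts[2])
	if err != nil {
		return errors.New("bad signature encoding")
	}
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(signingInput))
	if !hmac.Equal(mac.Sum(nil), sig) {
		return errors.New("signature mismatch")
	}
	return nil
}
```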
App providers care that their apps have not been compromised, for confidentiality and integrity guarantees as well as end user privacy.
Mobile devices, and particularly mobile apps, are acknowledged public clients. The phone security models do have provisions for app isolation and for securing app-local data on disk, guarding against external threats, and the lower layers have a chain of trust via verified/secure boot. App compromise and tampering nonetheless remain an appealing and large attack surface, and can be a serious problem. It’s possible, for example, to trick app-specific cryptographic keys in the secure enclave into signing manipulated data, rendering useless, say, FaceID-authorized funds transfers in a banking app. (A similar threat vector exists for cryptographic signatures by browser clients, as discussed in this old thread on TLS token binding.) A local app attestation suffers from the problem that it’s asking a thief whether they are a thief: on a rooted device such a detection mechanism can always be patched over. Ideally such tampering would be detected by remote attestation.
Apple and Android both offer remote attestation APIs, iOS DeviceCheck and Android SafetyNet respectively. The remote server component may draw on data collected across the device and app ecosystem. Both are explicitly described as best-effort security against this landscape of threats, which makes it contentious whether the benefit outweighs the constraints they impose on end user freedom and privacy, by restricting client permissions and sending identifying data to a server.
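On the app backend, appraising such an attestation reduces to a policy check over the returned verdict. A rough sketch, assuming a SafetyNet-style JWS whose payload carries the verdict (field names follow that payload; signature and certificate-chain verification of the JWS is omitted here but essential in practice):

```go
package integrity

import (
	"encoding/base64"
	"encoding/json"
	"errors"
	"strings"
)

// verdict is the subset of a SafetyNet-style attestation payload used below.
type verdict struct {
	Nonce           string `json:"nonce"`
	APKPackageName  string `json:"apkPackageName"`
	CTSProfileMatch bool   `json:"ctsProfileMatch"`
	BasicIntegrity  bool   `json:"basicIntegrity"`
	TimestampMs     int64  `json:"timestampMs"`
}

// appraise decodes the JWS payload and applies the relying party's policy.
func appraise(jws, expectedNonce, expectedPackage string) error {
	parts := strings.Split(jws, ".")
	if len(parts) != 3 {
		return errors.New("malformed JWS")
	}
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		return err
	}
	var v verdict
	if err := json.Unmarshal(payload, &v); err != nil {
		return err
	}
	// Policy: the measurement must be bound to this request and this app,
	// and the device/app verdicts must pass.
	if v.Nonce != expectedNonce || v.APKPackageName != expectedPackage {
		return errors.New("verdict not bound to this request")
	}
	if !v.CTSProfileMatch || !v.BasicIntegrity {
		return errors.New("device or app failed integrity checks")
	}
	return nil
}
```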
Self-attestation is a sleight of hand, attestation that isn’t attestation: ‘I hereby certify that I am Emperor of China’. More favorably, it’s analogous to the spectrum of accreditation across the professions; say, running for President of the United States vs. being a practicing surgeon vs. being a TikTok influencer. Examples are self-signed application certificates and mechanisms that enable Trust-On-First-Use (TOFU), like the ~/.ssh/known_hosts file.
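A minimal sketch of TOFU pinning, roughly what known_hosts does for SSH host keys; the one-fingerprint-per-file storage here is purely illustrative:

```go
package tofu

import (
	"crypto/sha256"
	"encoding/hex"
	"errors"
	"os"
)

// checkTOFU records a peer's key fingerprint the first time it is seen and
// checks every later connection against that recording. The trust introduced
// on first use is exactly what is *not* attested: we take the peer's word.
func checkTOFU(path string, hostKey []byte) error {
	sum := sha256.Sum256(hostKey)
	fp := hex.EncodeToString(sum[:])

	pinned, err := os.ReadFile(path)
	if errors.Is(err, os.ErrNotExist) {
		// First use: trust and pin.
		return os.WriteFile(path, []byte(fp), 0o600)
	}
	if err != nil {
		return err
	}
	if string(pinned) != fp {
		return errors.New("host key changed since first use")
	}
	return nil
}
```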