Where do standards come from and how are they used for online services?

Products or services are designed to meet a need. In most cases, many different products or services are designed to meet the same need or purpose. How can a supplier or consumer determine whether a product or service meets the need? And how can they compare the quality of different but similar products or services?

In a word: standardization!

In a few more words: standards specify the requirements a product or service must meet in order to be considered as meeting the need.

Standards of practice and technical standards are typically developed by trade associations, often due to regulatory requirements or the need for many different products or services to work together or interoperate. Standards reflect the agreed-upon requirements.

For example, did you realize that the brake lights on all cars use the same standardized red color? There’s a standard for that: SAE J 578. It specifies the requirements for “red” based on visibility, driver reaction, potential confusion with other colors used in vehicle lamps, and other factors.

Products or services are assessed for conformity to the standard using conformity assessment criteria. These criteria are written so that an assessor can determine whether the product meets or does not meet the requirements described in the standard. Assessments are repeated from time to time to ensure the product or service continues to conform. You can imagine that the criteria and requirements may be very detailed and precise, or looser, depending on the intent of the standard’s creators.
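To make this concrete, here is a minimal sketch, in Python, of a standard expressed as a set of pass/fail criteria that an assessor could apply mechanically. The requirement names and threshold values are purely hypothetical, not drawn from any real standard:

```python
# Minimal sketch: a "standard" as a set of named pass/fail requirements.
# The requirements and thresholds below are hypothetical, not from any
# real standard.

from typing import Callable, Dict

Requirement = Callable[[dict], bool]

BRAKE_LIGHT_STANDARD: Dict[str, Requirement] = {
    # Each criterion is written so an assessor gets an unambiguous yes/no.
    "dominant_wavelength_nm": lambda p: 615 <= p["wavelength_nm"] <= 660,
    "min_luminous_intensity": lambda p: p["intensity_cd"] >= 80,
}

def assess_conformity(product: dict, standard: Dict[str, Requirement]) -> Dict[str, bool]:
    """Return a pass/fail result for every requirement in the standard."""
    return {name: check(product) for name, check in standard.items()}

result = assess_conformity({"wavelength_nm": 630, "intensity_cd": 95},
                           BRAKE_LIGHT_STANDARD)
print(result)  # {'dominant_wavelength_nm': True, 'min_luminous_intensity': True}
```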

Standards exist for a very broad range of products and services. In the simplest terms, standards are created whenever there is a need for a level of quality, consistency or safety. Standards can be developed by a single producer, an association, a country or amongst many countries. In fact, international standards underpin many aspects of international trade.

In the world of trusted use of personal data and identity, conformity to standards plays an important role. Standards exist for secure communication protocols, methods of encryption, the strength of authenticators and mechanisms for servers to identify each other, among many others. The infrastructure of authentication, authorization and identity technologies relies on standards for anything to work together.
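As a small illustration of how much of this is standards all the way down, here is a minimal Python sketch that opens a TLS connection using only the standard library. Every step works only because both ends conform to shared standards (the TLS protocol, X.509 certificates, the CA trust model); the host name is just an example:

```python
# Minimal sketch: a standards-based secure connection using Python's
# standard library. "example.com" is just an illustrative host.

import socket
import ssl

# Loads the trusted root CAs and enables certificate + hostname checks.
context = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        # The server proved its identity with a standards-conformant
        # certificate; the negotiated protocol version and cipher suite
        # are themselves standardized.
        print(tls_sock.version())  # e.g. 'TLSv1.3'
```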

So, if you’ve ever wondered how products and services from different producers can work together without major hassles, they have probably been assessed for conformity to standards. Think about how many apps, web sites, systems and organizations communicate on the internet – it’s all based on standards.


ID Information Originators and Aggregators – If an RP can sue, should they care about Certification?

Warning: this post is loaded with jargon! Read at your own risk!
In the Kantara Initiative Identity Assurance Working Group today, we were discussing the elements and patterns needed in the model for credential-identity separation.
We spent a lot of time discussing the idea of an “Information Originator” versus an “Identity Aggregator” (not in the sense of a data broker), a.k.a. an Identity Manager or Attribute Manager in the ‘decoupled binding’ model mentioned in a previous post.
The concept is simple: an Originator establishes the first recorded version of attribute information, while an Aggregator records an assertion of information received from other sources, such as Originators, the Person, or other Aggregators.
For example: a passport agency is an Originator for the passport number and the digital photograph associated with that number. A DMV is an Originator for the driver’s license number, photograph and physical description, but an Aggregator for the name and address that appear on that license, because it gets that information from elsewhere and may or may not be able to know whether it is a ‘true copy’ of the original.
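To make the distinction concrete, here is a minimal Python sketch of how an attribute record might carry its provenance. The class and field names are my illustrative assumptions, not from any published Kantara specification:

```python
# Minimal sketch of the Originator/Aggregator distinction. Class and field
# names are illustrative assumptions, not from any published specification.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AttributeRecord:
    name: str                            # e.g. "passport_number", "address"
    value: str
    originator: str                      # party that first recorded the value
    received_from: Optional[str] = None  # None when this party IS the Originator

# The passport agency originates the passport number:
passport_no = AttributeRecord("passport_number", "X1234567",
                              originator="PassportAgency")

# As recorded by the DMV: it originates the license number...
license_no = AttributeRecord("license_number", "D9876543", originator="DMV")

# ...but only aggregates the address, received from the applicant; it may
# not even know who the true Originator is.
address = AttributeRecord("address", "12 Main St",
                          originator="unknown", received_from="applicant")
```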
The sticky point for a Trust Framework Provider is: to support a high assurance level for information assertions, does the TFP require that all attribute information asserted by an IDP or Attribute Provider originate from a TFP-certified Originator, and that every party between that Originator and the final IDP also be certified?
This is another way of expressing the idea that, for high LOA, every provider in the chain – from origination of the information to its assertion to an RP – must be certified to that trust framework.
This does not appear to be very Internet-y (large scale, agile, rapid evolution).
This also does not match the current practice of accepting government photo ID to establish LOA3 identity information at an IDP – since the ID card is taken at face value and originates from an uncertified entity (e.g. the DMV).
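Here is a minimal Python sketch of the “everyone in the chain must be certified” rule under discussion; the provider names and the certified set are hypothetical:

```python
# Minimal sketch of the "everyone in the chain must be certified" rule.
# CERTIFIED_PROVIDERS and the example chain are hypothetical.

CERTIFIED_PROVIDERS = {"PassportAgency", "AcmeIDP"}

def chain_supports_high_loa(provenance_chain: list[str]) -> bool:
    """True only if every party, from the Originator through to the
    asserting IDP, is certified under the trust framework."""
    return all(p in CERTIFIED_PROVIDERS for p in provenance_chain)

# The current LOA3 practice fails this test: the DMV is uncertified,
# yet its photo ID is accepted at face value.
print(chain_supports_high_loa(["DMV", "AcmeIDP"]))  # False
```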
So, what is the answer? Does an RP need to consume information about the processes used to collect and aggregate identity attributes? Do they need this from all providers in the chain? Or just from the last provider (who must be certified)?
Does the certification process force the last provider to take on sufficient accountability for liability for the trust model to work?
For example: if an IDP chooses to obtain identity proofing from a partner that is not certified, but the IDP is willing to take on sufficient accountability to compensate for this process – should the Trust Framework Provider permit the IDP to be certified?

Or does this all just mean that it’s up to that poor Relying Party to seek out enough attribute sources to establish identity information sufficient to make an access control decision?

Evolution of a Trusted Identity Model

In the Fall of 2012, I led the development and publication of a discussion paper for the Kantara Initiative Identity Assurance Working Group. The paper explored the concept of a general model for the Credential Provider – Identity Provider – Online Service Provider architecture.

I identified several abstract Roles, Actors, Functions and Relationships that appeared in many of the identity frameworks and architectures, then proceeded to create a general model showing how everything was related.

[Figure: General Model of Credential-Identity Manager Role Separation]

The paper received some interest when presented to a Kantara Initiative workshop in late 2012. It has also sparked renewed interest and expansion from Anil John, as he explores the concept and implications of Credential Manager and Identity Manager separation, in particular as it relates to US FICAM Trust Frameworks and NIST 800-63-1/2.

In several posts, Anil has refined the original general model in interesting ways, and now uses it to test ideas on anonymity, privacy, separation of roles and other topics.

The refined model looks like:

[Figure: eAuth_Model_General – the refined general model]

The names of the “Manager” roles have been changed to align with NIST 800-63 language (I was using terms typically used in Canada for these roles), and an explicit manager for the Token-Identity Link Record has been created (this becomes the ‘Credential Manager’ in future revisions).
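As a rough illustration, here is a minimal Python sketch of what a Token-Identity Link Record and its manager might look like; the names and fields are my illustrative assumptions, not taken from NIST 800-63:

```python
# Minimal sketch of a Token-Identity Link Record and the role that manages
# it. Names and fields are illustrative assumptions, not from NIST 800-63.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class TokenIdentityLink:
    token_id: str     # identifies the authenticator (the "token")
    identity_id: str  # identifies the registered identity record

class LinkRecordManager:
    """The explicit manager of the link record (the role that becomes the
    'Credential Manager' in later revisions of the model)."""

    def __init__(self) -> None:
        self._links: dict[str, str] = {}

    def bind(self, link: TokenIdentityLink) -> None:
        self._links[link.token_id] = link.identity_id

    def resolve(self, token_id: str) -> Optional[str]:
        return self._links.get(token_id)
```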

These models have proven very effective for communicating with our peers about this topic. But there is unfinished business.

One of the original goals of the Kantara paper was to map out the Functions that could be assigned to Roles, and to dig deeper into the relationships between the Roles. The time has come to start this work.

It seems that several big topics are being debated in the Trusted Identity field these days:

  • Attribute Providers are the real ‘information emitters’ – there’s no such thing as an ‘Identity Provider’ – so how should Attribute Providers be evaluated against Level of Assurance schemes?
  • How can a Trust Framework Provider describe assessment criteria, then have an assessment performed reliably, when Attribute Providers and “Identity Oracles” proliferate?
  • What are the implications of aggregating attributes from several providers for the assurance and confidence in the accuracy and validity of the information? (See the sketch after this list.)
  • How could subsets of the Roles and Functions in the model be aggregated dynamically (or even statically) to satisfy the identity assurance risk requirements of an RP?
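On the aggregation question, one candidate rule is “weakest link wins”: the effective assurance of a composite assertion is capped by the lowest-LOA contributor. The Python sketch below assumes that rule purely for illustration; it is not settled policy, and the provider names are hypothetical:

```python
# Minimal sketch of one possible aggregation rule: the effective assurance
# of a composite assertion is capped by its weakest contributing provider.
# The weakest-link rule is an assumption for illustration only.

def effective_loa(attribute_loas: dict[str, int]) -> int:
    """attribute_loas maps each contributing provider to its assessed LOA (1-4)."""
    return min(attribute_loas.values())

# An RP requiring LOA 3 would reject this aggregate, because one
# attribute source was only assessed at LOA 2:
print(effective_loa({"PassportAgency": 4, "DMV": 2, "AcmeIDP": 3}))  # 2
```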

A small team has been assembled under the auspices of the Kantara IAWG, and I hope to lead us through the conversation and debate as we extend the general model and explore it in light of temporal events, LOA mappings, Trust Framework comparability mappings, and core Trusted Identity Functions. Once we have figured out a richer general model, the meatier discussion can begin: which elements, agglomerations, and segments of the model can be assessed and issued a Trusted Identity Trustmark in a meaningful way – one that is valued by service providers and consumers.

The Trustmark question cannot be addressed until the general model is elaborated – and the outcome will be a greater community understanding of the nature of attribute managers, LOA and how attributes could be trusted in the to-be-invented identity attributes universe.

More to come…