Portable Identity Information and Interoperable Credentials: How will we shift the burden of complexity away from your mom’s keyboard?

Often-cited target states for federated identity and credential solutions include statements like: “Credentials must be interoperable”; “Identity Information must be portable”; “Users must have choice in number, type and source of credential”; “User must have control over disclosure and use of identifying information”; “Usage of credential must not be traceable back to the user, if the user requires it”.

It occurs to me (and I’m certainly not the first person to realize this) that there is a heavy burden of complexity and risk inherent in the solution space for these kinds of requirements.

Let me explain:

Today’s nasty conglomeration of multiple username/password silos, 2-step authentication systems, 2-factor authentication systems, attribute verifiers (a.k.a. data brokers) and nascent federated credential solutions actually satisfies many of the requirement statements above.

We are witnessing the rise of the “mega-ID Provider”: Google, Amazon Web Services, PayPal, Salesforce, Facebook, Twitter and other massive companies are turning up authentication interfaces for consumption by other eService Providers and Relying Parties. They are not particularly interoperable – the NASCAR user interface used to pick your Authentication Provider is proof of this. (Sidebar: I was just informed that the NASCAR UI is called that because of the long line of logos streaming down the UX – I found this tragic and funny at the same time.)

What solutions are being promoted to shift the burden of complexity and insecure credentials away from your mom? (This list is not pure – I’ve shifted some definitions to suit my purposes.)

Hubs: an interconnection point that does protocol and information-format conversions between many Relying Parties and many ID Providers. This might be delivered as IDaaS.

Brokers: a Hub that also offers anonymizing services – directed identifiers are provided to the RP and IDP in a way that makes it very difficult to assemble a comprehensive picture of where a user credential has been used, even with some collusion (see the sketch after this list).

Federated Credentials: IDP and RP using a commonly agreed set of protocols, policies and trust rituals. Very Enterprise-y: a user is bound to an IDP but in return is able to authenticate anywhere in the Federation.

Active User Agents: User-Centric solutions that keep the keys, authorization policies and other complex stuff close to the user. A User Agent could collect a bunch of different ‘identities’ and credentials for use in whatever pattern the user desires.

Personal Clouds: a Personal Cloud could take on the Active User Agent role, with the same functionality hosted in the cloud rather than on the user’s device.
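
To make the Broker idea a bit more concrete, here is a minimal Python sketch of one way ‘directed identifiers’ could be derived. The function name, the broker secret and the HMAC construction are my own illustration, not a description of any particular product: the Broker gives each Relying Party a different, stable identifier for the same user, so two RPs comparing notes see unrelated values and only the Broker can recompute the link.

```python
import hashlib
import hmac

def derive_directed_id(broker_secret: bytes, user_id: str, rp_id: str) -> str:
    """Derive a per-RP 'directed' identifier for a user.

    The same user shows up at each Relying Party under a different,
    stable identifier, so RPs cannot correlate the user by comparing IDs.
    Only the Broker, which holds broker_secret, can recompute the link.
    """
    message = f"{user_id}|{rp_id}".encode("utf-8")
    return hmac.new(broker_secret, message, hashlib.sha256).hexdigest()

# One user, two RPs -> two unlinkable identifiers.
secret = b"broker-master-secret"  # illustrative only; a real Broker would protect this key
print(derive_directed_id(secret, "alice@idp.example", "rp-bank.example"))
print(derive_directed_id(secret, "alice@idp.example", "rp-store.example"))
```

The specific algorithm is not the point – the point is that the linkage secret lives with the Broker, which is exactly the ‘promise to do no harm’ trade-off discussed below.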

So what’s it going to be?

Is the price of convenience and security for you as an Online Consumer-Citizen going to be a transfer of the ‘hard parts’ and complexity over to big Broker/Hubs that promise to do no harm? This might address the harder problems of discovery and provisioning – centralized integration points are easier to deploy.

Or, will the complexity simply be shifted just a little bit further away from your chair, into a User Agent that is under your direction? This gives you more (apparent) control, but makes it harder to get seamless, simple services connected when and where you want them – and decentralized integration will be prone to the same problems we have today with provisioning, deprovisioning and broken linkages.


ID Information Originators and Aggregators – If an RP can sue, should they care about Certification?

Warning: this post is loaded with jargon! Read at your own risk!
In the Kantara Initiative Identity Assurance Working Group today, we were discussing elements and patterns needed in the model for the credential-identity separation.
We spent a lot of time discussing the idea of an “Information Originator” versus an “Identity Aggregator” (not in the sense of a data broker), a.k.a. an Identity Manager or Attribute Manager in the ‘decoupled binding’ model mentioned in a previous post.
The concept is simple: an Originator establishes the first recorded version of attribute information, while an Aggregator records an assertion of information from other sources such as Originators, the Person or other Aggregators.
For example: a Passport agency is the Originator for the passport number and the digital photograph associated with that number. A DMV is the Originator for the driver’s license number, photograph and physical description, but is an Aggregator for the name and address that appear on that license, because it gets that information from elsewhere and may or may not be able to know whether it is a ‘true copy’ of the original.
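
A tiny data-structure sketch may help here (the field names and the “Birth Registry” example are mine, not from the IAWG discussion): each attribute assertion records whether the asserting party originated the value or aggregated it from somewhere else, so the provenance stays visible.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AttributeAssertion:
    """One party's record of an identity attribute and where it came from."""
    name: str                    # e.g. "passport_number", "address"
    value: str
    asserted_by: str             # the party making this assertion
    role: str                    # "originator" or "aggregator"
    source: Optional["AttributeAssertion"] = None  # upstream assertion, if aggregated

# The Passport agency originates the passport number...
passport_no = AttributeAssertion("passport_number", "GA123456",
                                 asserted_by="Passport Agency", role="originator")

# ...while the DMV merely aggregates the name, which it copied from a
# document it did not create.
name_at_dmv = AttributeAssertion(
    "name", "Pat Example", asserted_by="DMV", role="aggregator",
    source=AttributeAssertion("name", "Pat Example",
                              asserted_by="Birth Registry", role="originator"))
```
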
The sticky point for a Trust Framework Provider is this: to support a high assurance level for information assertions, does the TFP require that all attribute information asserted by an IDP or Attribute Provider originate from a TFP-certified Originator, and that every party between that Originator and the last IDP also be TFP-certified?
This is another way of saying that, to achieve a high LOA, every provider in the chain – from the origination of the information to the assertion made to an RP – must be certified to that trust framework.
This does not appear to be very Internet-y (large scale, agile, rapid evolution).
This also does not match the current practice of accepting government photo ID to establish LOA3 identity information at an IDP – the ID card is taken at face value and originates from an uncertified entity (e.g. the DMV).
So, what is the answer? Does an RP need to consume information about the processes used to collect and aggregate identity attributes? Does it need this from every provider in the chain, or just from the last provider (who must be certified)?
Does the certification process force the last provider to take on enough accountability and liability for the trust model to work?
For example: if an IDP chooses to obtain identity proofing from a partner that is not certified, but is willing to take on enough accountability to compensate for that choice, should the Trust Framework Provider permit the IDP to be certified?
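
To make the two policy options concrete, here is a hedged sketch (my framing, not IAWG text) of how a TFP or an RP might evaluate an assertion’s provenance chain under the “everyone in the chain must be certified” rule versus the “only the last provider must be certified” rule.

```python
def meets_policy(provider_chain, certified_parties, require_full_chain=True):
    """Check an assertion's provenance chain against a certification policy.

    provider_chain lists providers from the asserting IDP back to the
    Originator, e.g. ["Acme IDP", "DMV"].

    require_full_chain=True  -> every provider in the chain must be
                                TFP-certified (the 'not very Internet-y' rule).
    require_full_chain=False -> only the last provider, the one making the
                                assertion to the RP, must be certified.
    """
    if require_full_chain:
        return all(p in certified_parties for p in provider_chain)
    return provider_chain[0] in certified_parties

certified = {"Acme IDP"}            # the DMV is not certified
chain = ["Acme IDP", "DMV"]         # the IDP proofed identity from a DMV card
print(meets_policy(chain, certified, require_full_chain=False))  # True
print(meets_policy(chain, certified, require_full_chain=True))   # False
```
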

Or does this all just mean that it’s up to that poor Relying Party to seek out enough attribute sources to establish identity information sufficient to make an access control decision?

Evolution of a Trusted Identity Model

In the Fall of 2012, I led the development and publication of a discussion paper for the Kantara Initiative Identity Assurance Working Group. The paper explored the concept of a general model for the Credential Provider – Identity Provider – Online Service Provider architecture.

I identified several abstract Roles, Actors, Functions and Relationships that appeared in many of the identity frameworks and architectures, then proceeded to create a general model showing how everything was related.

General Model of Credential-Identity Manager Role Separation

The paper received some interest when presented to a Kantara Initiative workshop in late 2012. It has also sparked renewed interest and expansion from Anil John, as he explores the concept and implications of Credential Manager and Identity Manager separation, in particular as it relates to US FICAM Trust Frameworks and NIST 800-63-1/2.

In several posts, Anil has refined the original general model in interesting ways, and now uses it to test ideas on anonymity, privacy, separation of roles and other topics.

The refined model looks like:

[Figure: eAuth_Model_General – the refined general model]

The names of the “Manager” roles have been changed to align with NIST 800-63 language (I was using terms typically used in Canada for these roles), and an explicit manager for the Token-Identity Link Record has been added (this becomes the ‘Credential Manager’ in future revisions).
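
As a reading aid – my own shorthand, not the paper’s normative text – the separation can be sketched as three records held by potentially different parties: the token, the identity attributes, and the link record that binds the two.

```python
from dataclasses import dataclass

@dataclass
class TokenRecord:              # held by the Token Manager
    token_id: str               # e.g. a key handle or password-hash reference

@dataclass
class IdentityRecord:           # held by the Identity Manager
    identity_id: str
    attributes: dict            # e.g. {"name": "...", "address": "..."}

@dataclass
class TokenIdentityLink:        # held by the Token-Identity Link Record manager
    token_id: str
    identity_id: str
    loa: int                    # assurance level at which the binding was made

# The point of the separation: the party that knows *who* you are
# (IdentityRecord) need not be the party that knows *how* you authenticate
# (TokenRecord); only the link record joins the two.
link = TokenIdentityLink(token_id="tok-123", identity_id="id-456", loa=3)
```
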

These models have proven to be very effective in communicating to our peers about this topic. But there is unfinished business.

One of the original goals of the Kantara paper was to map out the Functions that could be assigned to Roles, and to dig deeper into the relationships between the Roles. The time has come to start this work.

It seems that several big topics are being debated in the Trusted Identity field these days:

  • Attribute Providers are the real ‘information emitters’ – there is no such thing as an ‘Identity Provider’ – so how should Attribute Providers be evaluated against Level of Assurance schemes?
  • How can a Trust Framework Provider describe assessment criteria, then have an assessment performed reliably, when Attribute Providers and “Identity Oracles” proliferate?
  • What are the implications of aggregating attributes from several providers as it relates to assurance and confidence in accuracy and validity of the information?
  • How could subsets of the Roles and Functions in the model be aggregated dynamically (or even statically) to satisfy the identity assurance risk requirements of an RP? (A minimal sketch follows this list.)
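
On that last question, here is a minimal sketch of dynamically assembling attributes from several providers against an RP’s per-attribute assurance requirements. The provider names and the “pick any provider that meets the minimum LOA” rule are simplifying assumptions of mine, not a Kantara position.

```python
def assemble_for_rp(required, offers):
    """Pick, per attribute, a provider whose asserted LOA meets the RP's need.

    required: {"name": 3, "address": 2}        # attribute -> minimum LOA
    offers:   {"Provider": {"name": 3, ...}}   # provider -> attribute -> asserted LOA
    Returns attribute -> chosen provider, or None if any requirement
    cannot be satisfied.
    """
    plan = {}
    for attr, min_loa in required.items():
        candidates = [(p, attrs.get(attr, 0)) for p, attrs in offers.items()
                      if attrs.get(attr, 0) >= min_loa]
        if not candidates:
            return None                        # the RP's risk requirement cannot be met
        plan[attr] = max(candidates, key=lambda c: c[1])[0]
    return plan

offers = {"DMV-backed IDP": {"name": 3, "address": 2},
          "Telco AP":       {"address": 1, "phone": 2}}
print(assemble_for_rp({"name": 3, "address": 2}, offers))
# {'name': 'DMV-backed IDP', 'address': 'DMV-backed IDP'}
```
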

A small team has been assembled under the auspices of the Kantara IAWG, and I hope to lead us through the conversation and debate as we extend the general model and explore it in light of temporal events, LOA mappings, Trust Framework comparability mappings, and core Trusted Identity Functions. Once we have figured out a richer general model, the meatier discussion can begin: which elements, agglomerations, and segments of the model can be assessed and issued a Trusted Identity Trustmark in a meaningful way – one that is valued by service providers and consumers.

The Trustmark question cannot be addressed until the general model is elaborated – and the outcome will be a greater community understanding of the nature of attribute managers, LOA and how attributes could be trusted in the to-be-invented identity attributes universe.

More to come…