Trust Framework as Information Sharing – A thought experiment

Over the last year, I’ve been thinking about the nature, structure and governance models of Trust Frameworks.

The work I do with IDESG focuses on establishing an ‘Identity Ecosystem’, which, in effect, means finding ways for existing and new Identity Federations, Trust Frameworks and standalone Identity Solutions (the Ecosystem Participants) to exchange information (assertions) with their partners. Ecosystem Participants need to evaluate the risks of accepting that information for use in their decision-making processes.

I have closely examined the FICAM Trust Framework Solutions Trust Criteria, NIST standards, the Trust Framework Provider Acceptance Program and the frameworks of the Approved Trust Framework Providers, seeking to understand different approaches to evaluating transaction partners who might become Identity Federation partners. At root, these approaches define requirements that must be met, criteria for conformity evaluation, risk evaluation methods and assessment rules that must be considered when conducting Identity-related online transactions.

A couple of years ago, I decided to examine the relationships between components of online Identity Solutions using a very particular lens: the Information Sharing lens. That analysis helped shape conversations with FICAM and the Government of Canada about reference architectures and mechanisms to assign roles and responsibilities for identity-related transactions.

I have recently started to immerse myself in the InterPares Trust project:
“InterPARES Trust (ITrust 2013-2018) is a multi-national, interdisciplinary research project exploring issues concerning digital records and data entrusted to the Internet. Its goal is to generate theoretical and methodological frameworks to develop local, national and international policies, procedures, regulations, standards and legislation, in order to ensure public trust grounded on evidence of good governance, a strong digital economy, and a persistent digital memory.” The project is researching ways to determine digital record authenticity, among other related information management subjects.

What if we look at Trust Frameworks through the information lens?

For this thought experiment, treat everything as an information transmission, processing or storage event. For example, if a user authenticates their credential/token with a verifier, information from the credential/token could be processed, an assertion of ‘logged in’ could be transmitted, and logs stored about the events.
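To make the lens concrete, here is a minimal sketch (in Python, with entirely illustrative names and event types not drawn from any standard) of a credential verification modeled purely as transmission, processing and storage events:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

# Illustrative event model only -- not from any trust framework specification.
@dataclass
class InfoEvent:
    kind: str          # "transmit", "process", or "store"
    payload: str       # description of the information involved
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def authenticate(token: str) -> List[InfoEvent]:
    """Model a credential/token check purely as information events."""
    events = [InfoEvent("process", f"verify token {token!r}")]
    verified = token == "valid-token"   # stand-in for real verification logic
    if verified:
        events.append(InfoEvent("transmit", "assertion: logged in"))
    events.append(InfoEvent("store", f"log: verification result={verified}"))
    return events

events = authenticate("valid-token")
print([e.kind for e in events])   # ['process', 'transmit', 'store']
```

Every step the verifier takes shows up in the event list, which is the point of the exercise: once everything is an information event, it becomes something an agreement can govern.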

When attempting to transact, subscribers to a Trust Framework seek to:

  • Understand what information is needed of them in order to perform the transaction
  • Perform the functions needed to prepare that information and transmit it as needed
  • Specify what information they need, in a way that includes metadata about quality, source, encoding, etc.
  • Acquire the information they need to make transaction or risk decisions
  • Determine the authenticity and sufficiency of the information received, to the degree needed
  • Complete the transaction based on decisions made about the information processed
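The middle steps above (specify what is needed, including metadata about quality and source, then judge whether what arrived is sufficient) can be sketched as a pairwise exchange. All names here are hypothetical, chosen only to illustrate the pattern, not taken from any real protocol:

```python
from dataclasses import dataclass

# Hypothetical request/response shapes for an information sharing exchange.
@dataclass
class InfoRequest:
    attribute: str
    min_quality: str   # e.g. "verified"
    source: str        # acceptable source, e.g. "government-registry"

@dataclass
class InfoResponse:
    attribute: str
    value: str
    quality: str
    source: str

def sufficient(req: InfoRequest, resp: InfoResponse) -> bool:
    """Determine sufficiency of the information received, per the request's metadata."""
    return (resp.attribute == req.attribute
            and resp.quality == req.min_quality
            and resp.source == req.source)

req = InfoRequest("date_of_birth", "verified", "government-registry")
resp = InfoResponse("date_of_birth", "1990-01-01", "verified", "government-registry")
print(sufficient(req, resp))   # True
```

An Information Sharing Agreement, in this framing, is largely a shared definition of what goes in `InfoRequest` and how `sufficient` is allowed to be computed.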

What if we use the paradigm of Information Sharing Agreements to codify the determinations and statements of ‘need’ in the bullets above?

In my next posts, I will look at the sequences of the overall transaction as it relates to information sharing, with the information sharing occurring pairwise, under known terms and conditions. In this way, I hope to learn new things about the nature and structure of information sharing agreements covering these transactions.

In this way, I will try to lead the thought experiment through to what Federation Agreements do for participants today, and what model agreements would be of use in an ‘Ecosystem’ trust arrangement.

New cloudy horizons

Getting going with a new adventure – Cloud Computing interoperability and federation standards. Goes with the theme of learning new things and pushing the brain-case into a new configuration.

Two full days of meetings coming up, setting up a plan, schedule and scope for building a cloud computing testbed to explore new stuff. I’m the project manager for an IEEE initiative.

Having been to a few cloud conferences so far this year, I’m learning that the view that the “cloud” is all about moving enterprise data centres into virtual locations is naive. Yes, virtualization is front and centre. But the hardware, how things are physically wired up, the virtualization software configuration, the data centre location on the planet, and other factors directly impact what the paying customer a) is able to ask for and b) receives. Compute and Storage resources are tailored for specific requirements. ‘Big Data’ analytics require certain configurations, Software as a Service needs others.

The commonality appears to be massively distributed resources. Keeping a very large application set in sync is massively difficult. Cloud providers do not give customers instrumentation or visibility into the virtual-physical interface. Rolling out uniform code to non-uniform hardware must be a nightmare to troubleshoot. But it is necessary.

In any case, I’m working with a team that is looking at how cloud service providers could federate and interoperate in the background – sharing resources to meet demand without customers having to handle the complexity of multi-provider resources.

Every day is a new adventure. It’s the best thing about diving into new waters. Stay tuned.

Running for IDESG Plenary Vice-Chair

Next step on the journey into the world of Online Identity – I’m running for Vice-Chair of the IDESG Plenary. For those of you who don’t know, IDESG is a newly-formed non-profit whose mission is to achieve part of the US National Strategy for Trusted Identity in Cyberspace (NSTIC). The working groups of the Plenary are hard at work characterizing the future online identity world and collecting bits and pieces that will be necessary to achieve the vision.

I’ve been very active in a couple of the core committees, committed to helping the organization make progress.

It’s a very complex problem space and organization with an incredible range of priorities, agendas, opinions and viewpoints. In other words, exactly the kind of challenge I like to take on.

Fingers crossed that folks voting agree.

Portable Identity Information and Interoperable Credentials: How will we shift the burden of complexity away from your mom’s keyboard?

Often-cited target states for federated identity and credential solutions include statements like: “Credentials must be interoperable”; “Identity Information must be portable”; “Users must have choice in number, type and source of credential”; “User must have control over disclosure and use of identifying information”; “Usage of credential must not be traceable back to the user, if the user requires it”.

It occurs to me (and I’m certainly not the first person to realize this) that there is a heavy burden of complexity and risk inherent in solution spaces for those kinds of requirements.

Let me explain:

Today’s nasty conglomeration of multiple username/password silos, 2-step authentication systems, 2-factor authentication systems, attribute verifiers (a.k.a. data brokers) and nascent federated credential solutions actually satisfies many of the requirements statements above.

We are witnessing the rise of the “mega-ID Provider”:  Google, Amazon Web Services, PayPal, Salesforce, Facebook, Twitter and other massive companies are turning up authentication interfaces for consumption by other eService Providers and Relying Parties. They are not particularly interoperable – the NASCAR user interface used to pick your Authentication Provider is proof of this. (Sidebar: I was just informed that the NASCAR is called the NASCAR because of the long line of logos streaming down the UX – I found this tragic and funny at the same time)

What solutions are being promoted to shift the burden of complexity and non-secure credentials away from your mom? (this list is not pure – I’ve shifted some definitions to suit my purposes)

Hubs: an interconnection point that does protocol and information format conversions between many Relying Parties and many ID Providers. This might possibly be IDaaS.

Brokers: a Hub that also offers anonymizing services – directed identifiers provided to RP and IDP in a way that makes it very difficult to capture a comprehensive picture of where a user credential has been used, even with some collusion.

Federated Credentials: IDP and RP using a commonly-agreed set of protocols, policies and trust rituals. Very Enterprise-y: a user is bound to an IDP but in return is able to authenticate anywhere in the Federation.

Active User Agents: User Centric solutions that keep the keys, authorization policies and other complex stuff close to the user. User Agents could collect up a bunch of different ‘identities’ and credentials for use in whatever pattern the user desires.

Personal Clouds: Bits of Personal Cloud functionality could fill the Active User Agent role, but cloud-based.

So what’s it going to be?

Is the price of convenience and security for you as an Online Consumer-Citizen going to be a transfer of the ‘hard parts’ and complexity over to big Broker/Hubs that promise to do no harm? This might address the harder problems of discovery and provisioning – centralized integration points are easier to deploy.

Or, will the complexity simply be shifted just a little bit further away from your chair into a User Agent that is under your direction? This gives you more (apparent) control, but makes it harder to get seamless, simple services connected when and where you want them – and decentralized integration will be prone to the problems of today with provisioning, deprovisioning and broken linkages.