Where do standards come from and how are they used for online services?

Products or services are designed to meet a need. In most cases, many different products or services are designed to meet the same need or purpose. How can a supplier or consumer determine whether a product or service actually meets the need? And how can they compare the quality of different but similar products or services?

In a word: standardization!

In a few more words: standards specify the requirements that must be met by a product or service for it to be considered to meet the need.

Standards of practice and technical standards are typically developed by trade associations, often due to regulatory requirements or the need for many different products or services to work together or inter-operate. Standards reflect the agreed-upon requirements.

For example, did you realize that brake lights on cars all use exactly the same red color? There’s a standard for that: SAE J578. It specifies the requirements for “red” based on visibility, driver reaction, potential confusion with other colors used in vehicle lighting, and other factors.

Products or services are assessed for conformity to a standard using conformity assessment criteria. These criteria are written in a way that allows an assessor to determine whether the product meets or does not meet the requirements described in the standard. Assessments are repeated from time to time to ensure the product or service continues to conform. You can imagine that the criteria and requirements may be very detailed and precise, or looser, depending on the intent of the standard’s creators.
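
As a toy illustration of how conformity criteria can be written as testable checks, here is a Python sketch loosely inspired by NIST SP 800-63B’s requirements for memorized secrets (minimum length, rejection of known-compromised values). The blocklist and function name are illustrative assumptions, not the standard’s actual test procedure:

```python
# Sketch: a conformity check inspired by NIST SP 800-63B authenticator
# requirements (memorized secrets must be at least 8 characters, and
# verifiers should reject known-compromised passwords). The blocklist
# here is a tiny illustrative stand-in for a real breached-password list.
COMMON_PASSWORDS = {"password", "12345678", "qwertyui"}

def meets_baseline(secret: str) -> bool:
    if len(secret) < 8:                      # minimum-length requirement
        return False
    if secret.lower() in COMMON_PASSWORDS:   # reject known-compromised values
        return False
    return True

print(meets_baseline("correct horse battery"))  # True
print(meets_baseline("password"))               # False
```

An assessor applying the criteria repeatedly, as the text describes, amounts to re-running checks like these against the current product.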

Standards exist for a very broad range of products and services. In the simplest terms, standards are created whenever there is a need for a level of quality, consistency or safety. Standards can be developed by a single producer, an association, a country or amongst many countries. In fact, international standards underpin many aspects of international trade.

In the world of trusted use of personal data and identity, conformity to standards plays an important role. Standards exist for the secure communications protocols, methods for use of encryption, strength of authenticators and mechanisms for servers to identify each other, among many others. The infrastructure of authentication, authorization and identity technologies relies on standards for anything to work together.
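
To make the role of shared standards concrete, here is a hedged Python sketch of how two servers that agree on a common signing scheme can trust each other’s assertions. It mimics the idea behind JSON Web Signature (RFC 7515) but is a deliberate simplification, not the real wire format; the key and claim names are illustrative:

```python
# Sketch of how a shared signing standard lets two servers trust each
# other's assertions. This mimics the idea behind JWS (RFC 7515) but is
# a simplified illustration, not the real wire format.
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # illustrative; real deployments manage keys carefully

def sign_assertion(claims: dict):
    """Serialize the claims deterministically and compute an HMAC tag."""
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify_assertion(payload: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

payload, tag = sign_assertion({"sub": "alice", "authenticated": True})
print(verify_assertion(payload, tag))  # True
```

Because both sides implement the same agreed scheme, either can verify what the other produced without any bespoke integration, which is the whole point of the standards layer described above.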

So, if you’ve ever wondered how products and services from different producers can work together without major hassles, they have probably been assessed for conformity to standards. Think about how many apps, web sites, systems and organizations communicate on the internet – it’s all based on standards.

Why is online identity such a hot subject? Thoughts on current trends.

Interesting trends are emerging in the Online Identity circles I travel in. One trend is the shift away from formalized, enterprise, centrally-regulated Federations towards ad-hoc, consumption-driven, transaction-oriented architectures and policies.

On one hand, I see advancement in ‘formal Federation’, which I will loosely define as a pre-evaluated structuring of centrally defined requirements and criteria, realized as a collection of entities who share common policy, technology and security domain boundaries. It’s the ‘classic’ use case, where one enterprise accepts a business partner enterprise’s user authentication.

On the other hand, there is a large thought-community emerging whose central concern is the ‘transaction’: the interaction between online service provider and online service recipient. Transaction-oriented requirements dictate that risk, qualifications and identification be evaluated at ‘run time’ rather than at ‘design time’. Themes include Attribute-Based Access Control, User-Managed Access, electronic consent notices, step-up authentication and other ‘assertion’ evaluation schemes.
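
As a rough sketch of what ‘run time’ evaluation might look like, the following Python fragment makes an Attribute-Based Access Control decision per transaction from whatever attributes are presented, rather than consulting a membership list fixed at design time. The policy shape and attribute names are my own illustrative assumptions:

```python
# Sketch of run-time, attribute-based access control (ABAC): the decision
# is made per transaction from attributes presented at request time.
# Policy structure and attribute names are illustrative assumptions.
def evaluate(policy: list, attributes: dict) -> bool:
    """Grant access if any rule's required attributes are all satisfied."""
    return any(
        all(attributes.get(k) == v for k, v in rule.items())
        for rule in policy
    )

policy = [
    {"role": "clinician", "assurance_level": 2},  # strong login required
    {"role": "auditor"},                          # any assurance level
]

print(evaluate(policy, {"role": "clinician", "assurance_level": 2}))  # True
print(evaluate(policy, {"role": "clinician", "assurance_level": 1}))  # False
```

A step-up authentication scheme would extend this: instead of a bare denial when `assurance_level` is too low, the service would challenge the user to authenticate more strongly and then re-evaluate.
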
This is a ‘frothy’ situation – there are no pervasive solutions, and everyone is attempting to locate the critical problem space that needs fixing. Arguments and philosophical approaches erupt frequently – which stimulates more energy and effort. A good thing.

One challenge that ‘pure’ Federation cannot address is the ad-hoc nature of online interactions. Predicting in advance which identification authorities and policy enforcers will be needed is a wicked problem, and likely intractable. However, establishing common protocols and information-sharing processes could allow transaction participants to discover enough about their opposite number to do the risk/cost/benefit analysis that we purist security/identity practitioners claim everyone always does. (Of course, nobody actually does this – they act based on past experience and retain the right to investigate suspicious activity.)
OK, so what’s my point?

Simply this axiom: centralization is good for creating the nucleus of common practices required to build critical mass; but the scale and rapid expansion seen in internet-scale technologies can only occur when those common practices are subsumed into standards, freeing the innovators to do unexpected things upon a solid, standardized foundation.

A somewhat strange post, but I think the ‘identity’ industry is currently in the throes of transitioning from ‘central’ to ‘standardized’ even before fully developing what those common practices must be: and that’s why it is such an interesting space to be engaged in.

Who Sez? An Alternative ID Ecosystem Viewpoint

A debate continues to simmer in IDESG about the nature of an Identity Ecosystem; there are many points of view. For an organization like IDESG, a unified concept must emerge.

Approaches to define the Identity Ecosystem include: setting a baseline of requirements that must be met by ecosystem participants; definition of trust marks; Internet nutrition labels; establishing a formal certification process; creating the ability for ecosystem participants to advertise their security, privacy and other capabilities; functional models; interaction models; protection levels; assurance levels; self-selection; independent attestation; and the list goes on.

Let me suggest and explore another way to perceive and describe an ID Ecosystem aligned with the NSTIC Guiding Principles.

A common thread linking the approaches listed above is that each one relies on one or more Authorities to determine whether a prospective ecosystem participant’s policies, technologies and operations are compatible with the NSTIC Vision.

Following this line of thought, envision a governance structure that seeks to influence the behaviour of the Authorities that grant rights of access and recognition to ecosystem participants. Rather than IDESG attempting to define and quantify every action, implementation or intent in the entire ecosystem, why not define the rules of being an Authority and certify those entities wishing to be Authorities?

What would an Authorities-driven Identity Ecosystem look like?
(And remember that what follows is a rosy view, biased toward my ways of conceptualizing the ecosystem.)

  • Authorities exist that can certify or recognize ecosystem participants that adhere to that Authority’s rules
    • Authorities create and manage rules that are compatible with NSTIC Principles
  • Authorities become authoritative when a) they demonstrate rule compatibility to an IDESG Authority and b) entities decide to follow their rules and ‘join’
  • Each Authority would decide how to certify or recognize their participants – some might use formal Trust Framework Provider methods, others might allow self-declaration: the choices would be defined in their rules
  • Each ecosystem participant, including the Authorities, is accountable to the Authorities whose rules they follow

Some issues remain:

  • Who sez that the IDESG Authority has any sway over any other entity?

Is this description simply a description of the current internet using different words?

I think it is a new thing because it focuses on approaches to make the IDESG organization and members credible and influential in the internet space.

In order for an IDESG Authority to have any value, a critical mass of significant influential entities must join and put their weight behind the organization. If a critical mass of interesting organizations choose to acknowledge this web of authorities and ways to align to the NSTIC Principles, we begin the positive cycle.

It’s a big ‘IF’. And it’s one of the key assumptions under which IDESG, Inc. was formed.

My opinion: focus the work of IDESG on making it easy for organizations and individuals to demonstrate support for NSTIC. NSTIC’s vision is a good one. Build on existing goodwill to create a source of co-recognition. Reinforce the positive actions that IDESG members take: getting certifications; doing regular assessments; actively protecting privacy and security. IDESG should use inclusive selection methods to increase membership and to create a self-enforcing or peer-enforcing environment:

  • Delegate and distribute ‘authority’ to the communities, federations and other self-identifying groups
  • Avoid over-specification, bureaucracy, self-limiting approaches, command-and-control centralization
  • Educate participants; develop and give them the tools to make informed decisions about the agreements and interactions they wish to use. Make it easier for others to teach good practices and warn of dangers. Make the safest options the easiest.
  • Minimize barriers to entry
  • Reinforce positive behaviours
  • Build goodwill and peer value

Trust Framework as Information Sharing – A thought experiment

Over the last year, I’ve been thinking about the nature, structure and governance models of Trust Frameworks.

The work that I do with IDESG is focused on establishing an ‘Identity Ecosystem’ – which, in effect, means finding ways for existing and new Identity federations, Trust Frameworks and standalone Identity Solutions (the Ecosystem Participants) to exchange information (assertions) with their partners. Ecosystem Participants need to evaluate the risks of accepting that information for use in their decision-making processes.

I have closely examined the FICAM Trust Framework Solutions Trust Criteria, NIST standards, the Trust Framework Provider Acceptance Program and Approved Trust Framework Providers’ frameworks, to seek understanding of different approaches to evaluate transaction partners who might become Identity Federation partners. At root, these approaches define requirements that must be met, criteria for conformity evaluation, risk evaluation methods and assessment rules which must be considered when conducting Identity-related online transactions.

A couple years ago, I decided to examine the relationships between components of online Identity Solutions using a very particular lens: the Information Sharing lens. That analysis helped shape conversations with FICAM and Government of Canada about reference architectures and mechanisms to assign roles and responsibilities for identity-related transactions.

I have recently started to immerse myself in the InterPARES Trust project:
“InterPARES Trust (ITrust 2013-2018) is a multi-national, interdisciplinary research project exploring issues concerning digital records and data entrusted to the Internet. Its goal is to generate theoretical and methodological frameworks to develop local, national and international policies, procedures, regulations, standards and legislation, in order to ensure public trust grounded on evidence of good governance, a strong digital economy, and a persistent digital memory.” The project is researching ways to determine digital record authenticity, among other related information management subjects.

What if we look at Trust Frameworks through the information lens?

For this thought experiment, treat everything as an information transmission, processing or storage event. For example, if a user authenticates their credential/token with a verifier, information from the credential/token could be processed, an assertion of ‘logged in’ could be transmitted, and logs stored about the events.
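
That example can be sketched in code, with each step recorded as a processing, transmission or storage event. All names below are illustrative, not part of any framework discussed here:

```python
# Sketch of the "everything is an information event" lens: a credential
# verification modeled as processing events, a transmission event (the
# 'logged in' assertion), and storage events (log entries).
from datetime import datetime, timezone

EVENT_LOG = []  # storage: the record of what happened

def record(kind: str, detail: str) -> None:
    EVENT_LOG.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "kind": kind,     # "processing", "transmission", or "storage"
        "detail": detail,
    })

def authenticate(token: str, expected: str):
    record("processing", "credential/token checked by verifier")
    if token != expected:
        record("storage", "failed authentication logged")
        return None
    assertion = {"status": "logged in", "method": "token"}
    record("transmission", "assertion sent to relying party")
    record("storage", "successful authentication logged")
    return assertion

assertion = authenticate("s3cret", "s3cret")
print(assertion["status"], len(EVENT_LOG))
```

Viewed this way, every clause of a Trust Framework can be read as a constraint on one of these three event types.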

When attempting to transact, subscribers to a Trust Framework seek to:

  • Understand what information is needed of them in order to perform the transaction
  • Do the functions needed to prepare that information and transmit it as needed
  • Specify what information they need, in a way that includes metadata about quality, source, encoding, etc.
  • Acquire the information they need to make transaction or risk decisions
  • Determine the authenticity and sufficiency of the information received, to the degree needed
  • Complete the transaction based on decisions made about the information processed
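
The needs in the bullets above could be sketched as data plus a sufficiency check, along these lines (field names and quality levels are illustrative assumptions, not a proposed standard):

```python
# Sketch: a subscriber's statement of the information it needs, including
# metadata about quality, source and encoding, plus a simple sufficiency
# check on what is received. All field names are illustrative.
NEEDED = {
    "date_of_birth": {"min_quality": "verified", "encoding": "ISO 8601"},
    "legal_name":    {"min_quality": "self_asserted", "encoding": "UTF-8"},
}

QUALITY_RANK = {"self_asserted": 0, "verified": 1}

def sufficient(received: dict) -> bool:
    """Does the received information meet the stated needs?"""
    for field, need in NEEDED.items():
        item = received.get(field)
        if item is None:  # required field not supplied at all
            return False
        if QUALITY_RANK[item["quality"]] < QUALITY_RANK[need["min_quality"]]:
            return False  # supplied, but below the required quality
    return True

offer = {
    "date_of_birth": {"value": "1970-01-01", "quality": "verified"},
    "legal_name":    {"value": "A. Example", "quality": "self_asserted"},
}
print(sufficient(offer))  # True
```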

What if we use the paradigm of Information Sharing Agreements to codify the determinations and statements of ‘need’ in the bullets above?

In my next posts, I will look at the sequence of the overall transaction as it relates to information sharing, with the information sharing occurring pairwise, under known terms and conditions. In this way, I hope to learn new things about the nature and structure of information sharing agreements covering these transactions.

In this way, I will try to lead the thought experiment through to what Federation Agreements do for participants today, and what model agreements would be of use in an ‘Ecosystem’ trust arrangement.

Portable Identity Information and Interoperable Credentials: How will we shift the burden of complexity away from your mom’s keyboard?

Often-cited target states for federated identity and credential solutions include statements like: “Credentials must be interoperable”; “Identity Information must be portable”; “Users must have choice in number, type and source of credential”; “User must have control over disclosure and use of identifying information”; “Usage of credential must not be traceable back to the user, if the user requires it”.

It occurs to me (and I’m certainly not the first person to realize this) that there is a heavy burden of complexity and risk inherent in solution spaces for those kinds of requirements.

Let me explain:

Today’s nasty conglomeration of multiple username/password silos, 2-step authentication systems, 2-factor authentication systems, attribute verifiers (a.k.a. data brokers) and nascent federated credential solutions actually satisfies many of the requirement statements above.

We are witnessing the rise of the “mega-ID Provider”: Google, Amazon Web Services, PayPal, Salesforce, Facebook, Twitter and other massive companies are turning up authentication interfaces for consumption by other eService Providers and Relying Parties. They are not particularly interoperable – the NASCAR user interface used to pick your Authentication Provider is proof of this. (Sidebar: I was just informed that the NASCAR is called the NASCAR because of the long line of logos streaming down the UX, like sponsor decals on a race car – I found this tragic and funny at the same time.)

What solutions are being promoted to shift the burden of complexity and non-secure credentials away from your mom? (this list is not pure – I’ve shifted some definitions to suit my purposes)

Hubs: an interconnection point that does protocol and information format conversions between many Relying Parties and many ID Providers. This might possibly be IDaaS.

Brokers: a Hub that also offers anonymizing services – directed identifiers provided to RP and IDP in a way that makes it very difficult to capture a comprehensive picture of where a user credential has been used, even with some collusion.
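
The directed-identifier idea can be sketched with a keyed hash: the Broker derives a different opaque identifier per (user, relying party) pair, so colluding RPs see unrelated values. HMAC-based derivation is one common technique for this; the key and names here are illustrative:

```python
# Sketch of the 'directed identifier' idea a Broker could use: derive a
# different opaque identifier for each (user, relying party) pair, so two
# RPs comparing notes see unrelated values. The secret never leaves the
# broker, so RPs cannot recompute or correlate the identifiers.
import hashlib
import hmac

BROKER_SECRET = b"broker-secret"  # illustrative; held privately by the broker

def directed_identifier(user_id: str, relying_party: str) -> str:
    msg = f"{user_id}|{relying_party}".encode()
    return hmac.new(BROKER_SECRET, msg, hashlib.sha256).hexdigest()

a = directed_identifier("alice", "rp-one.example")
b = directed_identifier("alice", "rp-two.example")
print(a != b)  # same user, uncorrelatable identifiers
```

The derivation is deterministic, so the same user always gets the same identifier at a given RP, which is what lets the RP maintain an account without learning who the user is elsewhere.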

Federated Credentials: IDP and RP using a commonly-agreed set of protocols, policies and trust rituals. Very Enterprise-y where a user is bound to an IDP but in return is able to authenticate anywhere in the Federation.

Active User Agents: User Centric solutions that keep the keys, authorization policies and other complex stuff close to the user. User Agents could collect up a bunch of different ‘identities’ and credentials for use in whatever pattern the user desires.

Personal Clouds: Bits of Personal Cloud functionality could be the Active User Agent role, but cloud based.

So what’s it going to be?

Is the price of convenience and security for you, the Online Consumer-Citizen, going to be a transfer of the ‘hard parts’ and complexity over to big Broker/Hubs that promise to do no harm? This might address the harder problems of discovery and provisioning – centralized integration points are easier to deploy.

Or, will the complexity simply be shifted just a little further away from your chair, into a User Agent that is under your direction? This gives you more (apparent) control, but makes it harder to get seamless, simple services connected when and where you want them – and decentralized integration will be prone to today’s problems of provisioning, deprovisioning and broken linkages.