Kantara Initiative Work Groups on Data Sharing and Consent

Updated November 11, 2018

Thoughts about user data agreements on the internet

As I have been talking with more people and companies about the concepts of ‘consent receipt’ and how Kantara Initiative has developed it, I have been looking for a better framework within which to grow the concept and plan out future enhancements. This article describes what I have discovered and sets out some ideas for forward planning.

Kantara Initiative Work Groups on Data Sharing and Consent

KantaraInitiative.org is the global consortium improving trustworthy use of identity and personal data through innovation, standardization and good practice. To this end, we have established two Work Groups to address information-sharing and consent-management topics.

The Consent & Information Sharing WG (CIS WG) is one of the original work groups that has existed continuously since Kantara began 10 years ago. It is the home of work like the Standard Data Label, User Submitted Terms, the Kantara Consent Receipt Specification v1.1, and other projects and research related to information sharing from the person’s point of view.

The Consent Management Solutions WG (CMS WG) was started in 2017/2018 to create a library of consent management practices related to agreements to process personal data.

The two work groups are addressing different aspects of the same topic: how to legitimately empower the individual to make decisions about what personal data they wish to provide to organizations, and to give organizations tools to assist the individual in making these choices.

Planning for Future Work Topics

In order to plan for future work in the work groups, I propose to take a step back and look at the broader context of the work at Kantara. The CIS WG has been entirely focused on developing and publishing the Consent Receipt Specification for the last couple of years. Now we need to see if the receipt concept remains fit for purpose, and what adjustments are needed.

Finding the Right ‘Scaffold’ to Examine the Work Plans

Broadly, the two Kantara work groups are dealing with how individuals and organizations should act when the individual agrees to give data to the organization, or when the organization gets data about an individual. To limit the scope of discussion, the CIS WG decided to focus on the lawful basis of ‘data subject consent’ as described in GDPR and similar statutes.

But really, we are examining the agreement between data subject and data controller[1].

When a data controller offers services or products to a data subject, they typically specify terms of service and the consideration required in exchange for that service or product. The consideration might be financial, or it might be collection of the data subject’s personal data [NOTE: The work group pointed out to me this week that this characterization of personal data as a form of currency is incorrect. The data provided must be necessary for provision of the service – it is not to be considered as a trade-off]. The data subject is prompted to accept the terms, or to acknowledge the acceptance by continuing to use the service or product.

Thinking about this more deeply, I realized that we might be able to borrow concepts from contract law in common law to provide ‘scaffolding’ upon which to examine our work. This is obviously not a novel idea, but it has taken me a while to realize it.

Note that while I describe concepts and use terminology from contract law, I don’t believe that all data collection and processing agreements are necessarily contracts. [NOTE: AND the ‘valuable consideration’ described below is not ‘personal data’]

Basic Concepts in Contract Law

A contract is an agreement giving rise to obligations which are enforced or recognized by law[2].

There are three main activities required to enter into a contract: an agreement (a ‘meeting of minds’ consisting of an offer and acceptance); an intention to create a legally binding agreement; and consideration (‘something of value’ which is given for a promise and is required in order to make the promise enforceable as a contract) in both directions.

Now, relate this to the paragraph in the previous section:

When a data controller offers services or products to a data subject, they typically specify terms of service and the consideration required in exchange for that service or product. The consideration might be financial, or it might be collection of the data subject’s personal data. The data subject is prompted to accept the terms, or to acknowledge acceptance by continuing to use the service or product.

I note with interest that the GDPR Articles and Recitals also use similar terms when describing the interaction.

Now, let’s examine the work plan in light of this idea of using contract concepts.

The “Data Collection and Processing Agreement” Concept

Consider the diagram which is a representation of the same text in the previous section:


We can now ask questions about each ‘terminal leaf’ phrase:

  • Is it clear to all parties that they are actually or effectively entering into a legal contract, even though the user experience may not look like a legal contract (there is no signature ceremony)?
  • Is this arrangement fair and reasonable?
  • Is there a power imbalance between the parties?
  • Are the rights and obligations clear to the parties?
  • Are the implications of the agreement and the consideration communicated clearly?
  • Do both parties have the same opportunities for record-keeping?
  • How are updates and changes managed?
  • Is broad interoperability desired at this segment?
  • Does the party have sufficient information to exercise their rights at a later time, or to change their mind?

We can also position work group publications and deliverables as tools or remedies that modify the answers to those questions.

For example, the Kantara consent receipt concept and specification address, among others, the ‘same opportunities for record-keeping’ and ‘sufficient information to exercise rights’ questions.
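To make the record-keeping point concrete, here is a minimal sketch of a consent-receipt-like record in Python. The field names are paraphrased from the Consent Receipt Specification v1.1 and the values are invented for illustration; consult the specification itself for the normative schema and required fields.

```python
import json
import uuid
from datetime import datetime, timezone

def make_consent_receipt(principal_id, controller_name, policy_url, purposes):
    """Build a minimal consent-receipt-like record.

    Field names paraphrase the Kantara Consent Receipt Specification
    v1.1; the specification defines the normative schema.
    """
    return {
        "version": "KI-CR-v1.1.0",
        "consentReceiptID": str(uuid.uuid4()),
        "consentTimestamp": int(datetime.now(timezone.utc).timestamp()),
        "collectionMethod": "web form",          # how consent was obtained
        "piiPrincipalId": principal_id,          # the data subject
        "piiControllers": [{"piiController": controller_name}],
        "policyUrl": policy_url,                 # the 'terms' of the offer
        "services": [{
            "service": "example-service",
            "purposes": purposes,                # why the data is processed
        }],
    }

# The data subject keeps a copy of the same record the controller keeps,
# giving both parties 'the same opportunities for record-keeping'.
receipt = make_consent_receipt(
    principal_id="alice@example.com",
    controller_name="Example Corp",
    policy_url="https://example.com/privacy",
    purposes=["account registration"],
)
print(json.dumps(receipt, indent=2))
```

Because the receipt is a self-contained record of who collected what, from whom, and for which purposes, it also supports the ‘sufficient information to exercise rights’ question: the data subject can later point at the receipt when making an access or withdrawal request.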

This approach should give the WG participants a way to identify areas needing work, and to articulate the rationale for doing that work.

Requirements Arising from Regulations

Note that the previous section does not explicitly call out the lawful basis of ‘consent’.

In order to apply this analysis correctly, we must include analysis of regulatory requirements and the obligations that regulations place on each party in the agreement.

Consider the following diagram which (loosely) describes GDPR-sourced requirements of the data controller:


These requirements stipulate what must be communicated in the ‘offer’ stage of the agreement. They also stipulate some required elements in the terms of the agreement. GDPR also hints at what ‘acceptance’ should look like (particularly in the situation of ‘consent’).

Next Steps

Join us! If this article interests you, and you want to help influence the future of interactions between you and internet companies, these Kantara work groups are the place to do it! There is no cost, but you are required to complete a Group Participation Agreement related to intellectual property rights.

The Consent & Information Sharing work group is currently deciding what the next pieces of work should be. I hope to convince everyone that this analysis approach will yield good results.


[1] I will use GDPR terminology in this article for consistency – the argument applies to most privacy and data protection legislation and regulatory environments.

[2]The material about contract law is derived from “At a Glance Guide to Basic Principles of English Contract Law”, Advocates for International Development, undated. Accessed October 2018. http://www.a4id.org/wp-content/uploads/2016/10/A4ID-english-contract-law-at-a-glance.pdf


What’s in a Trust Framework?

I’ve been having long email discussions over the last month or so about the nature of “Trust Frameworks”. “Trust Framework” is one of those terms that everyone in the Federated Identity world loves to use, yet, when asked, most are hard pressed to come up with a reasonable explanation of what one is and how it works. Equally puzzling is the difficulty in discerning the difference between a Trust Framework and Federated Identity agreements.

There are a few characterizations that I like, especially the American Bar Association Identity Task Force’s characterization of a Trust Framework as a set of Operating Rules comprising Technical, Business and Legal rules.

I am currently working on a set of white papers that dig into this topic for the Kantara Initiative Identity Assurance Working Group (KI IAWG).

The questions under examination are:

Does the Kantara Identity Assurance Framework (KI IAF) address the needs of service providers seeking Approval under the Kantara Trust Framework?

If not, then what could or should be changed in the Assessment Scheme, the Service Assessment Criteria or the Rules of Assessment to make the KI IAF more flexible or modular in ways that still make sense for Trusted Identity Federations?

Along the way, I have been sidetracked to learn more about Trust Frameworks, Identity Federations and Federated Access Control patterns and architectures. I’ll be sharing bits and pieces of this over the next few blog posts.

But for now, I’ll leave you with a crappy diagram. This diagram is a hypothetical representation of the relationship between the Kantara IAF, Federated Identity models and the assessment criteria derived from the underlying models. The premise is: there exist many Federated Identity models. For each model, a set of assessment criteria can be developed. If a service provider demonstrates conformity to the criteria, then it adheres to that model. It may be possible to describe a “Root” or “Core” model with “Root” or “Core” assessment criteria which form a part of every other model and criteria in that family tree of Federated Identity Models.

In this discussion, it is important to remember that there is more than one way to model Federated Identity. In the circles that I travel at the moment, the US E-Authentication Program (OMB M-04-04 and NIST SP 800-63-2) and the Government of Canada Identity Management models are prevalent. These share common elements, such as having four Assurance Levels, and differ in areas such as the assignment of functional roles to the IDP or RP. However, it is quite simple to describe a common root model, with distinct model options and assessment criteria profiles, that encompasses both.

Hypothetical relationship between Federated Identity Model and Trust Framework

TF Root Model and SAC

ID Information Originators and Aggregators – If an RP can sue, should they care about Certification?

Warning: this post is loaded with jargon! Read at your own risk!

In the Kantara Initiative Identity Assurance Working Group today, we were discussing elements and patterns needed in the model for the credential-identity separation.

We spent a lot of time discussing the idea of an “Information Originator” versus an “Identity Aggregator” (not in the sense of a data broker), a.k.a. an Identity Manager or Attribute Manager in the ‘decoupled binding’ model mentioned in a previous post.
The concept is simple: an Originator establishes the first recorded version of attribute information. An Aggregator records an assertion of information from other sources, such as Originators, the Person, or other Aggregators.

For example: a Passport agency is the Originator for the passport number and the digital photograph associated with that number. A DMV is the Originator for the driver’s license number, photograph and physical description, but is an Aggregator for the name and address that appear on that license, because it gets that information from elsewhere and may or may not be able to know whether it is a ‘true copy’ of the original.
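The Originator/Aggregator distinction can be sketched as a small data structure. The type and field names below are invented for illustration; they do not come from any specification.

```python
from dataclasses import dataclass

@dataclass
class AttributeAssertion:
    """One attribute asserted by a provider (hypothetical model)."""
    name: str         # e.g. "passport_number"
    value: str
    source: str       # provider that asserted it to us
    originated: bool  # True if the source created the first recorded version

# A Passport agency originates the passport number...
passport_no = AttributeAssertion("passport_number", "X1234567",
                                 source="PassportAgency", originated=True)

# ...while a DMV originates the license number but merely aggregates
# the name, which it received from elsewhere.
license_no = AttributeAssertion("license_number", "D987",
                                source="DMV", originated=True)
name = AttributeAssertion("name", "Alice Example",
                          source="DMV", originated=False)

for a in (passport_no, license_no, name):
    role = "Originator" if a.originated else "Aggregator"
    print(f"{a.source} is {role} for {a.name}")
```

The same provider can thus play both roles at once, depending on which attribute is being examined.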
The sticky point for a Trust Framework Provider is this: to support a high assurance level for information assertions, does the TFP require that all attribute information asserted by an IDP or Attribute Provider originate from a TFP-Certified Originator, and that all parties between that Originator and the last IDP be certified as well?

This is another way of expressing the idea that, for a high LOA, every provider in the chain from origination of information to assertion of information to an RP must be certified to that trust framework.

This does not appear to be very Internet-y (large scale, agile, rapid evolution).

It also does not match the current practice of accepting Government Photo ID to establish LOA3 identity information at an IDP – the ID card is taken at face value and originates from an uncertified entity (e.g. the DMV).
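The strict rule can be stated as a one-line check. In this toy sketch the provider names and the certified set are invented, and no real trust framework registry is implied:

```python
# Providers certified by the hypothetical Trust Framework Provider.
CERTIFIED = {"IDP-A", "Aggregator-B"}

def chain_supports_high_loa(chain):
    """Return True only if every provider in the provenance chain,
    from Originator through to the asserting IDP, is TFP-certified."""
    return all(provider in CERTIFIED for provider in chain)

# The DMV is not certified, so a chain that starts there fails --
# mirroring the Government Photo ID example above.
print(chain_supports_high_loa(["DMV", "Aggregator-B", "IDP-A"]))  # False
print(chain_supports_high_loa(["Aggregator-B", "IDP-A"]))         # True
```

The `all(...)` form makes the scaling problem visible: one uncertified link anywhere in the chain defeats the assurance claim, which is exactly why the rule sits awkwardly with large-scale, rapidly evolving ecosystems.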
So, what is the answer? Does an RP need to consume information about the processes used to collect and aggregate identity attributes? Do they need this from all providers in the chain, or just the last provider (who must be certified)?

Does the certification process force the last provider to take on sufficient accountability and liability for the trust model to work?

For example: if an IDP chooses to obtain identity proof from a partner that is not certified, but the IDP is willing to take on sufficient accountability to compensate for this process – should the Trust Framework Provider permit the IDP to be certified?

Or does this all just mean that it’s up to that poor Relying Party to seek out enough attribute sources to establish identity information sufficient to make an access control decision?

Evolution of a Trusted Identity Model

In the Fall of 2012, I led the development and publication of a discussion paper for the Kantara Initiative Identity Assurance Working Group. The paper explored the concept of a general model for the Credential Provider – Identity Provider – Online Service Provider architecture.

I identified several abstract Roles, Actors, Functions and Relationships that appeared in many of the identity frameworks and architectures, then proceeded to create a general model showing how everything was related.

General Model of Credential-Identity Manager Role Separation

The paper received some interest when presented to a Kantara Initiative workshop in late 2012. It has also sparked renewed interest and expansion from Anil John, as he explores the concept and implications of Credential Manager and Identity Manager separation, in particular as it relates to US FICAM Trust Frameworks and NIST 800-63-1/2.

In several posts, Anil has refined the original general model in interesting ways, and now uses it to test ideas on anonymity, privacy, separation of roles and other topics.

The refined model looks like:


The names of the “Manager” roles have been changed to align to NIST 800-63 language (I was using terms typically used in Canada for these roles), and an explicit manager for the Token-Identity Link Record has been created (this becomes the ‘Credential Manager’ in future revisions).

These models have proven to be very effective in communicating to our peers about this topic. But there is unfinished business.

One of the original goals of the Kantara paper was to map out the Functions that could be assigned to Roles, and to dig deeper into the relationships between the Roles. The time has come to start this work.

It seems that several big topics are being debated in the Trusted Identity field these days:

  • Attribute providers are the real ‘information emitters’ – there’s no such thing as ‘Identity Providers’ – so how should Attribute providers be evaluated versus Level of Assurance schemes?
  • How can a Trust Framework Provider describe assessment criteria, then have an assessment performed reliably, when Attribute Providers and “Identity Oracles” proliferate?
  • What are the implications of aggregating attributes from several providers as it relates to assurance and confidence in accuracy and validity of the information?
  • How could subsets of the Roles and Functions in the model be aggregated dynamically (or even statically) to satisfy the identity assurance risk requirements of an RP?

A small team has been assembled under the auspices of the Kantara IAWG, and I hope to lead us through the conversation and debate as we extend the general model and explore it in light of temporal events, LOA mappings, Trust Framework comparability mappings, and core Trusted Identity Functions. Once we have figured out a richer general model, the meatier discussion can begin: which elements, agglomerations, and segments of the model can be assessed and issued a Trusted Identity Trustmark in a meaningful way – one that is valued by service providers and consumers.

The Trustmark question cannot be addressed until the general model is elaborated – and the outcome will be a greater community understanding of the nature of attribute managers, LOA and how attributes could be trusted in the to-be-invented identity attributes universe.

More to come…