
Updated November 11, 2018

Thoughts about user data agreements on the internet

As I have been talking with more people and companies about the concepts of ‘consent receipt’ and how Kantara Initiative has developed it, I have been looking for a better framework within which to grow the concept and plan out future enhancements. This article describes what I have discovered and sets out some ideas for forward planning.

Kantara Initiative Work Groups on Data Sharing and Consent

KantaraInitiative.org is the global consortium improving trustworthy use of identity and personal data through innovation, standardization and good practice. To this end, we have established two Work Groups to address information-sharing and consent-management topics.

The Consent & Information Sharing WG (CIS WG) is one of the original work groups and has existed continuously since Kantara began ten years ago. It is the home of work like the Standard Data Label, User Submitted Terms, the Kantara Consent Receipt Specification v1.1, and other projects and research related to information sharing from the person’s point of view.

The Consent Management Solutions WG (CMS WG) was started in 2017/2018 to create a library of consent management practices related to agreements to process personal data.

The two work groups are addressing different aspects of the same topic: how to legitimately empower the individual to make decisions about what personal data they wish to provide to organizations, and to give organizations tools to assist the individual in making these choices.

Planning for Future Work Topics

In order to plan for future work in the work groups, I propose to take a step back and look at the broader context of the work at Kantara. The CIS WG has been entirely focused on developing and publishing the Consent Receipt Specification for the last couple of years. Now we need to see if the receipt concept remains fit for purpose, and what adjustments are needed.

Finding the Right ‘Scaffold’ to Examine the Work Plans

Broadly, the two Kantara work groups are dealing with how individuals and organizations should act when the individual agrees to give data to the organization, or when the organization gets data about an individual. To limit the scope of discussion, the CIS WG decided to focus on the lawful basis of ‘data subject consent’ as described in GDPR and similar statutes.

But really, we are examining the agreement between data subject and data controller[1].

When a data controller offers services or products to a data subject, they typically specify terms of service and the consideration required in exchange for that service or product. The consideration might be financial, or it might be collection of the data subject’s personal data [NOTE: The work group pointed out to me this week that this characterization of personal data as a form of currency is incorrect. The data provided must be necessary for provision of the service – it is not to be considered as a trade-off]. The data subject is prompted to accept the terms, or to acknowledge the acceptance by continuing to use the service or product.

Thinking about this more deeply, I realized that we might be able to borrow concepts from contract law in common law to provide ‘scaffolding’ upon which to examine our work. This is obviously not a novel idea, but it has taken me a while to realize it.

Note that while I describe concepts and use terminology from contract law, I don’t believe that all data collection and processing agreements are necessarily contracts. [NOTE: AND the ‘valuable consideration’ described below is not ‘personal data’]

Basic Concepts in Contract Law

A contract is an agreement giving rise to obligations which are enforced or recognized by law[2].

There are three main activities required to enter into a contract: an agreement (a ‘meeting of minds’ consisting of an offer and acceptance); an intention to create a legally binding agreement; and consideration (‘something of value’ which is given for a promise and is required in order to make the promise enforceable as a contract) in both directions.
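As a concrete illustration, the three elements can be modelled as a simple checklist. This is a minimal sketch, and the class and field names are my own illustrative assumptions, not legal terms of art:

```python
from dataclasses import dataclass

@dataclass
class Agreement:
    """Illustrative checklist for the three elements of a contract."""
    offer_made: bool                   # a definite offer exists
    offer_accepted: bool               # acceptance: the 'meeting of minds'
    intent_to_be_bound: bool           # intention to create legal relations
    consideration_from_offeror: bool   # something of value flows one way...
    consideration_from_offeree: bool   # ...and something of value flows back

    def is_contract(self) -> bool:
        # All three elements (agreement, intention, two-way consideration)
        # must be present for the promise to be enforceable as a contract.
        return all([self.offer_made, self.offer_accepted,
                    self.intent_to_be_bound,
                    self.consideration_from_offeror,
                    self.consideration_from_offeree])
```

A click-through terms-of-service flow, for example, may satisfy offer and acceptance while leaving the intention and consideration questions murky, which is exactly the ambiguity the work groups are probing.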

Now, relate this to the paragraph in the previous section:

When a data controller offers services or products to a data subject, they typically specify terms of service and the consideration required in exchange for that service or product. The consideration might be financial, or it might be collection of the data subject’s personal data. The data subject is prompted to accept the terms, or to acknowledge the acceptance by continuing to use the service or product.

I note with interest that the GDPR Articles and Recitals also use similar terms when describing the interaction.

Now, let’s examine the work plan in light of this idea of using contract concepts.

The “Data Collection and Processing Agreement” Concept

Consider the following diagram, which represents the text quoted in the previous section:

[Diagram: Agreement concept]

We can now ask questions about each ‘terminal leaf’ phrase:

  • Is it clear to all parties that they are actually or effectively entering into a legal contract, even though the user experience may not look like a legal contract (there is no signature ceremony)?
  • Is this arrangement fair and reasonable?
  • Is there a power imbalance between the parties?
  • Are the rights and obligations clear to the parties?
  • Are the implications of the agreement and the consideration communicated clearly?
  • Do both parties have the same opportunities for record-keeping?
  • How are updates and changes managed?
  • Is broad interoperability desired at this point in the interaction?
  • Does the party have sufficient information to exercise their rights at a later time, or to change their mind?

We can also position work group publications and deliverables as tools or remedies that modify the answers to those questions.

For example, the Kantara consent receipt concept and specification address, among others, the ‘same opportunities for record-keeping’ and ‘sufficient information to exercise rights’ questions.
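To give a flavour of how a receipt levels the record-keeping field, here is a simplified sketch of building such a record. The field names below are illustrative assumptions only; the actual Kantara Consent Receipt Specification v1.1 defines its own schema:

```python
import json
import time
import uuid

def make_consent_receipt(principal_id, controller, purposes):
    """Build a simplified consent-receipt record.

    Field names are illustrative only; the real Consent Receipt
    Specification v1.1 defines its own, richer schema.
    """
    return {
        "receiptId": str(uuid.uuid4()),   # unique record identifier
        "timestamp": int(time.time()),    # when the agreement was made
        "piiPrincipal": principal_id,     # the data subject
        "piiController": controller,      # the organization collecting data
        "purposes": purposes,             # why the data is being collected
    }

receipt = make_consent_receipt("alice@example.com",
                               "Example Corp",
                               ["account management", "newsletter"])
print(json.dumps(receipt, indent=2))
```

Both the data subject and the data controller could retain a copy of the same record, giving each party the same evidence of what was agreed, by whom, and when.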

This approach should give the WG participants a way to identify areas needing work, and to articulate the rationale for doing that work.

Requirements Arising from Regulations

Note that the previous section does not explicitly call out the lawful basis of ‘consent’.

In order to apply this analysis correctly, we must include analysis of regulatory requirements and the obligations that regulations place on each party in the agreement.

Consider the following diagram which (loosely) describes GDPR-sourced requirements of the data controller:

[Diagram: GDPR-sourced requirements]

These requirements stipulate what must be communicated in the ‘offer’ stage of the agreement. They also stipulate some required elements in the terms of the agreement. GDPR also hints at what ‘acceptance’ should look like (particularly in the situation of ‘consent’).

Next Steps

Join us! If this article interests you, and you want to help us influence the future of interactions between you and internet companies, these Kantara work groups are the place to do it! There is no cost, but you are required to complete a Group Participation Agreement related to intellectual property rights.

The Consent & Information Sharing work group is currently deciding what the next pieces of work should be. I hope to convince everyone that this analysis approach will yield good results.


Notes:

[1] I will use GDPR terminology in this article for consistency – the argument applies to most privacy and data protection legislation and regulatory environments.

[2] The material about contract law is derived from “At a Glance Guide to Basic Principles of English Contract Law”, Advocates for International Development, undated. Accessed October 2018. http://www.a4id.org/wp-content/uploads/2016/10/A4ID-english-contract-law-at-a-glance.pdf

 


Where do standards come from and how are they used for online services?

Products or services are designed to meet a need. In most cases, many different products or services are designed to meet the same need or purpose. How can a supplier or consumer determine whether a product or service meets the need? And how can they compare the quality of different but similar products or services?

In a word: standardization!

In a few more words: Standards specify the requirements that must be met by a product or service to be considered as meeting the need.

Standards of practice and technical standards are typically developed by trade associations, often due to regulatory requirements or the need for many different products or services to work together or inter-operate. Standards reflect the agreed-upon requirements.

For example, did you realize that brake lights on cars use exactly the same red color? There’s a standard for that: SAE J 578. It specifies the requirements for “red” based on visibility, driver reaction, confusion with other colors used in car lights and other factors.

Products or services are assessed for conformity to the standard by using conformity assessment criteria. These criteria are written in a way that allows an assessor to determine whether the product meets the requirements described in the standard. Assessments are repeated from time to time to ensure the product or service continues to conform to the standard. The criteria and requirements may be very detailed and precise, or looser, depending on the intent of the standard’s creators.
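The assessment loop just described can be sketched as a set of named pass/fail checks applied to a product’s measured properties. The criteria and thresholds below are invented for illustration; they are not taken from any real standard such as SAE J578:

```python
# Hypothetical conformity assessment: each criterion is a named
# pass/fail check applied to a product's measured properties.
def assess(product, criteria):
    """Return (conforms, failures) for a product against criteria."""
    failures = [name for name, check in criteria.items()
                if not check(product)]
    return (len(failures) == 0, failures)

# Invented criteria for a hypothetical brake-light standard
# (thresholds are made up for this sketch).
criteria = {
    "red enough": lambda p: p["dominant_wavelength_nm"] >= 615,
    "bright enough": lambda p: p["intensity_cd"] >= 80,
}

lamp = {"dominant_wavelength_nm": 630, "intensity_cd": 110}
ok, failed = assess(lamp, criteria)
print(ok, failed)
```

Re-running `assess` periodically against the same criteria mirrors the repeated assessments that keep a product’s conformity claim current.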

Standards exist for a very broad range of products and services. In the simplest terms, standards are created whenever there is a need for a level of quality, consistency or safety. Standards can be developed by a single producer, an association, a country or amongst many countries. In fact, international standards underpin many aspects of international trade.

In the world of trusted use of personal data and identity, conformity to standards plays an important role. Standards exist for secure communications protocols, encryption methods, authenticator strength and mechanisms by which servers identify each other, among many others. The infrastructure of authentication, authorization and identity technologies relies on standards for anything to work together.

So, if you’ve ever wondered how products and services from different producers can work together without major hassles, they have probably been assessed for conformity to standards. Think about how many apps, web sites, systems and organizations communicate on the internet – it’s all based on standards.

Identification as the Objective for Designers

Why do system designers insist on bugging us to log on every time we want to do anything?
In this blog, I’d like to push back on the “IDAM/ICAM industry” core premise that establishing the identity of the person or non-person entity, and authenticating them, are the objectives of most online systems. This is, perhaps, a technology- or product-centric way of thinking.
I propose to you that we reset our perceptions of that ‘who are you’ event: it’s all about Identification; authentication is one means to that outcome.
Identification: determination that the User or Subject of the online interaction is the singular entity within the in-scope population. Call it needle and haystack if you like.
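As a toy sketch of this needle-and-haystack view, identification succeeds only when the observed signals match exactly one entity in the in-scope population. The attribute names and data here are invented for illustration; real systems would weigh device, network and behavioural signals:

```python
def identify(population, observed):
    """Return the single entity whose known attributes match all the
    observed signals, or None if the match is not unique."""
    matches = [entity for entity in population
               if all(entity.get(k) == v for k, v in observed.items())]
    return matches[0] if len(matches) == 1 else None

population = [
    {"id": "u1", "device": "phone-123", "locale": "en-CA"},
    {"id": "u2", "device": "phone-456", "locale": "en-CA"},
]
print(identify(population, {"device": "phone-123"}))  # uniquely identifies u1
print(identify(population, {"locale": "en-CA"}))      # ambiguous, so None
```

The point of the sketch: if the available signals already single out one entity, no logon ceremony is needed; if they are ambiguous, that is the moment to step up with authentication.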
Most of us experience authentication events in the form of logons. They are disruptive, and probably unnecessary in most cases. Take a look at the work of AccountChooser.com for more background on this and, if you can, attend Pam Dingle’s presentations about it.
Wouldn’t it be great if architects and designers attempted to identify the user without the traditional logon flows? Given that device possession these days gets us pretty close to individual identification, and given the opportunity to step up with authentication for transactions with higher impact, do you, the reader, have reasons to insist on user action on return visits?
When I get the expected incredulous looks, I simply ask people when they last actually had to ‘log on’ to a Google account. Usually, the answer is ‘when I replaced my phone/iPad/computer’.
So what does Google know about you and your devices that is apparently enough to pass the first gate?
Please note that this blog is not about authentication federation vs ‘in-house’ authentication. It’s about stating the true requirement for the user interaction: the requirement to Identify in order to act on policies for entitlement, authorization and access controls.
Maybe if design thinking could be shifted, we’d see some cool innovations that don’t involve yet another credential issuance.
What do you think?
P.S. please don’t use the ‘finding the browser cookie is an authentication activity’ argument – I choose to split the discussion based on user interaction for better or worse.

Account Recovery should be the main authentication flow!

I’ve been thinking about “what’s next” in the world of wide-scale consumer authentication systems.
We hear loud proclamations of death to passwords, when what we all really want is death to intrusive, poorly designed, stop-gap logon systems.
So how should “better” be defined in this space?
Here’s one proposal that might change our expectations for the “who are you” experience: Treat every user authentication and identification event as an account recovery.
We often see asymmetric authentication designs: the front/main channel is smooth-ish and a well-designed experience; careful design is used to avoid antagonizing the user; and we get long periods between logons, protected (we assume) by aggressive anti-fraud and anti-hacker detection and response systems in the background.
But when it comes to account recovery, we as users get the full range of inconvenient, painful, probably less secure options: secret questions; one-time links by email; SMS clues; code sheets and so on.
Why can’t the authentication providers give us a few strong multi-factor, physical authentication credentials (devices, tokens or cards) when they enrol us?
Then design the system to use those authentication credentials for any logon event – no ‘weaker’ flows that subvert the strength of the primary flow.
The consumer would be instructed to lock one of those physical credentials in a safe or a bank safe deposit box – to be retrieved in case of emergency to re-establish connection to their online accounts. But the flow would be basically identical to the main flow – so no user re-training required.
Given the anecdotes of high account recovery costs, wouldn’t it be cost effective to establish such a system?
Too simple? The devil is in the details, I suppose, and in the capabilities gap between professional authentication providers and closed, proprietary security systems.

Why is online identity such a hot subject? Thoughts on current trends.

Interesting trends are emerging in the Online Identity circles I travel. One trend is the shift away from formalized, enterprise, centrally-regulated Federations towards ad-hoc, consumption-driven, transaction-oriented architectures and policies.
On one hand, I see advancement in ‘formal Federation’, which I will loosely define as a pre-evaluated structuring of centrally defined requirements and criteria, realized as a collection of entities who share common policy, technology and security domain boundaries. It’s the ‘classic’ use case where one enterprise accepts a business partner enterprise’s user authentication.
On the other hand, there is a large thought-community emerging where the central concern is the ‘transaction’: the interaction between online service provider and online service recipient. The transaction-oriented requirements dictate that risk, qualifications and identification should be evaluated at ‘run time’ rather than at ‘design time’. Themes include Attribute-Based Access Control, User-Managed Access, electronic consent notices, step-up authentication and other ‘assertion’ evaluation schemes.
This is a ‘frothy’ situation – there are no pervasive solutions and everyone is attempting to locate the critical problem space which needs fixing. Arguments and philosophical approaches erupt frequently – which is stimulating more energy and effort. A good thing.
One challenge that ‘pure’ Federation cannot address is the ad-hoc nature of online interactions. Predicting in advance which identification authorities and policy enforcers are needed is a wicked problem and likely intractable. However, establishing common protocols and information-sharing processes could allow transaction participants to discover enough about their opposite number to do the risk/cost/benefit analysis that we purist security/identity practitioners claim they always do. (Of course, nobody actually does this; they act based on past experience and retain the right to investigate suspicious activity.)
OK, so what’s my point?
Simply the axiom that centralization is good for creating the nucleus of common practices required to build critical mass, but the scale and rapid expansion seen in internet-scale technologies and approaches can only occur when those common practices are subsumed into standards, freeing innovators to do unexpected things upon a solid, standardized foundation.
A somewhat strange post, but I think the ‘identity’ industry is currently in the throes of transitioning from ‘central’ to ‘standardized’ even before fully developing what those common practices must be: and that’s why it is such an interesting space to be engaged in.

Who Sez? An Alternative ID Ecosystem Viewpoint

A debate continues to simmer in IDESG about the nature of an Identity Ecosystem; there are many points of view and opinions. For an organization like IDESG, a unified concept must emerge.

Approaches to define the Identity Ecosystem include: setting a baseline of requirements that must be met by ecosystem participants; definition of trust marks; Internet nutrition labels; establishing a formal certification process; creating the ability for ecosystem participants to advertise their security, privacy and other capabilities; functional models; interaction models; protection levels; assurance levels; self-selection; independent attestation; and the list goes on.

Let me suggest and explore another way to perceive and describe an ID Ecosystem aligned with the NSTIC Guiding Principles.

A common thread that links the above-listed approaches together is that each one declares one or more Authorities that determine if the prospective ecosystem participant’s policies, technologies and operations are compatible with the NSTIC Vision.

Following this line of thought, envision a governance structure that seeks to influence the behaviour of the Authorities that grant rights of access and recognition to ecosystem participants. Rather than IDESG attempting to define and quantify every action, implementation or intent in the entire ecosystem, why not define the rules of being an Authority and certify those entities wishing to be Authorities?

What would an Authorities-driven Identity Ecosystem look like?
(Remember that what follows is a rosy view, biased toward my way of conceptualizing the ecosystem.)

  • Authorities exist that can certify or recognize ecosystem participants that adhere to that Authority’s rules
    • Authorities create and manage rules that are compatible with NSTIC Principles
  • Authorities become authoritative when a) they demonstrate rule compatibility to an IDESG Authority and b) entities decide to follow their rules and ‘join’
  • Each Authority would decide how to certify or recognize their participants – some might use formal Trust Framework Provider methods, others might allow self-declaration: the choices would be defined in their rules
  • Each ecosystem participant, including the Authorities, is accountable to the Authorities whose rules they follow

Some issues remain:

  • Who sez that the IDESG Authority has any sway over any other entity?

Is this simply a description of the current internet using different words?

I think it is a new thing because it focuses on approaches to make the IDESG organization and members credible and influential in the internet space.

In order for an IDESG Authority to have any value, a critical mass of influential entities must join and put their weight behind the organization. If enough significant organizations choose to acknowledge this web of authorities as a way to align with the NSTIC Principles, the positive cycle begins.

It’s a big ‘IF’. And it’s one of the key assumptions under which IDESG, Inc. was formed.

My opinion: focus the work of IDESG on making it easy for organizations and individuals to demonstrate support for NSTIC. NSTIC’s vision is a good one. Build on existing goodwill to create a source of co-recognition. Reinforce the positive actions that IDESG members take: getting certifications; doing regular assessments; actively protecting privacy and security. IDESG should use inclusive selection methods to increase membership and to create a self-enforcing or peer-enforcing environment:

  • Delegate and distribute ‘authority’ to the communities, federations and other self-identifying groups
  • Avoid over-specification, bureaucracy, self-limiting approaches, command-and-control centralization
  • Educate participants, develop and give them the tools to make informed decisions about the agreements and interactions they wish to use. Make it easier for others to educate about good practices and dangers. Make the safest options easiest.
  • Minimize barriers to entry
  • Reinforce positive behaviours
  • Build goodwill and peer value