The concept of “Privacy by Design” is an increasingly prominent topic in the privacy world. It’s not a new idea, however – it was first articulated by then-Ontario Information and Privacy Commissioner Ann Cavoukian, and formalized in 1995 by the Information and Privacy Commissioner of Ontario, the Dutch Data Protection Authority and the Netherlands Organization for Applied Scientific Research. Privacy by design was adopted as a framework by the International Assembly of Privacy Commissioners and Data Protection Authorities in 2010.

What is Privacy by Design?

At its core, it’s the notion that privacy values – and with them, the associated legal requirements, limitations and controls, ethical standards, principles and similar authority – should be embedded in systems and processes that collect, use, process and store personal information from the very beginning of the conceptualization and design process. It’s a species of Value-Sensitive Design – the concept that technology should be designed in a way that accounts for human values in a principled and comprehensive manner, so that the ethical values of both direct and indirect stakeholders are built into the design of a system. It rests on seven foundational principles that drive its outcomes:

  • Proactive, not reactive; preventive, not remedial
  • Privacy embedded into design
  • Privacy as the default setting
  • End-to-end security – full lifecycle protection
  • Full functionality – positive-sum, not zero-sum
  • Visibility and transparency – keep it open
  • Respect for user privacy – keep it user-centric

The bottom line is that privacy by design is about preventing issues and concerns rather than remediating them after they happen, and ensuring that whatever representations are made to stakeholders can be and will be fulfilled. Fundamentally, then, it’s a process and technology design mandate.

Privacy by Design is evolving from a recommendation to a mandate

All of this sounds simple enough, and not many people would argue with either the principles themselves or the goals and philosophy behind them. Implementation, however, is another matter. Putting the principles into action and achieving true privacy by design is a complex and challenging undertaking that requires extensive and thoughtful planning. Consider, first, the law. As a threshold matter, Privacy by Design is itself a mandate that is increasingly written into law. The EU General Data Protection Regulation (GDPR), among other laws, expressly incorporates privacy by design into its requirements at Article 25 (“Data protection by design and by default”). Regulatory bodies in a number of countries, including the United States, explicitly recommend it as good practice in their guidance; and as we all know, a recommendation from a regulator who has the power and discretion to sanction you for noncompliance is pretty close to a legal requirement, and one you ignore at your peril. So you can’t just dismiss Privacy by Design – if it doesn’t apply to you now, it surely will at some point down the road.

Then there are the hundreds or thousands of other privacy laws that bear on the concept as well. Privacy by design is, among many other things, about complying with both the letter and the spirit of those laws. In and of itself, that reality poses a major design challenge. Those laws are by no means uniform – they often conflict, they are often vague, they are often outdated. Nor are they static – laws change, and come and go, on a regular basis. And on top of the laws comes an entirely separate set of ethical considerations that must be worked out on a case-by-case basis: “In the absence of a legal requirement, what is an ethical retention period for some bit of PII in country X?” “To whom may we appropriately give access to it?” And so on.

In a major system there are potentially hundreds or thousands of such soft decisions that must be worked out. And looking to laws like the GDPR or the Privacy by Design principles offers no help. They’re very high-level and vague, nearly to the point of being meaningless, unless and until you do some serious thinking and work through their meaning and application to a particular data type in a particular environment for a particular use case.
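
To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical data categories, jurisdictions, retention periods and legal bases) of what one such worked-out decision might look like once it has been reduced to something a system can execute. It illustrates the kind of artifact the analysis has to produce, not anyone’s actual retention schedule.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RetentionRule:
    """One worked-out 'soft decision': how long a data category may be kept in one jurisdiction."""
    data_category: str   # hypothetical label, e.g. "customer_contact"
    jurisdiction: str    # hypothetical code, e.g. "NL"
    retention_days: int  # the period the legal/ethical analysis settled on
    legal_basis: str     # the statute, guidance or internal decision backing the rule

# Hypothetical examples of the hundreds of decisions a real system would need.
RETENTION_RULES = {
    ("customer_contact", "NL"): RetentionRule("customer_contact", "NL", 730, "internal ethics review, 2023"),
    ("customer_contact", "US-CA"): RetentionRule("customer_contact", "US-CA", 1095, "RIM schedule s.4.2"),
}

def retention_period_days(data_category: str, jurisdiction: str) -> int:
    """Look up the decided retention period; fail loudly where no decision has been recorded."""
    try:
        return RETENTION_RULES[(data_category, jurisdiction)].retention_days
    except KeyError:
        raise LookupError(
            f"No retention decision recorded for {data_category!r} in {jurisdiction!r}; "
            "the analysis has to happen before the system can act."
        )
```

The value of writing the decisions down this way is that a missing rule surfaces as an explicit failure rather than a silent default – but every entry still has to be thought through by people first.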

Putting Privacy by Design into action

That, in a nutshell, is the conundrum and mandate of Privacy by Design. In order to design and implement a compliant system, you must first analyze and determine what these abstract mandates actually mean in the specific environment you propose to build. That means not only interpreting the high-level principles and requirements themselves, but also fully understanding how they interact with all the laws that bear on the topic. It also means fully understanding your own internal requirements and processes, including exception management for things like legal holds. That, in turn, means that things like policies and procedures, records retention schedules and many other instruments must be fully developed as well. Remember, you’re building an engine to execute some potentially complex and high-risk rules. If you don’t understand your own rules from the start, you cannot possibly build a system that accurately executes them. The precise challenge is to take the vague, high-level concepts and translate them into concrete rules that a system can actually execute.
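
By way of illustrating that “engine” idea, the fragment below is a minimal, purely hypothetical sketch of exception management for legal holds: a deletion decision that applies a retention rule but lets an active hold override it. The function name, inputs and hold tags are assumptions made for the example, not a description of any particular product.

```python
from datetime import date, timedelta

def may_delete(record_created: date, retention_days: int,
               active_legal_holds: set[str], record_hold_tags: set[str],
               today: date | None = None) -> bool:
    """Apply the retention rule, but let an active legal hold override it (exception management)."""
    today = today or date.today()
    past_retention = today >= record_created + timedelta(days=retention_days)
    on_hold = bool(active_legal_holds & record_hold_tags)  # any matching hold blocks deletion
    return past_retention and not on_hold

# Hypothetical usage: a record past its retention period but caught by a hold must be kept.
assert may_delete(date(2020, 1, 1), 730, {"litigation-2024-007"}, {"litigation-2024-007"},
                  today=date(2025, 1, 1)) is False
```

Trivial as the logic looks, every input to it – the retention period, the hold process, who may place and lift holds – has to come out of the policy and legal analysis before any of it can be coded.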

Quite a challenge, that. Consider one of the Privacy by Design principles: “Full functionality – positive-sum, not zero-sum.”  The explanation of this is “Privacy by design seeks to accommodate all legitimate interests and objectives in a positive-sum ‘win-win’ manner, not through a dated, zero-sum approach, where unnecessary trade-offs are made. Privacy by design avoids the pretense of false dichotomies, such as privacy versus security, demonstrating that it is possible to have both.”

What on earth can that mean in terms of designing a big IT system? What’s the flow chart and set of machine instructions that will make your system compliant with this concept? Take this to your systems engineers and ask them to build it for you. See what they say.

Achieving “positive-sum” solutions

Design it wrong, and some data privacy commissioner will tell you that you got it wrong and maybe fine you, but no one is going to lay out in advance precisely what order of operations should shape the machine you’re building. And the reason no one will tell you is that no one really knows.

The people who write these regulations and standards aren’t in the business of designing big IT systems, so they don’t have to know. That’s all for you to figure out, recognizing that after the fact you can always be second-guessed.

So where do you begin? First, avoid one of the most common pitfalls in the information management business: buying or building a system before you have fully analyzed your requirements. Remember, the system executes rules, in this case some complex, detailed and very high-risk rules. If you don’t know what the rules are, you certainly can’t build a system that executes them.

That means, long before you write a line of code or diagram out a flow chart, you need to carefully analyze:

  • Applicable law
  • Soft ethical considerations
  • Business needs
  • Internal policies such as RIM policies, data security policies and so on
  • Probably a lot of other things

All of this analysis should be wound up into a very detailed functional specification of exactly what the system needs to do. Only then can you begin the design process, always keeping an eye on that specification, and bearing in mind that if you reach the point of needing to make a tradeoff, the tradeoff cannot come at the expense of privacy compliance, even if other functionality is impaired as a result. Privacy by Design expressly precludes that sort of tradeoff.
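
As a rough illustration (not a prescribed format), such a specification might be captured as structured requirements that trace each rule back to its source – applicable law, ethics, business need or internal policy – and flag privacy-compliance requirements as non-negotiable, so that any later tradeoff discussion can only touch the rest. Everything in the sketch below, including the requirement text, is hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Source(Enum):
    LAW = "applicable law"
    ETHICS = "soft ethical consideration"
    BUSINESS = "business need"
    POLICY = "internal policy (RIM, security, ...)"

@dataclass(frozen=True)
class Requirement:
    req_id: str
    statement: str            # what the system must do, stated concretely
    source: Source            # where the requirement comes from (traceability)
    privacy_compliance: bool  # True means it cannot be traded away during design

# Hypothetical entries; a real specification would contain hundreds of these.
SPEC = [
    Requirement("R-001",
                "Delete customer contact data 730 days after account closure, absent a legal hold",
                Source.LAW, privacy_compliance=True),
    Requirement("R-002",
                "Provide full-text search across active customer records",
                Source.BUSINESS, privacy_compliance=False),
]

def tradeoff_candidates(spec: list[Requirement]) -> list[Requirement]:
    """Only non-privacy requirements may even be discussed as tradeoffs; the rest are off the table."""
    return [r for r in spec if not r.privacy_compliance]
```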

So there you have it – a quick but complete recipe for building a system using privacy by design concepts.  All you need to do now is go build it.

For more on Privacy by Design, check out this recent webinar recording where we discuss this subject in more depth: Privacy by Design – A Paradigm for Process and Technology