Privacy Impact Assessments

Common pitfalls of PIAs (and how to avoid them)

Jared Maslin

9 min read

They go by many names: Privacy Impact Assessments (PIAs), Data Protection Impact Assessments (DPIAs), Data Privacy Assessments (DPAs). While each has its own nuances (e.g., regulatory source, regional terminology, depth and breadth of assessment), impact assessments of any kind tend to suffer from similar pitfalls in planning, execution, and follow-up. Whether your organization is new to the impact assessment game as you venture into privacy and AI, or you’re a seasoned pro seeking to refine your existing processes, we will explore some of the most common mistakes we see in impact assessments performed in the wild today and share recommendations for how to avoid them.

What are impact assessments and why are they important?

Before we dive in, let’s establish what we mean when we say “impact assessment”. In the privacy space, impact assessments refer to review procedures undertaken to consider any potential impact of new or existing data processing practices on the rights of data subjects and the protection of their personal data. Under the General Data Protection Regulation (GDPR), an impact assessment is required when a processing activity is likely to result in a “high risk” to the rights and freedoms of data subjects. Common examples include tracking individuals’ behavior or location, processing sensitive information (e.g., race, ethnicity, religious affiliation, political opinions), or applying new technologies and use cases to personal information. Like the EU, several U.S. states (California, Colorado, Connecticut, Indiana, Oregon, Tennessee, Texas, and Virginia) also require impact assessments when specific thresholds or criteria are met. Whether you’re a domestic business or one with a global footprint, impact assessments are quickly becoming ubiquitous in privacy programs.

Ultimately, the intention of these assessments is to proactively identify and mitigate (or document acceptance of) risks associated with personal data processing and supporting technologies. Prior to regulation forcing the issue, it was incredibly common for businesses to hope that nothing would go wrong, only to invest in assessments after something ultimately did. For some, impact assessments were done as a root cause analysis after a personal data breach; for others, assessments were done when trying to understand why a platform had been collecting and processing information of a specific type (e.g., when brought to their attention by a data subject or third party).

Regardless of the cause, this retrospective impact assessment process was complex, costly, labor-intensive, and darn near impossible. In many cases, it was a futile attempt to get into the minds of a tool’s or data practice’s original designers and builders, who may no longer even be with the business. What risks did they consider? What risks were accepted? Did they integrate any controls in an attempt to prevent risks from coming to fruition? All critical questions, and all incredibly difficult to answer in retrospect. By embedding a proactive impact assessment framework in your organization, however, these questions can be addressed (and maintained over time), providing a critical input to your long-term incident response, reporting processes, and regulatory obligations.

What can go wrong?

When they work well, impact assessments allow organizations to bring cross-functional perspectives to designs and deployment plans that involve processing personal information. They document critical decisions and can be conducted for a variety of data subject types: consumers, patients, employees, job applicants, and business partners. Impact assessments are also becoming an effective tool for managing AI-related risk; all it takes is a pivot in contextual perspective and regulatory applicability. However, several pitfalls we see in organizations performing impact assessments today can reduce their overall value and increase the frustration associated with them. Let’s consider a few!

1. Misaligned scoping and excluded stakeholders

Impact assessments can be performed at almost any level: Enterprise, platform, product/service, department, or even specific functionality being added to a single product, service, or process. The level of granularity you choose is likely to be a function of your organization’s size, maturity in data processing, and maturity in enterprise risk management. This decision is highly impactful to the approach of the impact assessment and to how it should be planned and executed.

One common issue we see is that only a few specific teams or departments are brought into the discussion early, leaving no one to identify potential impacts to processes upstream or downstream of the immediate target functionality. For example, if an organization seeks to assess the potential impact of offering personalized advertising to minors, there may be a tendency to focus on the acts of collecting and using that information, but not on the impacts of that decision on the user experience, the front-end design and language presented to users for notice or consent, and downstream executive reporting and compliance obligations. This isn’t necessarily negligence; it often stems from an attempt to meet deadlines and to limit the burden on stakeholders across the organization. The outcome, though, is the same: an incomplete view of the risk environment and, as a result, an incomplete consideration of the risks inherent in the new, proposed practice.

Our solution: Rather than assessing a new practice in isolation, first consider the data lifecycle of the information involved, from the moment you seek to collect it through the moment it no longer exists on your platform. Then add the context of the user journey and overall user experience to consider what a typical consumer or data subject might expect from the new data practice. Would a user expect to see a new pop-up, be redirected to an updated policy, or be given some transparency and justification for the new practice? By identifying these often-tangential, contextual considerations alongside processes that feel purely data-related, you can more effectively identify the supporting stakeholders in your organization who are critical to conducting a complete and comprehensive impact assessment.
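To make that lifecycle-first scoping concrete, here is a minimal sketch in Python of mapping a proposed practice to lifecycle stages, the user-facing touchpoints each stage creates, and the teams needed to assess them, so the invite list falls out of the mapping rather than out of habit. The stage, touchpoint, and team names are hypothetical, not a prescribed taxonomy.

```python
# A sketch of lifecycle-first scoping: map the proposed practice to lifecycle
# stages, the user-facing touchpoints each stage creates, and the teams needed
# to assess it. Stage, touchpoint, and team names below are hypothetical.
from dataclasses import dataclass


@dataclass
class LifecycleStage:
    name: str                    # e.g., collection, use, sharing, deletion
    user_touchpoints: list[str]  # what the data subject sees or experiences
    stakeholders: list[str]      # teams needed to assess this stage


PRACTICE = "Personalized advertising to minors"

LIFECYCLE = [
    LifecycleStage("collection", ["age gate", "consent banner"], ["UX", "Legal", "Engineering"]),
    LifecycleStage("use", ["ad personalization settings"], ["Ad Ops", "Data Science", "Privacy"]),
    LifecycleStage("sharing", ["updated privacy notice"], ["Legal", "Vendor Management"]),
    LifecycleStage("retention and deletion", ["account deletion flow"], ["Engineering", "IT", "Privacy"]),
]


def stakeholders_to_invite(stages: list[LifecycleStage]) -> set[str]:
    """Union of stakeholders across the full lifecycle, not just the target feature."""
    return {team for stage in stages for team in stage.stakeholders}


if __name__ == "__main__":
    print(f"Assessment scope: {PRACTICE}")
    print("Invite:", ", ".join(sorted(stakeholders_to_invite(LIFECYCLE))))
```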

2. Thinking short-term

If there’s one certainty in the data privacy space, it’s that change is inevitable. What was once required for a small subset of your data subjects quickly snowballs into something far more pervasive. For years, we heard things like “we aren’t operating in the EU, so we don’t need to worry about that” and “we can do something special for California and then something else for everyone else”. Suffice it to say, those approaches haven’t exactly aged well. New regulations and new requirements arrive practically like clockwork, so designing only for today can be a costly approach in the long term. That brings us to a common mistake in impact assessments: treating them as a point-in-time review rather than a forward-looking exercise.

Introducing a new data practice or technology involving personal information can feel like a specific, focused endeavor, especially to the designers and engineers tasked with implementing the change. It’s an understandable perspective, too, given that teams are handed specific requirements and acceptance criteria for achieving an outcome or releasing a new capability. However, failing to see how that new practice fits into the bigger picture of your organization and its user base can lead to omitting key risk considerations. It can also lead to assessing the new practice solely through the lens of existing regulation and existing user expectations. When this happens, we see assessments that focus on the here and now rather than on how the new practice could position the organization to address changes that may arise in the coming days, weeks, and months. Much like those claims that “we’ll just do one thing for California and something different for everyone else”, redesigning your platform and business processes for each new law becomes a costly, unwieldy, and complex approach that may have seemed appropriate at the time but quickly gets out of its depth.

Our solution: When constructing your impact assessments, bake in consideration of existing dependencies, the evolving regulatory landscape, and the shifting expectations of users from different backgrounds and locales. Designing risk management solutions for flexibility may add time and effort upfront, but it can save an immense amount of long-term time, expense, and frustration while still meeting today’s needs.

3. Confusion over terminology

Data privacy can often feel like an ephemeral space without any clear or consistent definition from one matter to the next. Looking at regulations, foundational concepts like “personal information”, “personally identifiable information”, and “sensitive information” can have different meanings and, as a result, impact organizations differently over time. While we can’t necessarily force regulators to align terminology with one another, we can drive consistency in how your organization defines terms and how they are applied in privacy risk management. When this isn’t done, we see impact assessments completed against different sets of requirements, with assumptions made along the way about what various terms mean.

For example, one tactic used today to filter out high-risk practices that might require an impact assessment is requiring teams to complete a short, internal questionnaire describing what is being proposed and highlighting any generic risks that call for a closer look. While useful at scale, the language used in these questionnaires can make or break the value and utility of the approach. Consider the question, “will your solution involve PII about minors?” In the absence of any additional detail, the question calls on the individual filling it out (who is likely not a privacy expert) to apply the regulatory definitions of “PII” and of who is or is not a “minor”. If the respondent misunderstands either term, or interprets it differently, the responses garnered from the question can lead to practices being misclassified, resulting in impact assessments either not being performed when they should be or being performed when they aren’t necessary.

Our solution: The simple answer is to avoid industry-specific terminology or language whose meaning shifts with scope or geographic footprint. However, this isn’t always feasible when the topic itself is so nuanced and contextual. So, where specific terminology or phrasing is necessary, always provide definitions and examples to increase the likelihood that responses match the intent of the team using them as assessment criteria.
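As an illustration of that last point, here is a minimal sketch in Python of a screening question that carries its own definitions and examples, so respondents don’t have to guess what “PII” or “minor” means. The field names, definitions, and example entries are illustrative only, not legal guidance or a prescribed questionnaire format.

```python
# A sketch of a screening question that carries its own definitions and examples,
# so respondents apply consistent terminology. Field names, definitions, and
# example entries are illustrative, not legal guidance.
from dataclasses import dataclass


@dataclass
class ScreeningQuestion:
    prompt: str
    definitions: dict[str, str]
    examples: list[str]
    triggers_full_assessment_if: str  # the answer that routes to a full assessment


MINORS_QUESTION = ScreeningQuestion(
    prompt="Will your solution collect or use personal information (PII) about minors?",
    definitions={
        "PII": "Any information that identifies or could reasonably be linked to an "
               "individual (e.g., name, email address, device ID, precise location).",
        "minor": "For this questionnaire, anyone under 18; note that stricter rules can "
                 "apply to younger age bands under laws such as COPPA and the GDPR.",
    },
    examples=[
        "A sign-up flow that accepts birthdates and allows users under 18",
        "Analytics events tied to accounts flagged as student or child profiles",
    ],
    triggers_full_assessment_if="yes",
)


def render(question: ScreeningQuestion) -> str:
    """Render the prompt with its definitions and examples inline."""
    lines = [question.prompt, "", "Definitions:"]
    lines += [f"  {term}: {meaning}" for term, meaning in question.definitions.items()]
    lines += ["", "Examples that count:"]
    lines += [f"  - {example}" for example in question.examples]
    return "\n".join(lines)


if __name__ == "__main__":
    print(render(MINORS_QUESTION))
```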

4. Insufficient communication

Let’s be real for a moment: Data privacy and regulatory compliance aren’t everyone’s bread and butter. While we practitioners may love and live in the space, some stakeholders prefer to hear as little as possible about the topic. As a result, we often see processes where impact assessments and their results are only communicated (and made readily available) to a small subset of the larger group of potentially impacted stakeholders. While often done out of consideration for people’s time and focus, this becomes a critical error when it prevents an impacted party (even a marginally impacted one) from learning about planned changes and the resulting obligations upstream or downstream.

For example, consider an organization that plans to implement automated recording and transcription of all virtual meetings. The team does well to include leaders from various operating locations (to consider the need for curated notice or consent practices based on change in jurisdiction or cultural expectations of employees) and departmental heads (to consider impact to day-to-day operations). However, with focus on the employee base, they may not think to include the marketing and community engagement teams, which host webinars and virtual events that are open to the public or include external business partners on the same meeting platform. If those stakeholders are not made aware of impact assessment results, they may not be aware of new obligations that extend not only to employees, but also to customers, business partners, and other non-employee attendees of virtual events where recording is to take place by default.

Our solution: Consider how and where you will store impact assessment results so they highlight risk-related decision making and the stakeholder actions required to support those decisions. It doesn’t have to be a firehose of updates and information, but the more available and open you make your assessment outcomes to stakeholders, the more likely you are to cover all of your proverbial bases and to identify gaps in the process as well. Furthermore, for any impact assessment, prepare a quick elevator pitch summarizing the risks identified and the resulting impacts to the business. While some stakeholders are interested in the details, others simply don’t have the time and want bite-sized summaries that quickly and easily inform them of potential impacts and outcomes. Just as in product design, consider the needs and perspectives of your audience (whether internal or external) when deploying impact assessment processes in your organization.
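One lightweight way to do this is a central register where each assessment entry pairs its risks and decisions with the actions each stakeholder owns and a short summary that doubles as the elevator pitch. The sketch below is illustrative only; the field names, teams, dates, and example content are assumptions, not a prescribed format.

```python
# A sketch of a central register entry: risks, decisions, the actions each
# stakeholder owns, and a short summary that doubles as the elevator pitch.
# Field names, teams, dates, and the example content are all illustrative.
from dataclasses import dataclass
from datetime import date


@dataclass
class StakeholderAction:
    team: str
    action: str
    due: date
    status: str = "open"


@dataclass
class AssessmentRecord:
    practice: str
    risks_identified: list[str]
    decisions: list[str]
    actions: list[StakeholderAction]
    summary: str  # two or three sentences, readable without opening the full report


RECORD = AssessmentRecord(
    practice="Automated recording and transcription of virtual meetings",
    risks_identified=[
        "Recording external attendees (customers, partners) without notice",
        "Transcripts retained beyond business need",
    ],
    decisions=["Default recording limited to internal meetings pending a consent workflow"],
    actions=[
        StakeholderAction("Marketing", "Add a recording notice to public webinar invites", date(2025, 3, 1)),
        StakeholderAction("IT", "Apply a 90-day retention limit to transcripts", date(2025, 2, 15)),
    ],
    summary=("Meeting recording is approved for internal use only until notice and consent "
             "flows cover external attendees; transcript retention is capped at 90 days."),
)


if __name__ == "__main__":
    print(f"{RECORD.practice}: {RECORD.summary}")
    for item in RECORD.actions:
        print(f"  [{item.status}] {item.team}: {item.action} (due {item.due})")
```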

5. Lack of monitoring and follow-up

It’s often said that the most important part of a punch is the follow-through. We’d argue the same is true for impact assessments. As standalone exercises, impact assessments can satisfy the letter of the law in many ways, but that approach often leads to a situation in which countless risks are identified and very few are actually mitigated.

For example, return to the earlier example of an organization assessing the introduction of personal information about minors into its platform. Even if we assume a faultless intake, review, and reporting process for assessing the privacy impacts of that practice, that only gets us so far. Until the controls and risk mitigation steps are implemented, tested, and documented by those impacted, the assessment offers little material value to the business beyond a compliance checklist. Furthermore, if the organization fails to revisit the assessment and its associated decisions and outcomes on a recurring basis, changes in context, user expectation, and regulation that could materially alter those outcomes may be missed over time.

The results of an assessment may change if conducted again in a year or when a new law arises, so it’s critical that assessments be considered living and breathing documents that require attention and reconsideration over time.

Our solution: When designing your workflow for disseminating impact assessment results, include periodic testing of the controls implemented as a result of assessment outcomes to ensure that the risks identified are truly being mitigated downstream as expected. Also, define a recurring process to revisit assessment conclusions over time to ensure continued awareness and accuracy of assessment outcomes.
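For teams that want to automate the nudge, here is a minimal sketch of such a recurring check: it flags assessments whose conclusions have not been revisited, or whose controls have not been re-tested, within a chosen cadence. The cadences, field names, and example entry are assumptions you would tune to your own program.

```python
# A sketch of a recurring follow-up check: flag assessments whose conclusions have
# not been revisited, or whose controls have not been re-tested, within a chosen
# cadence. The cadences, field names, and example entry are assumptions to tune.
from dataclasses import dataclass
from datetime import date, timedelta

REVIEW_CADENCE = timedelta(days=365)        # revisit conclusions at least annually
CONTROL_TEST_CADENCE = timedelta(days=180)  # re-test mitigating controls semi-annually


@dataclass
class TrackedAssessment:
    practice: str
    last_reviewed: date
    controls_last_tested: date


def needs_attention(assessment: TrackedAssessment, today: date) -> list[str]:
    """Return the follow-up tasks that are overdue for this assessment."""
    overdue = []
    if today - assessment.last_reviewed > REVIEW_CADENCE:
        overdue.append("revisit assessment conclusions")
    if today - assessment.controls_last_tested > CONTROL_TEST_CADENCE:
        overdue.append("re-test mitigating controls")
    return overdue


if __name__ == "__main__":
    register = [
        TrackedAssessment("Personalized advertising to minors", date(2024, 1, 10), date(2024, 6, 1)),
    ]
    for assessment in register:
        for task in needs_attention(assessment, date.today()):
            print(f"[FOLLOW-UP DUE] {assessment.practice}: {task}")
```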

Taken together: while impact assessments can feel heavy and labor-intensive, they don’t have to be. Thoughtful design and active management of impact assessment procedures at the organizational level can lead to a tenable workflow that not only addresses your organization’s compliance obligations but also provides material, sustainable risk mitigation for the long term.

Need help in designing impact assessment workflows, or interested in benchmarking your current practices against your peers? We can help! Contact Cadence Privacy Consulting today!