Essentially, Data Is Good. It’s the Use Cases That Can Be Problematic

Michael DePalma shares during Avenga’s webinar

In this modern age, we all have to adapt to a changing landscape in order to function in a data-driven society. At Avenga, we are committed to giving everyone the opportunity to share unique perspectives through a series of webinars designed to respond to global digital developments in pharma, pharmatech, biopharma, healthtech, and life sciences.

Today, we are highlighting data as property and data ownership, the topic we addressed during our webinar on Reframing Data and Privacy: The new ethics, optics and relationships. Our special guest, Michael DePalma, is a renowned expert in the technology, healthcare, and life sciences space and a devoted advocate for the use of meaningful technologies in human health. He shared a number of points I'd like to pass on to you.

Data protection vs data privacy vs data ethics

Let’s differentiate between data protection, data privacy and data ethics. Data protection is how we treat the data we already possess. It does not address the data we do not yet have, nor does it address how we acquire data, what data we acquire, the consent status of that data, or how we use that data.

Data privacy, broadly speaking, covers the practices we use to protect the person or persons to whom the data refers. We'll come back to this later; it's a very broad topic, and part of this discussion is what data privacy even means today.

Finally, data ethics is a broad framework that applies to how we make decisions related to data. Privacy and security are components of data ethics, but in addition we need to understand the accuracy (veracity) of the data, its quality, sample sizes, how it was collected, and what data, if any, may be missing. So we're establishing a set of principles and standards for data science and analytics.

Data ethics in the “Privacy by Design” approach

Data ethics is one of those things we all just assume exists. We assume these things are common sense. But the thing about common sense is that it isn't always so common, particularly when it comes to data. For decades, our default setting in technology has been to collect everything and figure out how to use it later. Think about digital health or digital therapeutics, or even ordinary systems, websites, and apps: the default is to collect all the information about users and store it.

So, assuming that you have all that data, data ethics is a framework within which we can make decisions about it. There are always assumptions about data: that we have enough of it, that it's accurate, that it's correct, and so on. One question Michael is especially interested in is whether we should have all that data in the first place, which is a huge issue once you start to unpack it.

A lot of organizations have adopted a Privacy by Design approach, a framework for making decisions BEFORE there is potentially trouble. As stewards of the data, we all have literal fiduciary responsibilities regarding it; we're on the hook for its protection and use. Data is a very valuable thing. It is also a dangerous thing in the wrong hands, and an incredibly powerful, positive thing in the right hands.

Essentially, data is good. It's the use cases around data that can be problematic. It's not a data protection issue; it's a business model issue. Too often we act as if we have carte blanche to do whatever we want with the data. So what would you do differently if you worked within a specific framework?

Ann Cavoukian is one of the people Michael looks to for these privacy approaches. Ann is the former Information and Privacy Commissioner of Ontario, Canada. She developed a seven-step Privacy by Design framework that he finds really helpful. Again, it seems like common sense, but you'd be surprised. And Michael usually adds an eighth point.

  1. Be proactive, not reactive. Don't remediate; anticipate. The idea is to prevent problems before they arise. Every risk manager in every pharma company knows this and is always thinking about it. From a compliance perspective, how does it all filter through to our IT infrastructure, architecture, and approach?
  2. Privacy is the default: all personal info is to be protected, and no action should be required of the individual. "Do nothing" is the default. This is where opt-in/opt-out becomes a huge part of a user's experience, and it's something people are much more aware of now.
  3. Embed privacy into the fundamental design. In other words, build it into all business processes as a foundation; don't bolt it on. We don't want to incur privacy debt the way we incur tech debt. Privacy shouldn't be an afterthought; it should be a foundational component built into the DNA of the organization.
  4. Full functionality. Security shouldn't mean a loss of privacy, which is an issue we all struggle with. We feel, or are told, that we have to give up things (like data) to be secure. Privacy versus security is a false dichotomy.
  5. End-to-end, full-lifecycle protection. Most privacy issues are avoidable. Whatever the weakest link is, in whatever chain, that's where the problem will occur.
  6. Visibility and transparency are absolutely critical. We talk a lot in the pharma and healthcare industries about data sharing and data democratization, when in reality democracy requires transparency and accountability to operate. Who has access? Why? What is the data used for? These are the things we should understand upfront.
  7. Respect for the user: user centricity. Any info, if linkable, can become PII or PHI. In the past couple of years, Michael has spent a lot of time with legislators across the country on this exact point. There is a huge misunderstanding, even among the legislators who create the laws, about how data works. There are all sorts of wrong ideas: that HIPAA is a privacy rule (it's not), or that once data is "de-identified" under the legal definition it is safe and secure. In reality, combining disparate data sources is where we get the most value from data, and also the greatest peril, because that is where re-identification becomes possible, if not probable (see the sketch after this list). We have to consider the individual, their level of personal control, and their freedom of choice; that's where legislation like the GDPR and CCPA comes in.
  8. Michael's own addition expands on that last item: the individual is the most important point. All data begins with a person, with a human, with a consumer. Aggregated data comes from individual people.
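
To make the re-identification point in item 7 concrete, here is a minimal sketch (the datasets, column names, and records are all hypothetical) of how two separately "de-identified" sources can be joined on quasi-identifiers such as ZIP code, birth year, and sex:

```python
import pandas as pd

# A "de-identified" clinical dataset: names stripped, quasi-identifiers kept.
clinical = pd.DataFrame({
    "zip": ["10001", "10001", "94105"],
    "birth_year": [1961, 1975, 1988],
    "sex": ["F", "M", "F"],
    "diagnosis": ["hypertension", "diabetes", "asthma"],
})

# A separate, openly available dataset (say, a voter roll) that has names.
public = pd.DataFrame({
    "name": ["A. Smith", "B. Jones", "C. Lee"],
    "zip": ["10001", "10001", "94105"],
    "birth_year": [1961, 1975, 1988],
    "sex": ["F", "M", "F"],
})

# Joining on the shared quasi-identifiers re-attaches names to diagnoses.
reidentified = clinical.merge(public, on=["zip", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```

Neither dataset links a name to a diagnosis on its own; the risk appears only when the sources are combined, which is exactly the peril item 7 describes.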

The practical differences in ethics for data and algorithms

Yes, there are differences, but they're inextricably related. Let's begin with data and ask ourselves a few questions. What data do we have? How was it sourced? From whom? To whom does it refer? What rights does the data provider have to give, share, or sell you this data? What is the consent status of this data? What is the time frame of that consent? In other words, what practices are in place BEFORE we even have the data in our possession?
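
Those questions translate naturally into an intake checklist a team could enforce in code before a dataset is ever accepted. Here is a minimal sketch; the field names and the "opt-in only" policy are illustrative assumptions, not a statement of what any particular law requires:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DatasetIntake:
    """Provenance record answering the questions above (hypothetical fields)."""
    source: str                # From whom was the data acquired?
    subjects: str              # To whom does the data refer?
    provider_has_rights: bool  # May the provider legally give/share/sell it?
    consent_status: str        # e.g. "opt-in", "opt-out", "unknown"
    consent_expires: date      # The time frame of the consent

def may_accept(d: DatasetIntake, today: date) -> bool:
    # Refuse data whose provenance or consent can't be established up front.
    return (
        d.provider_has_rights
        and d.consent_status == "opt-in"
        and d.consent_expires >= today
    )
```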

Algorithms simply help us categorize and make decisions about this data, which is where things like bias show up. Bias can be human-based or data-based. For example: are we making the right decisions about the right data? The right decisions about the wrong data? The wrong decisions about the right data? All of those things can occur.

On a side note, Michael shared a story about a dinner party where people said things like, "AI should be built to human ethical or moral standards." To which he replied, "Which human? Which ethical standards?" It's his way of saying that if you're not having those conversations, you're at the wrong dinner parties.

Practices generally follow these questions as well, which is why data ethics is so much larger than people give it credit for. Often that's because we choose to apply it to only one area; for example, "I'll apply ethics to my AI . . . but I have all this massive data that I probably shouldn't have in the first place."

The billion-dollar question: Who OWNS the data?

The answer is . . . there is no good answer. Ask Michael legally, and he'll tell you it depends on where you are geographically. Ask him practically, and he'll tell you that most laws, at least in the US, say that whoever owns the media the data is stored on owns the data.

Unless, of course, you bought it from a data aggregator, in which case you have a license to use it, based on a predetermined set of parameters you agree not to breach, for a predetermined period of time, after which you either "give it back" (which is honestly meaningless) or verify you've destroyed it. Those are vestigial words and ideas in the digital world.

Michael shared this example: "If data is on paper, and there is one copy, and I burn that paper, that data is destroyed and cannot be reused. If, however, there are infinite copies of that paper, then destroying one has zero impact on the usability of that data. Worse, what if I don't even know who has those papers containing the data? Worse still, maybe I never agreed to anyone having those papers, let alone to these other parties buying and selling them to each other without my knowledge or consent."

In terms of value, for years we've heard the "data is the new oil" trope. At its best, it orients the listener to the idea that data is valuable and market-making, the way oil is (or was).

But the analogy falls apart right after that point. Michael continued with his example: "With oil, we have a clear sense of provenance: where it came from, how it was acquired, from whom, and at what price. We know that if I have a barrel of oil, there aren't infinite copies of it. It's a physical object in the real world. If I burn it, no one else can use it. Oil, by economists' definition, is a rivalrous good. Data, however, is non-rivalrous. There can be infinite copies of it (and often are), and the use of one copy doesn't preclude someone else from using theirs." That is part of what makes data so compelling and valuable.

If you look at the GDPR, it's legally clear that the individual owns their data. They are empowered to say who can have what, and for what purposes, and can elect to say, "Nope, you can't have it."

In the US, we're on the other side. We regulate data through specific laws that depend on the type of data and the type of use. Medical data falls under HIPAA, employment data under ERISA; for driving data, financial data, credit data, or anything else you pick, there is a separate law or set of laws that governs it. So we have a patchwork of regulations with holes so big you can drive a truck through them, and it's these holes that the data industry uses to support its business models.

Michael personally believes we need a simple test to determine who owns the data: "If the data in question is a byproduct of a good or service I paid for, in whole or in part, AND that data can be bought, sold, or shared with a party that I don't have a direct relationship with, contractually or otherwise, it's MY DATA."

Here is a hypothetical example: if I spend $600 on a pair of sneakers that tracks my activity, steps, heart rate, respiratory rate, body temperature, geolocation, and so on, who owns that data? Should I have a subscription to my own data? A practical application: if I visit my physician for a physical and receive an EKG and an X-ray, and maybe fill a prescription for a hypertension medication, where does that data reside? Who has access, and for what purpose? If the purpose is not my treatment or care, what happens to it? It's a huge question we could talk about for hours.
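
Michael's test is simple enough to state as a predicate. Here is a sketch of it applied to the sneaker example, with hypothetical attribute names:

```python
from dataclasses import dataclass

@dataclass
class DataAsset:
    paid_for_by_subject: bool    # Byproduct of a good/service the person paid for?
    shared_with_strangers: bool  # Can it be bought/sold/shared with parties the
                                 # person has no direct relationship with?

def owned_by_individual(asset: DataAsset) -> bool:
    # Michael's test: both conditions together make it the individual's data.
    return asset.paid_for_by_subject and asset.shared_with_strangers

# The $600 sneakers: the wearer paid for the product, and the telemetry can be
# resold to data brokers the wearer has no relationship with.
sneaker_telemetry = DataAsset(paid_for_by_subject=True, shared_with_strangers=True)
print(owned_by_individual(sneaker_telemetry))  # True: it's the wearer's data
```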

A data ecosystem is about transparency

Let's look at some use cases. Michael expanded on the example above: "I visited a doctor for a physical. In the US, healthcare providers are considered 'covered entities' under HIPAA. That means they have carte blanche access to ALL my data for the purposes of my treatment and care, and for their own healthcare operations, things like quality control. That all makes perfect sense."

Now, assuming we agree on that, Michael asked us to look at what happens next. Doctors, nurses, PAs, NPs, technicians, and hospital administrators are all very good at their jobs. But they're not good at other things, like building massive EHR systems. That falls to the tech side. So we have a massive industry of companies that build systems which collect, and in many cases utilize, the data gathered by healthcare providers. Good so far.

The thing is, those companies don't have the same rights to USE the data that the providers do. After all, your doctor is caring for you; your EHR system is not. So they're restricted in how they can use the data.

But here's the rub: by stripping some info and packaging the data up, these parties can sell it into a larger data value chain, where it is combined with other data from other sources. It may be aggregated, or it may be used to make decisions about us.

In the case of research, there are clearly defined corridors in most US laws: as long as the data is de-identified (something Michael mentioned earlier) or its use has been approved by an IRB, it's OK for research. Michael, like others, actually wants to see a system that accelerates the use of data rather than restricting it; the difference is that it should be known who is using it and for what purpose.
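
Read as a rule, that corridor is a simple gate: research use is permitted if the data is de-identified under the legal definition, or if an IRB has approved the use. A hedged sketch (the function and its flags are illustrative, not a statement of the law):

```python
def research_use_allowed(is_deidentified: bool, irb_approved: bool,
                         requester: str, purpose: str) -> bool:
    """US-style corridor: legal de-identification or IRB approval suffices.
    Recording who is asking and why reflects the 'known who and for what
    purpose' standard Michael prefers."""
    allowed = is_deidentified or irb_approved
    print(f"Request by {requester} for {purpose}: "
          f"{'granted' if allowed else 'denied'}")
    return allowed
```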

So what would a transparent data ecosystem look like? One where individuals were empowered to willingly and knowingly participate in research and in data collection and use, but without the laborious machinations currently in play. What if researchers had a real relationship with the data subject or consumer? What if value were exchanged directly at that level?

Michael shared that he believes people have a right to know who has their data, what they have, and what is being done with it, and that a transparent relationship between data steward and data subject should always be the base case.
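
One way to picture that base case is an append-only disclosure ledger the data subject can read: every release records who received the data, what they received, and why. A minimal sketch with hypothetical types:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Disclosure:
    recipient: str   # Who has the data
    contents: str    # What they have
    purpose: str     # What is being done with it
    when: datetime

@dataclass
class SubjectLedger:
    subject_id: str
    disclosures: list[Disclosure] = field(default_factory=list)

    def record(self, recipient: str, contents: str, purpose: str) -> None:
        # Log every disclosure so the data subject can audit it later.
        self.disclosures.append(
            Disclosure(recipient, contents, purpose, datetime.now())
        )
```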

Data has value; there is no refuting that. Yet the only people NOT participating in the data economy are the data subjects themselves, the individuals, the consumers. So Michael asked: "What does the world look like if you change that? Clearly, I think we need to create a value chain that includes the individuals who are the basis of that value."

What is data worth? 

The answer, and some people hate it when Michael says this, is that it depends on the data in question, its use, and a host of other factors. He took us through a thought experiment: "I'm a musician, so I collect guitars. Let's say a certain guitar is worth $500 new. Ostensibly, it's $500 no matter where I get it, worth less than that used, and less still in bad shape. Now let's say that guitar was played by Jimi Hendrix. Suddenly it's not $500, it's $1M. Same guitar. Or is it? Let's say it was played by Jimi, but somebody swapped out the electronics and repainted it 20 years ago. Still worth $1M? Probably less. So value is dynamic."

There are innumerable types of data. Personal and health data are highly valuable. Financial data is highly valuable. You can argue that your buying habits are valuable; they certainly are to marketers. But not all data is equal. Much of it is, frankly, noise.

Michael actually worked with Avalon Health Economics to look into this question. What is a PHR worth to an individual? To a community? To industry? It's a fascinating area, and frankly, one of the answers it settled on was that the value also depends on where in the supply chain you assess it, and for whom.

What sorts of questions should pharma be asking about data?

Great question, Michael said. He would begin by looking at the data assets your company already possesses. What do you have? Where is it from? What does it contain? How was it acquired? What is the consent status of this data? What use cases is it suited for and authorized for? Once we understand what we have and our existing risks, we can move forward to establishing a data ethics framework that we can apply to legacy data as well as to future data collection and use.
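
Michael's questions map directly onto a per-asset inventory record, and auditing legacy data becomes a pass over those records. A sketch with made-up assets and field names:

```python
# Hypothetical inventory of existing data assets, one record per asset.
inventory = [
    {"name": "claims_2019", "source": "aggregator X",
     "contents": "medical claims", "consent_status": "unknown",
     "authorized_uses": ["internal analytics"]},
    {"name": "trial_442", "source": "our own study",
     "contents": "trial outcomes", "consent_status": "opt-in",
     "authorized_uses": ["research"]},
]

# A first audit step: surface legacy assets whose consent status is unclear,
# since those carry the existing risk Michael describes.
at_risk = [a["name"] for a in inventory if a["consent_status"] == "unknown"]
print(at_risk)  # ['claims_2019']
```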

Michael wanted to add something here about trust, a word we throw around a lot. "Pharma is NOT a trusted industry, in the US at least." In fact, it's vilified. The amount of misinformation and mistrust out there about what pharma does is staggering, which always leads to some cognitive dissonance: we, the pharma and healthcare industry, are literally working day and night to try to make human lives better, and then we hear all sorts of baseless nonsense. You've all heard your share, no doubt.

The point is, Michael says, "we need to help ourselves here." Michael Pierson, the Chair of Social Entrepreneurship at Fordham University, said: "If people don't know what you value, they impute, cynically, that it's only financial."

Kristen Martin, the Chair of Strategic Management and Public Policy at George Washington University, said: "When people don't trust you, you start to lack legitimacy until you don't have a leg to stand on to opine about what others should do…"

If you look at things like COVID-19 and the anti-vaxxer movement, you'll see a lot of this is real. Michael suggested pharma do itself a favor and get ahead of this issue.

Facebook survived its Cambridge Analytica scandal, but Michael wasn't so sure it would go as well if something similar happened in pharma or healthcare.
