Karen Habercoss, chief privacy officer with the University of Chicago Medicine and Biological Sciences Division, talks to Decipher about the unique data privacy and security challenges that the healthcare sector faces, how privacy and security teams are working together to defend against issues like ransomware and data leakage, and how organizations are approaching areas like AI and identity management.
Lindsey O'Donnell-Welch: How did you get into this role? Did you come more from the medical side or the data privacy side?
Karen Habercoss: I actually started on the clinical side of the house here at the University of Chicago. I was a clinical social worker in the Department of Psychiatry for quite some time, more than 10 years. Then I, along with a few of my colleagues here at the University of Chicago, left to go work for a healthcare startup company. When you work for a startup, you wear lots of hats, so I was originally hired to do some clinical work, but ended up taking on compliance as one of a number of departments under my purview. That's how I made the switch from clinical social work practice to regulatory compliance in general. And then I really began to specialize in the privacy area of regulatory compliance - there are lots of different pieces to compliance - so I have worked in healthcare my entire career in some form or another.
Lindsey O'Donnell-Welch: What are the different compliance frameworks that healthcare organizations need to navigate from a privacy perspective?
Karen Habercoss: From a privacy perspective, in the beginning everyone was obviously very focused on HIPAA [the Health Insurance Portability and Accountability Act], and that was the federal regulation that we all rallied around. That being said, privacy officers and leaders today have lots of other privacy regulations aside from HIPAA, especially in healthcare. A lot of it depends on the types of data that you have, and the purpose for which you're using it. So, for example, many privacy officers who work in healthcare today certainly work with patients in clinical care, and that is HIPAA.
But then you have these expanded roles, where health systems have insurance - group health plans that they're running either on the employee side or for patients - or you have value-based care, those kinds of things. In some cases, that is also HIPAA, but in other cases you're thinking about the employees, so then you're starting to think about state laws, and the data you hold that isn't even related to the employees' health plans - when people apply for jobs with you, take jobs with you, or no longer work for you. That really falls more to state laws.
I work in academic medicine specifically, so we do human subjects research. In that case, HIPAA might apply, or state laws might apply, or even international laws might apply, because if our researchers are collaborating with researchers in the EU, then that research data might fall under GDPR. We also have international patients, students and clinical providers, and in those cases we need to look at which countries those people are coming from and what privacy laws those countries have, in terms of the data we're collecting.
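As a purely illustrative sketch of that branching - which regime applies depends on the data, the purpose and where the data subject is - the rules and field names below are hypothetical and simplified, not legal logic from any real system:

```python
# Hypothetical sketch: mapping a data context to the privacy regimes that may
# apply, along the lines Habercoss describes. Real applicability analysis is a
# legal determination; this only illustrates the branching logic.
from dataclasses import dataclass

@dataclass
class DataContext:
    purpose: str            # e.g. "clinical_care", "research", "employment"
    subject_location: str   # e.g. "US-IL", "EU", "US-CA"
    is_phi: bool = False    # protected health information under HIPAA

def applicable_regimes(ctx: DataContext) -> list[str]:
    regimes = []
    if ctx.is_phi and ctx.purpose in ("clinical_care", "research", "health_plan"):
        regimes.append("HIPAA")
    if ctx.subject_location.startswith("EU"):
        regimes.append("GDPR")          # e.g. EU research collaborations
    if ctx.purpose == "employment":
        regimes.append("state_law")     # applicant/employee data falls to state laws
    return regimes

print(applicable_regimes(DataContext("research", "EU", is_phi=True)))
# ['HIPAA', 'GDPR']
```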
"Everybody wants data, from all places, so it's really important for the privacy leader to be able to understand things like: What is the data your organization holds, and what are the data flows, and what are the purposes?"
Lindsey O'Donnell-Welch: What areas across the organization do you need to collaborate with to better understand all these different streams of data?
Karen Habercoss: Privacy leaders today have to really instantiate themselves into the business and the operations. Some of the departments you might work with are fairly obvious: things like health information management and medical records, and the information security office - clearly you'll want to partner with those people very closely. IT, too, because everything is electronic and technology-based now. But you really need to also engage with human resources, or your ambulatory operations, or, very recently, digital marketing and communications - those have become very important, because as part of digital health, the marketing departments really want to get that information out to patients.
Everybody wants data, from all places, so it's really important for the privacy leader to be able to understand things like: What is the data your organization holds, and what are the data flows, and what are the purposes? Because that's really the main part of privacy - to understand what the data is, how it's being used and accessed, and what the purposes are.
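To make that inventory idea concrete, here is a minimal, hypothetical sketch of a data-flow record - the fields and example rows are illustrative assumptions, not an actual register:

```python
# Hypothetical sketch of a data-flow inventory entry - the "what data, what
# flows, what purposes" record described above. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class DataFlow:
    data_category: str   # e.g. "patient medical records"
    source_system: str   # where the data originates
    destination: str     # internal system or third party receiving it
    purpose: str         # why the data flows there
    lawful_basis: str    # e.g. "HIPAA treatment/payment/operations"

inventory = [
    DataFlow("patient medical records", "EHR", "digital marketing platform",
             "patient outreach", "HIPAA marketing rules / authorization"),
    DataFlow("applicant data", "HR portal", "background-check vendor",
             "pre-employment screening", "state privacy law"),
]

# A privacy review can then ask of each row: who receives the data,
# and is the purpose consistent with the basis for collecting it?
for flow in inventory:
    print(f"{flow.data_category}: {flow.source_system} -> {flow.destination} ({flow.purpose})")
```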
Lindsey O'Donnell-Welch: What are the unique challenges in this specific industry when it comes to data privacy and cybersecurity?
Karen Habercoss: These are things that are being talked about every day. The obvious one is artificial intelligence. Every healthcare leader and privacy leader must have an understanding of AI in terms of the vendors that you want to use, because almost all applications and tools being purchased have some sort of AI component, so you want to understand that. You want to understand who your vendors and third parties are, so that you can understand what kind of data they're getting as part of the relationship and what kind of data they want to use.
The other thing is this concept of data leakage, which is really the key to all of privacy. How is the data flowing, and where could there be potential for it to be leaked outside of your organization? Is there a potential breach or incident that needs to be reviewed and buttoned up at that point?

We talked about state privacy laws, and there are some situations where HIPAA may not apply and state privacy laws might. Right now, outside of HIPAA, we don't really have a comprehensive federal data privacy law, and HIPAA is older now. When HIPAA came about, there was not really any contemplation of all these emerging technologies that we have today, and we haven't been able to put out a federal law other than HIPAA, so what we're left with are these multiple state laws and international laws. Those are really going to continue to expand, and that poses problems for privacy officers, because you might have patients coming from different states or countries, and you need to really understand all of the different laws that can apply.
"Never has there been a time when privacy and security really have to be more completely in alignment with each other."
Lindsey O'Donnell-Welch: How do privacy and security teams work together within healthcare entities?
Karen Habercoss: Privacy used to be a compliance function, and to an extent, it still is. That being said, never has there been a time when privacy and security really have to be more completely in alignment with each other. You really can't have one without the other, and they can't be operating in silos. So this concept of a coordinated partnership between the two has to exist. I'm part of the Health Sector Coordinating Council Cybersecurity Working Group, and we have developed a resource about coordinated privacy and security partnerships: what that looks like, what the themes are where the challenges exist, and how you find best practices. The goal is to coordinate privacy and security functions in a way that keeps patients' data confidential and secure, complies with the regulations and ensures patients have an understanding of their rights, while at the same time, from a cybersecurity standpoint, your patients are staying safe, because cybersecurity is really a patient safety issue as well. That can be addressed when privacy and cybersecurity are functioning together. From the cyber side, it's not really a secret that ransomware and those types of things are increasing in frequency and severity. They need to be addressed as best as possible, and they are addressed well when privacy and security work together.
Lindsey O'Donnell-Welch: How do you see threats like ransomware attacks - such as the recent attack on UnitedHealth Group's Change Healthcare - from a privacy perspective?
Karen Habercoss: I think in my mind it really goes back to the relationship that the privacy and security officers have. Security deals with the technicalities: how do you technically keep patients safe when they're on connected devices, and how do you keep email safe, from a technical and training standpoint? There are a lot of ways that privacy and security can work together, and I can give you a few examples. If the security team is tasked with protecting email systems - phishing is a good example, because that's one way ransomware can occur - you might say, "OK, the security team has 2FA on their systems," but the privacy team can really step in and reinforce those things. When we are going out to the business and doing training, when we're in meetings, we can reinforce those kinds of things and say, "Here are the kinds of things you can look for in a phishing email." We can combine policies and procedures between privacy and security. For things like identity and access management, the privacy and security teams can work together to understand what these roles need access to in terms of data, and how to make sure the roles make sense: from a privacy standpoint, meaning people are getting access to only the data they need to do their job, and on the technical security side, that the access is given only at that level.
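As a purely illustrative sketch of that least-privilege idea - the role names, permissions and mapping below are hypothetical, not drawn from any real health system - the privacy side defines what each role legitimately needs, and the technical side grants exactly that:

```python
# Minimal sketch of least privilege: each role maps to only the data access
# its job function requires, and the access check grants nothing more.
ROLE_PERMISSIONS = {
    "scheduler":     {"read:demographics", "read:appointments"},
    "nurse":         {"read:demographics", "read:clinical_notes", "write:vitals"},
    "billing_clerk": {"read:demographics", "read:billing"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Grant access only if the role's job function requires it."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("scheduler", "read:appointments")
assert not is_allowed("scheduler", "read:clinical_notes")  # not needed for the job
```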
"AI and other emerging technologies have great benefits, so I am definitely on the side of thinking these are going to be good for healthcare. That doesn't mean it's not without risk."
Lindsey O'Donnell-Welch: How is AI being implemented in the healthcare sector and how do you approach this from a privacy and security standpoint?
Karen Habercoss: AI and other emerging technologies have great benefits, so I am definitely on the side of thinking these are going to be good for healthcare. That doesn't mean they're without risk. The first step, I would say, is that there needs to be strong governance around this, and a strong understanding by both privacy and security of the use cases. I think it falls into a few buckets; there are three main things I think about. The first is internally developed AI models - areas where the health system itself is developing its own AI models. Those require reviews of validity, and an understanding of whether there's integrity in how the model functions, things like that. Because those are developed internally, they're usually being used within our own systems - things that we've already been able to review from a security or privacy perspective, where we think those systems are secure, and if there was a privacy impact, we've reviewed what that looks like.
But then you have these other types of AI tools that health systems purchase from vendors. Those really need to follow a very strong governance process: How does the business understand the policies around the types of tools that are chosen? Do they align with the mission, vision and values of the organization? Do they align with some larger digital strategy that you're trying to pursue? If the answer is yes, then it should go through a very strong vendor contracting process. Legal needs to be involved in reviewing it from a contracts perspective, and privacy needs to review how the tool functions and how the vendor keeps the data private and secure so there's no data leakage. There may need to be a privacy impact assessment, looking at what the risks are and how you're going to mitigate them on the security side. All the things that would happen with any purchased tool, application or service need to happen here.
The most risky, in my mind, are those that are publicly available. From a privacy perspective, those make me most anxious, because we don't want regulated data, like personally identifiable information, to make its way to publicly available AI tools. Those wouldn't be secure. We want our staff to use the approved, sanctioned and contracted tools that we have, but we don't necessarily want them to use publicly available tools when sensitive or regulated data is involved. Helping our staff understand the difference and training them on that is key.
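One hedged illustration of a technical backstop for that training message - the sanctioned-tool list and the patterns below are invented for the example, not a real DLP ruleset - might check outbound text before it reaches an unapproved AI endpoint:

```python
# Hypothetical sketch: before text leaves for an AI endpoint, verify the
# destination is on the sanctioned list, and block anything that looks like
# regulated identifiers from going to public, uncontracted tools.
import re

SANCTIONED_AI_TOOLS = {"contracted-ai.internal.example.org"}  # illustrative host

PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-shaped string
    re.compile(r"\bMRN[:#]?\s*\d{6,}\b", re.I),  # medical record number
]

def egress_allowed(destination_host: str, text: str) -> bool:
    if destination_host not in SANCTIONED_AI_TOOLS:
        # Public, uncontracted tool: block anything resembling regulated data.
        return not any(p.search(text) for p in PHI_PATTERNS)
    return True  # contracted tools are covered by vendor review and contracts

print(egress_allowed("chat.public-ai.example.com", "Patient MRN: 00123456"))  # False
```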
Lindsey O'Donnell-Welch: How do you view identity and access management in a healthcare or medical environment, given the breadth of users across the organization, including researchers, patients, and employees? How do you begin to approach an environment like that?
Karen Habercoss: We need to clearly understand who the users are. Who are your internal users? Who are your employees, what departments do they work in, and what roles do they hold? In healthcare, you need to account for patients and patient portals and how they get access to things, and for external researchers or vendors you're working with, how they have access to systems. You need to look at what the access to the data is and what systems each one is accessing. What types of endpoints are they using? They might be different: patients are coming off the web or from their devices, the same is true for vendors, and employees are hopefully using sanctioned devices. We want to understand what is being used. That's where identity and access management comes in. We have to understand the work environments and the environment each person is using, and then, again, in my mind it goes back to very strong governance. I'm a big proponent of having written, auditable and documented processes and procedures for each of these things that you can always point back to. And if someone needs an exception, there's an exception process for that, and the correct level of eyes is on it if it's going to be granted.
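As a minimal, hypothetical sketch of that documented exception process - the field names and approval roles are illustrative assumptions, not an actual workflow - each out-of-role grant is recorded with its justification, approver and expiration so it can always be pointed back to:

```python
# Hypothetical sketch of an auditable access-exception record: every grant
# outside the standard role model is documented with who approved it and why.
from dataclasses import dataclass
from datetime import date

@dataclass
class AccessException:
    user: str
    requested_access: str
    justification: str
    approver: str          # the "correct level of eyes" on the grant
    expires: date          # exceptions should be time-bound and re-reviewed

audit_log: list[AccessException] = []

def grant_exception(req: AccessException) -> None:
    # A real process would enforce the approval chain; here we just record
    # the decision so it is written, documented and auditable.
    audit_log.append(req)

grant_exception(AccessException(
    user="external_researcher_01",
    requested_access="read:deidentified_cohort",
    justification="IRB-approved study collaboration",
    approver="privacy_officer",
    expires=date(2026, 6, 30),
))
```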