Like a growing number of gadgets and appliances used in daily life, diabetes devices have shifted toward the “internet of things,” an environment in which everyday objects are designed to communicate with the internet, as well as with each other.
Although this type of seamless transmission of information provides obvious convenience and effectiveness, it also leaves areas of vulnerability. Because many devices used in diabetes management — including blood glucose monitors, continuous glucose monitors, insulin pumps, artificial pancreas systems and wireless insulin pen delivery systems — utilize wireless internet communication, the diabetes community has begun to discuss ways of protecting these devices and the people who use them. Endocrine Today spoke with David C. Klonoff, MD, FACP, medical director of the Diabetes Research Institute at Mills-Peninsula Health Services in San Mateo, California, and editor-in-chief of the Journal of Diabetes Science and Technology, about this emerging and important issue.
David C. Klonoff
To what extent are diabetes devices vulnerable to a cyberattack?
Klonoff: Right now, we’re seeing a trend in the world — increasingly, every single object we interact with is making some type of measurement. That measurement is performed by a sensor, which is then able to transmit information about what it measured, wirelessly, to a remote computer. We’re seeing this in virtually everything we deal with, and that includes diabetes devices. Each type of device that sends information to the “cloud” is susceptible to a security breach. The information needs to be correct, timely and protected. If it’s not, you have a problem with security and therefore with safety.
Have any cyberattacks on diabetes devices occurred to date, or is this a theoretical issue at this point?
Klonoff: There have been cyberattacks performed by security researchers under controlled conditions on devices worn by mannequins. No attacks on actual patients have been reported, and there are several possible reasons for this. One could be that, fortunately, it has never happened. Another could be that an attack occurred but the patient never realized it, regardless of the consequences. A third possibility is that an attack was recognized and reported to the manufacturer, the FDA and the Department of Homeland Security, but the decision of the regulating bodies was to not make it public. I do know that FDA and DHS have been investigating the risk, but neither has reported publicly whether the risk is theoretical or actual at this point.
What was learned through the testing you mentioned on mannequins?
Klonoff: A security researcher, on several occasions, has bought a used insulin pump and hooked it up to a mannequin. He was able to use a hacking device to override the normal controller and program the pump to suddenly deliver a massive, potentially fatal dose of insulin. No one was harmed, but the experiment showed that if this had been a human instead of a mannequin, that person would have been in a dangerous situation.
How is the possibility of a cyberattack specific to these devices being addressed?
Klonoff: For every endeavor, people in their respective fields are looking at how to make their products secure. In the automobile industry, people are working on secure automobiles that cannot be hacked, and the same for the airline industry. In the diabetes field, we cannot wait for somebody at the top to say, “OK, this is how everything is going to work.” Instead, what I have been working on is an approach from the bottom up, looking at the devices and finding some type of standard that can be used by manufacturers to ensure that the devices have good security and privacy.
I lead a project called DTSec, which stands for Diabetes Technology Cybersecurity Standard, developing a set of principles for features that a connected diabetes device should have. The standard is going to be discussed at a medical device cybersecurity meeting in May in San Jose, California.
What types of security requirements are outlined in the DTSec?
Klonoff: The DTSec contains two types of security requirements: one is a performance requirement and the other is an assurance requirement. Performance requirements say that the device needs to have certain features to indicate that the data will be saved and faithfully transmitted. But it isn’t enough for the manufacturer to claim to have those features: Every time there has been a security breach in any industry, the company said “we’re secure” before it happened.
So, the other necessary part is assurance. We set out procedures to be conducted in a qualified test lab for a company to demonstrate an assurance of security. The company would reveal under nondisclosure to the test lab what the architecture of their security system is, and the test lab would evaluate it. If the product meets the necessary security features, it would have a seal of approval verifying that it has good cybersecurity.
How close is such a standard to widespread implementation?
Klonoff: I expect it will be happening soon. A couple of drivers will move the industry toward this standard. One of them is regulatory. It’s possible the FDA will become stricter. Another issue is that if there is a security breach, then a company might have liability if it cannot demonstrate that it had good security. You can’t protect a system from every kind of damage, but the standard says you have protected the product from reasonably expected security breaches, and so meeting the standard would provide reasonable assurance. Companies want to avoid lawsuits, and we are going to see pressure from insurance companies.
How might a security breach of a diabetes device affect patient care?
Klonoff: In the security field, three main types of problems can happen when you do not have good cybersecurity. The acronym is CIA: confidentiality, integrity and availability.
Confidentiality means your personal information is not transmitted to places it does not belong. Integrity, as it pertains to data, means the data are faithfully transmitted. Availability means you have access to the information at all times. If there’s a blackout for a certain period, and you don’t have the information you are depending on, you may as well not have the device. Those are the three ways a cybersecurity breach can cause problems.
Of course, someone could say, “Well, there haven’t been any publicly reported cases, and why would anyone hack a patient with diabetes? Let’s not worry about it.” I can only say this: Just because something hasn’t happened doesn’t mean it’s not going to happen.
Is there anything patients using these devices can do to protect themselves?
Klonoff: Johnson & Johnson sent a letter to patients in the U.S. and Canada advising that users of the Animas insulin pump can adopt the following measures in lieu of upgrading security at this point:
Set vibrating alerts for the pump: Turn on the insulin pump’s vibration feature, which notifies users that a bolus dose is being started by the meter remote. This gives the user the option to cancel any unwanted bolus, and of course, it is possible to change basic bolus and basal settings from the pump itself.
Watch insulin history: Johnson & Johnson urges Animas users to keep tabs on the insulin history records inside the pump. Every insulin delivery amount, whether triggered by the meter or the pump, is recorded in this history and can be reviewed for any concerns.
Turn off the meter remote feature: This will, of course, stop the radio frequency communication between the OneTouch Ping meter and the insulin pump, meaning users will not be able to see blood sugar results on their pump or use the meter to control bolus dosing. Instead, patients would have to manually key in blood glucose measures on the pump, and bolus from that device.
Limit bolus amounts: Those who want to continue using the meter for remote bolusing can use the pump’s settings to limit the maximum bolus amount, the amount delivered within the first 2 hours, and the total daily dose of insulin. Any attempt to exceed or override those settings will trigger a pump alarm and prevent bolus delivery.
How much of an investment should companies make toward meeting a cybersecurity standard?
Klonoff: There is no such thing as absolute security. Saying we want to eliminate security risk is like saying we want to eliminate crime or pollution. There are things we can do to decrease it, but we cannot absolutely prevent it. Using the example of crime, you can put a lock on your door, you can put a fence around your house, you can put a wall around the fence, but if you’re up against an extremely determined opponent, it could be that almost no matter what you do, you will fail. It is a matter of what is reasonable. There is a certain point where it’s not practical; you can’t afford to insure yourself against everything. You want to make a small investment that will protect you from most problems, and then you wait to see if that is enough. There needs to be a balance between cost and security, but in my opinion, right now, we need to be doing more to ensure security. – by Jennifer Byrne
For more information:
David C. Klonoff, MD, FACP, can be reached at 100 S. San Mateo Drive, Room 5147, San Mateo, CA 94401; email: firstname.lastname@example.org.
Disclosure: Klonoff reports serving as a consultant for Insulet, LifeCare and Voluntis.