
Published on September 3rd, 2020


Covid-19 Exposure Logging: Key Privacy Considerations

Recently, both Apple and Google released updates for iPhone and Android devices that added a feature called "Covid-19 Exposure Logging." The feature is off by default (for now), and according to the accompanying description, when turned on, it uses Bluetooth to exchange signals with nearby devices.


[Screenshots: the Covid-19 Exposure Logging setting on iPhone and Android]
My initial response was that we are once again reminded that we truly do not fully “own” our technology, and as long as we want to participate in the always-on community, we are a component of the efficient functioning of the overall product. It is somewhat ingenious and covertly sinister to use Bluetooth as the communication mechanism. Contrary to the advice of some security folks, so many people have fitness trackers, headphones and smartwatches connected at all times that it would be impractical to turn Bluetooth off.
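For what it's worth, the published Apple/Google Exposure Notification cryptography specification describes what those Bluetooth signals actually carry: not identities or locations, but short-lived pseudonymous identifiers derived from a random daily key. The sketch below captures the rotation scheme in stdlib-only Python; the real specification derives keys with HKDF and encrypts the interval number with AES-128, for which truncated HMAC-SHA256 stands in here, so treat it as an illustration rather than a spec-conformant implementation.

```python
import hmac
import hashlib
import os
import time

def en_interval_number(unix_time: float) -> int:
    """Index of the current 10-minute window since the Unix epoch."""
    return int(unix_time // 600)

def daily_tek() -> bytes:
    """Temporary Exposure Key: 16 random bytes, rotated each day."""
    return os.urandom(16)

def rolling_proximity_id(tek: bytes, interval: int) -> bytes:
    """Pseudonymous identifier broadcast over Bluetooth for one window.

    Simplification: the real scheme computes AES-128(HKDF(tek), "EN-RPI"
    || interval); truncated HMAC-SHA256 is used here so the sketch needs
    only the standard library.
    """
    msg = b"EN-RPI" + interval.to_bytes(4, "little")
    return hmac.new(tek, msg, hashlib.sha256).digest()[:16]

# A phone rotates its broadcast identifier every ~10 minutes; without the
# day's TEK, successive identifiers cannot be linked to one another.
tek = daily_tek()
now = en_interval_number(time.time())
heard = {rolling_proximity_id(tek, now)}  # what a nearby phone records

# If the TEK is later published (the user reports a diagnosis), other
# phones recompute that day's identifiers and look for a local match.
day = {rolling_proximity_id(tek, i) for i in range(now - 72, now + 72)}
print("exposure match:", not heard.isdisjoint(day))  # prints: exposure match: True
```

Note that nothing location-related enters the derivation, and matching happens entirely on the device that downloaded the published keys.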

One has to wonder about the timing and proximity mechanisms that would trigger an alert. For example, if you are stuck in a car in a traffic jam or at a long traffic light, will the closeness of another car qualify as an "exposure event"? How about if you are on a slow-moving train and another train slowly passes in the opposite direction? (Welcome to rush hour in the big city!) Social distance rules dictate a six-foot safety gap, yet a typical Class 2 Bluetooth radio (the kind found in most phones) has a range of about thirty-three feet, and higher-power classes exceed that distance. Like all things Covid-related, it seems that we are building the airplane as we fly it.
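The distance question above is usually answered with received signal strength (RSSI) rather than raw radio range: the weaker the signal, the farther away the other phone is presumed to be. A minimal sketch of the standard log-distance path-loss model shows why that inference is shaky; the calibration constants below are illustrative assumptions, not values from any shipping app.

```python
import math  # not strictly needed; exponentiation suffices for the model

def estimate_distance_m(rssi_dbm: float,
                        rssi_at_1m: float = -59.0,    # calibrated per handset
                        path_loss_exp: float = 2.0) -> float:
    """Log-distance path-loss estimate of separation in meters.

    path_loss_exp is ~2.0 in free space and 2.0-4.0 indoors, so the same
    reading maps to very different distances depending on the environment.
    """
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

# The same -75 dBm reading, interpreted under two different environments:
# car bodies, walls, and train windows all attenuate the signal, which is
# why "closer than six feet for fifteen minutes" is hard to judge from
# RSSI alone.
for n in (2.0, 3.0):
    print(f"n={n}: ~{estimate_distance_m(-75.0, path_loss_exp=n):.1f} m")
```

Under these assumed constants, the free-space interpretation of -75 dBm is roughly twice the indoor one, so a traffic-jam neighbor and a train-window passerby can look identical to the radio.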

Here at Tripwire, we love our community of InfoSec experts, and we are always open to other ideas; this one in particular piqued our curiosity. While we understand the need to control the spread of this pandemic, our security instincts were on high alert. I asked some security experts how they felt about the involuntary addition of a capability whose sole purpose is to track a person's proximity to others. Here are their responses:

Tyler Reguly, @treguly

I’m very impressed with the COVID-19 tracing apps that use the built-in functionality provided by Apple and Google. It is the first time I can remember that every security and privacy enthusiast I know on social media has appeared to agree on a new technology. I haven’t seen any negative posts about the feature among people I trust. There are limitations, particularly with regard to older devices, but I understand those limitations. While we tend to push technology as long as we can due to cost, it is true that technology becomes dated and that upgrade options are limited.

In Canada, I feel that more could still be done to raise awareness of the tracing app and its configuration, but word of mouth seems to be relatively effective in this case. The question is: how many people are refusing to turn it on? The app can be very effective, but only if people use it. How do we convince people who don’t believe in COVID-19, masks or the pandemic to install an application and enable a feature when they’re already convinced of multiple anti-technology conspiracy theories?

Much like most of enterprise security, the issue is not the technology but the end user. I feel safer with the app installed. I’m venturing outside again thanks to the belief I have in the power of the app, but how do we convince everyone to do this when the world can’t agree on less divisive topics?

An Anonymous Contributor

Whilst it’s great to hear that Apple and Google collaborated on this project, there are still questions about how effective these apps can be. I don’t have the answers because it’s a hard topic. Sharing movements, tracking interactions and essentially mapping our lives is a scary thought. We’ve seen time and again that when this information is shared voluntarily with the belief that it will be used for one purpose, it is either lost, stolen or used in a way that wasn’t disclosed.

I love the idea that an organization is building a solution to help, but historically, that trust has been broken again and again. You may argue, “Well, this information isn’t going to identify me”; however, we know that data aggregation isn’t some futuristic thing. It is used daily; it’s the reality of our world.

I do appreciate the work Google and Apple have done. I love that they have embedded security standards that developers must follow to gain access to the API. However, it still takes only a small failure here or a forgotten detail there to cause harm.

It is good that you can volunteer to provide information. It is good that you can opt out and delete historic data. It is also good that it’s an opt-in service, i.e., off by default. However, I’m not yet at a point where I trust that this will be used completely anonymously and safely.

I have also not touched on the following: how do we verify that the user submitting an illness report is being truthful? Most reports say this is validated by the mobile number entered and a code provided. However, that’s still a single form of validation resting on one person’s word. What if they’re lying to spread fear or simply think it’s “funny”?
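To make the contributor's point concrete, here is a hypothetical sketch of that single-factor flow: a health authority mints a short one-time code, and redeeming it is the only gate before an upload triggers alerts. All names and parameters are invented for illustration and are not taken from any real verification server.

```python
import hmac
import hashlib
import secrets
import time

SERVER_SECRET = secrets.token_bytes(32)  # hypothetical server-side key
issued = {}  # HMAC digest of code -> expiry timestamp (server-side state)

def issue_code(ttl_s: int = 3600) -> str:
    """Health authority mints an 8-digit one-time code for a reported case."""
    code = f"{secrets.randbelow(10**8):08d}"
    digest = hmac.new(SERVER_SECRET, code.encode(), hashlib.sha256).hexdigest()
    issued[digest] = time.time() + ttl_s  # server never stores the raw code
    return code

def redeem_code(code: str) -> bool:
    """Single-use, expiring redemption: the only check standing between
    a prank report and a wave of exposure alerts."""
    digest = hmac.new(SERVER_SECRET, code.encode(), hashlib.sha256).hexdigest()
    expiry = issued.pop(digest, None)  # pop enforces single use
    return expiry is not None and time.time() < expiry
```

Even with expiry and single-use enforcement, this scheme only verifies that someone was issued a code, not that the underlying diagnosis is genuine, which is exactly the single-factor gap the contributor describes.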

I realize that my mobile phone is tracking all I do. It’s already monitoring me. However, I’m not at this point convinced that the best approach is to specifically enable a feature that will identify all persons I have been near. Maybe after it’s been proven safe and effective, I will come around, but history tells us this can also lead to disaster. It’s concerning. I don’t want to normalize this tracking.

David Bisson, @DMBisson 

It’s unclear to me why Apple came out with this feature when it did. On August 24, 9to5Mac.com found that only six states—Alabama, Arizona, Nevada, North Dakota, Virginia and Wyoming—had committed to using Apple’s Exposure Notification API for COVID-19 tracing. (Pennsylvania and South Carolina indicated that they would eventually participate.) Information for the rest of the states was not available. As reported by PolitiFact, seventeen states indicated that they didn’t intend to use the API, while the rest didn’t respond.

That raises the question: why create this API without commitments from more public health authorities that their states will ultimately use it? In the absence of greater adoption and coordination with public health authorities, this API could needlessly expose users and their devices to potential attacks without providing a meaningful benefit for the majority of Apple’s user base in the United States.

 

What are your thoughts about this new tracing application? We would love to hear your comments over at @TripwireInc.


Editor’s Note: The opinions expressed in this article are solely those of the contributor, and do not necessarily reflect those of Tripwire, Inc.

