There is no doubt that contact tracing apps can play a key role in crisis management, especially as social distancing measures are lifted in countries across Europe and the rest of the world. In this guest blog, Dr Iain Keaney talks about solving the contact tracing privacy paradox with decentralised AI. He outlines how decentralised AI can preserve anonymity and solve privacy issues, not just in contact tracing, but as a business standard for AI going forward.
Years of mistrust
Mistrust in technology has steadily grown over the past decade, but Edelman research shows that trust can be restored through transparency. The prevailing fear is that the very technology proposed to control the spread of a pandemic and free us from lockdown could also be used for surveillance, which is anything but liberating.
Trust in technology has taken a nosedive over the last 10 years, particularly around the processing of data. The 2013 Snowden revelations showed how governments were violating the privacy of their citizens, information which went on to shape the EU General Data Protection Regulation and triggered a new era of data privacy and transparency.
However, 2016 brought the Cambridge Analytica scandal to light, first around the US presidential campaign and later the UK referendum, showing how data can be used not just to target individuals but for global manipulation, targeting society as a whole.
Roll on to 2020 and we are faced with a situation where data and contact tracing can help us overcome a huge problem: tracking the spread of disease in order to slow it, allowing countries to emerge from lockdown and restart their economies. However, trust in this technology is at a low point and uptake in countries across the globe has been poor.
The privacy paradox
The problem with surveillance is that it challenges freedom, so when we talk about contact tracing apps, it’s natural that there are concerns about privacy and about later abuse or repurposing of the technology. Such concerns lead to low adoption, with countries such as Norway deciding to suspend its Covid-tracing app over privacy issues. For such technology to work effectively and have an impact on the pandemic, as it has done in China where the tech was automatically incorporated into popular apps, adoption rates need to reach around 20%. Few countries have seen such levels of voluntary adoption, so how can we restore enough trust to encourage higher adoption rates?
There has been progress, with the EU announcing that the Commission is setting up an interoperability gateway service linking national apps across the EU. It will see test runs between the backend servers of the official apps from Ireland, Italy, Czech Republic, Denmark, Germany and Latvia. The gateway server, developed and set up by T-Systems and SAP, will be operated from the Commission's data centre in Luxembourg and is expected to launch in October.
The service follows the agreement by Member States on technical specifications to deliver a European solution to ensure a safe exchange of information between the backends of national contact tracing and warning apps based on a decentralised architecture, something that I will now explore in greater detail.
The Solution: Decentralised AI
It becomes clear how big an issue this is when two of the tech world’s biggest competitors decide to come together to solve it. Google and Apple began collaborating to produce an API based on some key principles:
- Decentralised data storage
- No mass data collection
- No location tracking
These were the same factors a German study found to boost the likelihood of adoption, alongside voluntary use of an app. Transparency is key to giving people confidence. This decentralised approach works by storing randomised key codes exchanged between users of the app who come within proximity of each other. If a user reports COVID-19 symptoms, their key codes are flagged on the server and any matches are alerted.
This system works well because no other information about the users is stored, and codes are regenerated every 10-15 minutes, so you can’t easily get an overview of someone’s complete interactions or see exactly where they’ve been. This future-proofs it against misuse.
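To make the mechanism concrete, here is a minimal sketch of the rotating-key exchange described above. All names (`Device`, `keys_flagged_on_server`, `check_exposure`) and the exact rotation interval are illustrative assumptions, not the Apple/Google implementation; in the real API the keys are derived cryptographically and matching happens on the device.

```python
import secrets
import time

KEY_ROTATION_SECONDS = 15 * 60  # assumption: new random key roughly every 15 minutes


class Device:
    """Hypothetical phone running the app: broadcasts rotating random keys
    and remembers the keys of devices it has been near."""

    def __init__(self):
        self.current_key = secrets.token_hex(16)
        self.key_created_at = time.time()
        self.my_keys = [self.current_key]  # keys this device has broadcast
        self.seen_keys = set()             # keys received from nearby devices

    def rotate_key_if_needed(self):
        # Regenerating the key every 10-15 minutes prevents long-term tracking.
        if time.time() - self.key_created_at >= KEY_ROTATION_SECONDS:
            self.current_key = secrets.token_hex(16)
            self.key_created_at = time.time()
            self.my_keys.append(self.current_key)

    def on_proximity(self, other_key: str):
        # Only the random key is stored -- no identity, no location.
        self.seen_keys.add(other_key)


def keys_flagged_on_server(infected_devices):
    """When users report symptoms, only their broadcast keys are published."""
    flagged = set()
    for device in infected_devices:
        flagged.update(device.my_keys)
    return flagged


def check_exposure(device, flagged_keys):
    """Each device checks locally whether any key it saw was flagged."""
    return bool(device.seen_keys & flagged_keys)
```

Note that the server never learns who met whom: it only ever sees the random keys of people who chose to report symptoms, and the matching itself happens on each user's own device.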
Collaborative machine learning
Taking all this information, can the same principles be applied in a wider context for AI in business? The answer is yes: the concept is known as Federated Learning, a collaborative method based on decentralised data. This method allows for greater data privacy while simultaneously allowing for greater personalisation and improved overall learning. Predictive text is an example of this: notice how you must retrain it whenever you get a new phone?
It works by downloading the current model, learning from local data (i.e. your phone inputs) and sending an encrypted summary to the cloud, where it is averaged with other users' updates and used to improve the shared model. This means personal data never leaves your device, but the whole system still learns from your inputs.
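The round-trip above can be sketched in a few lines. This is a minimal FedAvg-style toy using a linear model and plain NumPy; the function names are illustrative, and a real system would encrypt the updates and use secure aggregation so the server never sees any individual summary.

```python
import numpy as np

rng = np.random.default_rng(0)


def local_update(global_weights, local_data, local_labels, lr=0.1):
    """On-device step: start from the shared model, learn from local data,
    and return only a summary (the weight delta) -- never the raw data."""
    w = global_weights.copy()
    for x, y in zip(local_data, local_labels):
        pred = x @ w
        w -= lr * (pred - y) * x   # one SGD step on a linear model
    return w - global_weights      # only this delta leaves the device


def federated_round(global_weights, devices):
    """Server step: average the summaries from many devices and
    use the average to update the shared model."""
    deltas = [local_update(global_weights, data, labels)
              for data, labels in devices]
    return global_weights + np.mean(deltas, axis=0)
```

Run enough rounds and the shared model fits everyone's data, even though each device's inputs stayed on that device; the server only ever handles averaged weight deltas.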
Decentralised AI is basically data science without the collection of data. It’s a key step in creating trustworthy AI since it emphasises data security and privacy, and can be applied in a variety of ways, from contact tracing apps to medical diagnosis models and much more.
Bottom line, if we want to maintain trust in AI and build ethical solutions, we must use methods that decentralise data and have privacy built-in by default.