
How Secure is your Intellectual Property?

"I attended a dinner & debate with journalist & author Geoff White recently" ... meaning ... I joined a Zoom and listened, watched & debated (by text chat) whilst I ate my microwave dinner in front of my PC. Summer 2020!
Geoff is a respected investigative journalist(*) and author. He's just published his latest book: Crime Dot Com: From Viruses to Vote Rigging, How Hacking Went Global.
Matching the content of the book, Geoff spoke of the continuing rise of hacking (from 1970s students to modern-day political and cyber-war threats). It's a topic that touches all of our lives, from threats of identity theft and financial fraud, through to industrial espionage and country-on-country activities. The debate was lively (as it usually is at this monthly forum) and ranged from political surveillance to organised crime.
Spurred by the debate, I began thinking about my d-wise clients and their susceptibility to cyber-crime. Our d-wise assessments don't specifically cover cyber-crime: generally we cover infrastructure, tools, solutions, data management, information security, business processes and submission-readiness. I began to reflect on my experience in the finance industry and compare it with what I've observed in life sciences.
In finance, I was used to the idea that my manager would be informed if I sent an email attachment to an external address that was not on a whitelist. I knew that use of the USB socket on my laptop would also be likely to create an alert for my manager (as I discovered when my wife innocently plugged her Fitbit into my company laptop to charge it up). And my ability to upload material to cloud storage services such as Google Drive and Dropbox was simply blocked. My team managed a fraud intelligence service whose sole means of access was via Citrix (with Print Screen and external copy/paste disabled), and which had only one means of getting data in or out. That mechanism was fully monitored, and getting data out required coordinated actions by a minimum of two people.
Randomised clinical data may not carry concerns over personal identifiers or tipping-off of fraudsters, but it does contain your intellectual property and is market sensitive. Your information security team is probably taking good care of a range of threats through good practices with firewalls, penetration testing, and two-factor authentication, but who is making sure your valuable data does not escape and get into the wrong hands?
You may feel that some of the financial industry techniques I described above impinge on civil liberties, but they are successful in preventing unintended and unwanted leakage of data.
Who gets alerted at your company if a study’s unblinded data leaves the company network and is at risk of being shared with a competitor or a market investor?
Geoff described a social engineering technique of spoofing corporate emails, and including senior people in the message. It's not hard to make an email look like all other internal emails, and to get a copy of an organisation chart. Imagine your reaction upon receiving this message:
From: <CEO>
To: <your boss>
CC: <you>
Title: URGENT: Intermediate results required for ad-hoc board meeting this afternoon
<your boss>, pls upload latest outputs for <study> to my Google Drive so that I can use it this afternoon in ad-hoc board meeting. Sorry for short notice. This is critical. I know you and <you> won’t let me down.
My Google Drive:
It's easy to think you wouldn't be hoodwinked by this kind of message; but it is also very easy to be hoodwinked by these kinds of messages.
One of the mitigations that we're keen on at d-wise is the adoption of task-based access. In essence, individuals do not have full access to a study's data; they have access only to the data required for their current task.
Moreover, rather than being granted access to specific tables and files, perhaps the ideal solution is to provide the individual with all of the artefacts they need in a virtualised environment (such as a container). Fill this "briefcase" with an environment that has the language(s) they need, the pre-existing programs they need, and the input data they need. When they're done, check in the changed code and data files, and throw away the container. If you control who can access the content of the container (briefcase), you don't need any additional security on the files within it.
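As a minimal sketch of the idea (in Python, with entirely hypothetical names and file paths; this is not d-wise's actual tooling), the "briefcase" reduces to computing the intersection of the study's files with the artefacts named on the current task, and mounting only that subset into the container:

```python
# Hypothetical sketch of task-based access: a user's "briefcase" contains
# only the artefacts named on their current task, never the whole study.
# All identifiers (Task, build_briefcase, the file paths) are illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)
class Task:
    task_id: str
    assignee: str
    artefacts: frozenset  # the only paths this task is permitted to see


# The full study contents (illustrative paths only).
STUDY_FILES = {
    "study01/raw/dm.xpt",
    "study01/raw/ae.xpt",
    "study01/derived/adsl.xpt",
    "study01/tlf/t_ae_summary.rtf",
}


def build_briefcase(task: Task) -> set:
    """Return the subset of study files to mount into the task's container.

    Anything not named on the task is simply absent from the briefcase,
    so it cannot be copied out, emailed, or uploaded from inside it.
    """
    return {path for path in STUDY_FILES if path in task.artefacts}


# A statistical programmer's task needs two files, not the whole study.
task = Task(
    task_id="T-042",
    assignee="stat.programmer",
    artefacts=frozenset({"study01/raw/ae.xpt", "study01/derived/adsl.xpt"}),
)
briefcase = build_briefcase(task)
```

The design point is that access control lives at the briefcase boundary: the container either holds a file or it doesn't, so no per-file permissions are needed inside it, and discarding the container after check-in leaves nothing behind.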
The task-based container approach is a solution that prevents wholesale egress of study data without impinging on civil liberties. What do you think? Have you suffered from data leakage? Do you adopt explicit techniques to prevent unwarranted data egress? Do your partners and vendors adopt adequate standards?
...My microwave dinner? Sausage and mash, with onion gravy. It was lovely. Washed down with a glass of chilled Frascati.
(*) “respected journalist”: is that a contradiction in terms?
