Towards the end of 2014, all we seemed to hear about in the technology media was the latest nugget of information to come out of the Sony hack. The stories mostly concerned celebrities, highlighting who happened to be the biggest divas in Hollywood, along with some embarrassing email chains involving comments from senior executives.
All of which was interesting, or scandalous, depending on your perspective, but mostly information we could be flippant about. What we could be less flippant about was the thousands of employee social security numbers that were laid bare. These were innocent employees whose private data was now out in the open.
The question we began to ask ourselves was: who will be next to fall victim to a hacking scandal? And given that arguably the worst part of the Sony hack was the leak of private employee data, what could the consequences be if the next big hack hit a healthcare organisation holding even more sensitive personal information? Unfortunately, we didn’t have to wait too long to find out.
Accessing confidential patient data
In February, news broke of a security breach at US health insurance company Anthem, in which stolen user credentials are thought to have been used to gain network access and steal sensitive data.
Stolen data within the healthcare industry can not only damage an organisation’s reputation and result in potential lawsuits; there are obvious implications for the rights of patients and customers, not to mention significant repercussions in terms of government regulations. In the US, for example, the HIPAA Security Rule sets the national standard for safeguarding electronic protected health information. And as more and more information is digitised, the scope for internally sourced security breaches is only going to grow.
In both the Sony hack and the Anthem breach, the culprit was not a clever tech-whizz hacker breaking through defences from the outside; both incidents were initiated by unauthorised users acquiring and misusing employees’ credentials to secured systems.
Moral duty to protect patient information
With so many serious breaches happening, are IT departments within these organisations doing something about it? Given any healthcare organisation’s moral duty to protect patient data, you would expect healthcare to be better than most at tackling insider threat, but unfortunately that does not seem to be the case. The fact is that the majority of security breaches come from internal sources, and the healthcare industry fares worse than most, with double the number of internal security breaches compared with the average across other industries (according to IS Decisions’ research of 500 IT decision makers).
Not only is the data within the healthcare industry highly confidential, the stakes are arguably higher than in any other sector due to the sheer volume of data many organisations possess, and the nature of that information. The consequences of a hack at a healthcare organisation will undoubtedly involve innocent victims and nasty lawsuits, just as we are already seeing from the Anthem breach. With millions of patient records stored, such a breach could cripple a healthcare organisation.
Given the strain currently being put on the healthcare sector, particularly within the NHS, the financial implications of this risk cannot be ignored. You would expect finance directors and board members at any healthcare organisation to be sitting up straight and paying attention.
Mitigating the risk of insider threat
So, what is the best way for such an organisation to mitigate the risk of an insider threat? The first line of defence is to make sure all login rights are controlled and monitored according to the business requirements and role of the user. The phrase ‘never trust, always verify’ is becoming more and more popular, and although it sounds like a harsh attitude to take, research shows that IT managers are beginning to see it as their best option, given that employees are often their greatest security threat.
Through controlled login rights an organisation is better protected, yet the user is in no way limited. The employee retains the flexibility to work as normal, but with no more access than necessary available to them.
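The ‘no more access than necessary’ principle amounts to a deny-by-default check at logon or access time. As a rough illustration only (the roles, permission names and mapping below are invented for this sketch, and real products such as UserLock enforce this at the logon level rather than in application code):

```python
# A minimal sketch of deny-by-default, role-based access control.
# The roles and permission names are hypothetical, for illustration only.
ROLE_PERMISSIONS = {
    "nurse": {"read_patient_chart", "update_vitals"},
    "billing_clerk": {"read_billing", "update_billing"},
    "it_admin": {"manage_accounts"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: only actions explicitly granted to the role pass."""
    return action in ROLE_PERMISSIONS.get(role, set())

# A nurse can chart vitals as normal, but cannot touch billing records.
assert is_allowed("nurse", "update_vitals")
assert not is_allowed("nurse", "read_billing")
```

The point of the deny-by-default design is that an unknown role, or an action nobody thought to grant, is refused automatically rather than slipping through.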
What’s more, this is not just about protection from malicious employees; careless user behaviour is a common source of security breaches. Users are human beings: they are flawed, they make mistakes, and there will always be instances of users acting outside the boundaries of policy (and sometimes common sense). That’s why stronger enforcement of policy is so vital.
It’s also important to consider employee education as another layer of defence within an organisation. By explaining to employees why their behaviour is so important in reducing the risk of security breaches, they are far more likely to think twice before sharing their password or falling victim to social engineering.
Naturally, no single security policy is perfect, so the more layers you can add, the smaller your vulnerable attack surface is, and the better your chances of catching a breach before any real damage is done. A good balance between user education and technology is likely to have the best results, preferably with a technology set that strengthens logon security to prevent unauthorised access to networks while deploying user alerts triggered by suspicious behaviour. Tools such as UserLock and FileAudit enable greater control on the administration side, outright restricting some careless user behaviour, as well as helping to educate and disseminate good practice through alerts and notifications.
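The kind of behaviour-triggered alert described above can be pictured as a simple rule over the context of a logon. The user names, workstations and working hours below are invented purely for illustration; commercial tools apply far richer context than this sketch:

```python
from datetime import datetime

# Hypothetical logon-alert rule: flag a logon that happens outside
# usual working hours, or from a workstation the user does not normally use.
# User names and workstation IDs are invented for this example.
USUAL_WORKSTATIONS = {"jsmith": {"WARD3-PC1", "WARD3-PC2"}}
WORK_START, WORK_END = 7, 19  # assumed 07:00-19:00 working window

def is_suspicious(user: str, workstation: str, when: datetime) -> bool:
    out_of_hours = when.hour < WORK_START or when.hour >= WORK_END
    unknown_machine = workstation not in USUAL_WORKSTATIONS.get(user, set())
    return out_of_hours or unknown_machine

# A mid-morning logon from the user's usual ward PC raises no alert;
# the same credentials used on an unfamiliar machine would be flagged.
assert not is_suspicious("jsmith", "WARD3-PC1", datetime(2015, 4, 1, 10, 0))
assert is_suspicious("jsmith", "ADMIN-PC9", datetime(2015, 4, 1, 10, 0))
```

Even a crude rule like this illustrates the value of the alerting layer: misused credentials tend to surface as logons at odd times or from odd places, which is exactly the signal an administrator wants to see before real damage is done.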
Healthcare organisations have a duty to safeguard patient and client personal information and it’s important to understand that the insider threat will never disappear, so the strongest strategy possible must be deployed to mitigate that risk.
A version of this article originally appeared in Hospital Management April 2015: A bi-monthly publication for both private and NHS hospitals throughout the UK.