Blood donor database leak
As a blood donor to the Australian Red Cross Blood Service (ARCBS) myself, I find this one particularly saddening to report. Yet it happened, and it was Australia's largest data breach to date.
The ARCBS became aware on 26 October that their outsourced Web server partner had allowed data to be exposed through negligence and lack of thought.
Specifically, the Web server had anonymous directory browsing enabled and, worse, someone had saved a backup of the MySQL database into the public website folder itself.
How do you avoid this? Well, to start, you don't enable directory browsing on your website unless you really, actually, definitely, truly want that. Why would you?
Then, don't save backups to your public_html folder. Why would you do this, except perhaps as a convenience for downloading the backup? Even that is a poor approach, and at the very least the file ought to be deleted once downloaded. In this case, the developers simply left it sitting in the public-facing folder.
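A periodic audit from the hosting side could catch this class of mistake before a stranger does. The sketch below is a minimal, hypothetical check: it walks a public web root and flags anything that looks like a database dump or archive. The extension list is an assumption for illustration, not an exhaustive rule set.

```python
import os

# Extensions that commonly indicate dumps or archives (illustrative only).
RISKY_EXTENSIONS = {".sql", ".bak", ".dump", ".tar", ".gz", ".zip"}

def find_exposed_backups(web_root):
    """Walk the public web root and return paths that look like backups."""
    flagged = []
    for dirpath, _dirnames, filenames in os.walk(web_root):
        for name in filenames:
            _, ext = os.path.splitext(name)
            if ext.lower() in RISKY_EXTENSIONS:
                flagged.append(os.path.join(dirpath, name))
    return flagged
```

Run on a schedule (or as a pre-deploy hook), anything this returns should either be moved out of the document root or deleted outright.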
Hats off to ARCBS who acted swiftly and arranged access to IDCARE for affected persons, but a solid smack to the developers who did it.
Recruitment database leak
What's worse than having a public-facing website with a database backup stored within it and directory browsing turned on? Let me tell you ... it's doing that, and then continuing to do it even after someone else suffers a widely publicised data breach from the very same thing.
One month after the ARCBS data breach, the very same thing happened to Michael Page. The circumstances were identical – a person trawling the Web for public websites with directory browsing enabled discovered Michael Page had such a site, and exactly as with ARCBS, the developers had chosen to store their database backups in this public folder.
Yet it's worse in this situation, because news of the ARCBS data breach had already been published, including exactly how it came about. Doesn't Capgemini's team read the news?
Australian census fiasco
What can we say? The Australian census could have been a monumental success of online surveying, setting a future path to electronic voting while reducing the manual cost of distributing, collecting and tallying paper-based forms.
The Australian Government selected IBM — at a cost of millions of dollars — to provide the census platform, despite IBM being blacklisted by the Queensland Government for its botched health payroll rollout in that state.
IBM implemented its own data centre for this purpose, rather than working with existing scalable online cloud-based platforms that already provide proven and tested elastic load-balanced facilities.
IBM and the Australian Bureau of Statistics reportedly load-tested the servers against a model that assumed people would use the site evenly throughout the day, with no consideration for the typical "census night" pattern that sees most people filling in the form after dinner that evening.
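Back-of-the-envelope arithmetic shows how badly a uniform-usage model understates the real peak. The figures below are illustrative assumptions, not the actual census numbers:

```python
def required_capacity(total_submissions, window_hours):
    """Average submissions per second if all arrive within the window."""
    return total_submissions / (window_hours * 3600)

# Hypothetical figures for illustration only.
forms = 10_000_000  # assumed number of household forms

uniform = required_capacity(forms, 24)            # naive all-day model: ~116/s
census_night = required_capacity(forms * 0.6, 3)  # 60% between 7pm and 10pm: ~556/s
```

Under these assumptions the evening peak demands nearly five times the capacity of the all-day average – a gap no amount of testing against the wrong model will reveal.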
To prevent "overseas attacks", while neglecting to consider Australians overseas, a simple geo-blocking configuration was applied to the router.
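The flaw in that logic is easy to state in code. In this deliberately simplified sketch, a request is admitted purely on the country its source IP geolocates to – which locks out every Australian travelling abroad while doing nothing about an attacker using an Australian host:

```python
def crude_geo_block(source_country_code):
    """Admit traffic only when the source IP geolocates to Australia."""
    return source_country_code == "AU"

# A citizen filling in the census from a London hotel arrives with a GB
# address and is refused, while an attacker who rents a box in an
# Australian data centre sails straight through.
expat_allowed = crude_geo_block("GB")           # legitimate user blocked
local_attacker_allowed = crude_geo_block("AU")  # threat not stopped
```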
Yet, after 7pm, services began to fail due to alleged distributed denial of service attacks – which many online experts believe were simply Australians trying to fill in the census, not an actual attack.
In fact, the Senate inquiry suggests the alleged denial-of-service attack did not exist at all, but rather that IBM's systems displayed false positives.
IBM chose to reboot its equipment. Yet IBM had incorrectly configured at least one of its two — yes, just two — routers in place. Part of this misconfiguration involved changes made only in volatile memory – the running configuration – that were never committed to non-volatile memory; when the router rebooted, all those changes were simply lost.
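The running-config versus startup-config split is easy to model. This toy Python sketch – not any vendor's actual API – shows why a reboot discards anything not explicitly written to persistent storage:

```python
import json
import os

class Router:
    """Toy model of running-config (RAM) vs startup-config (NVRAM)."""

    def __init__(self, nvram_path):
        self.nvram_path = nvram_path
        self.running = self._load()   # boot from whatever was saved

    def _load(self):
        if os.path.exists(self.nvram_path):
            with open(self.nvram_path) as f:
                return json.load(f)
        return {}

    def configure(self, key, value):
        self.running[key] = value     # change lives in RAM only

    def write_memory(self):
        with open(self.nvram_path, "w") as f:
            json.dump(self.running, f)  # the commit step that was skipped

    def reboot(self):
        self.running = self._load()   # uncommitted changes vanish
```

Configure a setting, skip `write_memory()`, call `reboot()`, and the setting is gone – exactly the trap the census routers fell into.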
As such, IBM lost its connection to Telstra's network and was now trying to run on a sole Nextgen link.
The ABS didn't help matters, either. Only after 8pm did the ABS begin advising, through social media, that the site was experiencing an outage. By 11pm the ABS stated the census site would be offline for the rest of the day and that it would provide an update in the morning. It was not until two days later that the site was re-opened.
Where do we go from here in advocating how to prevent such a recurrence in your enterprise? The errors were many. Reporting systems failed. Systems were not provisioned correctly. Configurations were not saved to non-volatile memory. Load modelling was flawed. There was insufficient redundancy. The geo-blocking logic was flawed.
The Senate inquiry goes into detail, but for now, suffice it to say the biggest problem in this debacle was the trust the ABS placed in IBM. Alistair MacGibbon, special adviser to the Prime Minister on Cyber Security, sums it up: "In many respects, while I will say to you that this was a failure to deliver on the contractual obligations that IBM had, there was a failure on the part of the ABS to sufficiently check that the contract had been delivered. That could have been achieved through more thorough assessments of the work done for them by IBM and their subcontractors."
John Podesta's emails
Presidential hopeful Hillary Clinton's campaign chairman, John Podesta, had 50,000 emails lifted when a hacking group sent a phishing email on 19 March. In classic phishing fashion, the email insisted there was a problem ("someone unsuccessfully tried to log in"), that urgent action was needed ("you need to change your password now"), and provided a link to do it ("click here").
Of course, hovering over the link in a phishing email reveals that the destination is not the site the email claims to be from. Sadly, people around the world continue to be deceived by phishing emails, and so they continue to be sent.
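That hover check can even be automated. The sketch below uses only Python's standard library; the sample markup and "trusted domain" are assumptions for illustration. It extracts every link from an HTML message and flags those whose real hostname doesn't match the domain the mail claims to come from:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (href, link text) pairs from an HTML email body."""

    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def suspicious_links(html, trusted_domain):
    """Flag links whose actual host is not the domain the mail claims."""
    auditor = LinkAuditor()
    auditor.feed(html)
    return [(href, text) for href, text in auditor.links
            if urlparse(href).hostname != trusted_domain]
```

Fed a message whose anchor text reads like a Google address but whose `href` points elsewhere, the mismatch is flagged immediately – the programmatic equivalent of hovering over the link.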
Yet, in Podesta's case, his chief of staff had the savvy to write to the Clinton operations team help desk asking if the email was legitimate. A foolish staffer replied "This is a legitimate email. John needs to change his password immediately." As we now know, tens of thousands of emails from Podesta's account were subsequently accessed without authorisation and revealed.
I suppose this is one help-desk staff member who may find future employment difficult. It is very disappointing: the chief of staff did the right thing and sought advice from people paid to be tech savvy, and the response was flat-out wrong. We try to teach users not to click on every darn link they get, but what can you do when your own help-desk staff aren't astute enough to recognise phishing?
Of course, why was Podesta using a free mailbox for critical business and sensitive emails? That's the other stupid mistake in this scenario.
What could have prevented this? Firstly, I can't endorse people storing company or sensitive emails in their personal mailboxes. If they don't trust their email administrators or the security of their own systems, then that's a problem to deal with.
Next, all staff need to be trained to stop and think before clicking links. This is particularly painful in this case because the non-technical staff did stop and think! They were not confident the email was genuine and requested support. The help-desk staffer grossly let them down: first, by failing to appreciate basic phishing techniques; second, by not questioning whether Google would really send an email like that; and third, by not even hovering over the link to see where it led. That last step is basic, and anyone can do it.