Google Docs Scam Highlights Phishing’s Low, Low Bar

In-brief: There were a thousand reasons not to click on that Google Docs link…but thousands of people did anyway. Why?

Two days ago, I was one of thousands of Google users who received a super squirrel-y email message with a link to what appeared to be a Google Doc. I declined to click on that link for about 1,000 different reasons, some of which I’ll enumerate below.

But the fact is that many recipients did click on the link – and then clicked through a series of equally squirrel-y prompts that followed it, giving away more access to their personal account at each step along the way. The Google Docs scam that raged for some hours this week may not end up being consequential. There’s no evidence that the attackers compromised sensitive data, and Google acted quickly to hamstring the campaign.

But the incident highlights the low, low bar that scammers must clear to gain access to sensitive personal and corporate accounts, and the high bar that defenders must clear to keep users and data safe.


First, that email. Mine arrived on the afternoon of May 3rd. As a journalist, I was in one of the early waves of recipients. At that point, I was unaware that the scam campaign was rapidly circulating, so I had no foreknowledge that scammy Google Docs invites were making the rounds. No matter. There’s no way I would be nipping at this phish bait anyway. Here’s why:

I don’t know who sent it to me

The first reason I was suspicious of this email was that I didn’t know the sender. As it turns out, in my case he was a Division 1 college track coach and former Olympic distance runner. As a runner, I’m super impressed by this guy’s resume and accomplishments, which I have since researched. But the fact remains that on Wednesday, I didn’t know him from Adam and had no reason to be communicating with him. Strike 1.

Not all recipients had that experience – some received messages from people they did know, which maybe is excusable. Except…

The email wasn’t actually sent directly to me

Even if I hadn’t been curious about or confused by the identity of the sender, the email I received wasn’t sent directly to me. Instead, it was sent to an anonymous Mailinator address that consisted of a string of “h”s – as if the elite hacker who sent this email was too bored or distracted to think up a halfway believable recipient and just held down the “h” key. I was one of an untold number of “BCC” addresses, apparently. Please.

There was no context

I’m a reporter, so I receive a lot of inbound email correspondence each day, as well as the other spam that all of us get. Hundreds of emails – literally. The vast majority go into a low priority queue where they eventually get deleted (sorry). A small percentage – maybe 10% or 20% – get marked as high priority and I read them. Most of those messages get deleted, also. But I read them.

This email actually made it into that tier of my inbox, which surprises me given what a scammy and content-free message it was. The Google Docs link it contained may have had some bearing on that; if so, Google should think about re-evaluating its inbox filter. Regardless, the email had no message to speak of – no context, no lure to click the enclosed Google Doc link. Just the invitation to view it. Again – highly suspicious and not at all enticing. Pass.

It was asking me to click on something

“View this document,” “Check out this web page,” “Visit our Facebook Page,” “Did you see this hilarious video?” “ZOMG this is nuts! Check it out!!!” “Hey, you’re sexy. Want to chat?”

Any message that asks me, as a recipient, to click on anything is inherently suspicious, even when it is sent from someone I know. If it is sent from someone I have never heard of or engaged with in any way, consider it dead on arrival. Sorry.

The really troubling thing

But here’s the really troubling thing. Let’s say that you were tired or bored on Wednesday and you clicked that Google Docs link in the email. The truth is: that still wouldn’t have been enough to compromise your account. That’s because the scam relied on an additional component: a malicious web application hosted on some phishy domains (g-cloud(dot)win, docscloud(dot)info and so on). The app invoked Google’s OAuth service to request access to your email and contacts. Decline the OAuth prompt to select your Google account – or the follow-up request granting the malicious application permission to read, send, delete, and manage your emails and manage your contacts – and you’ve derailed the attack.
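The consent flow the attackers abused is an ordinary OAuth 2.0 authorization request; the attack’s deception was in the app name and the breadth of the scopes, not in any exploit. The sketch below shows roughly what such a request looks like. The client ID, redirect domain, and exact scopes here are illustrative assumptions, not the actual values used in the attack.

```python
from urllib.parse import urlencode

# Google's real OAuth 2.0 authorization endpoint.
AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

params = {
    # Hypothetical values standing in for the attacker's registered app.
    "client_id": "attacker-app-id.apps.googleusercontent.com",
    "redirect_uri": "https://docscloud.example/callback",
    "response_type": "code",
    # Broad scopes: full Gmail access plus the user's contacts. Granting
    # these hands the app everything the phishing campaign needed.
    "scope": "https://mail.google.com/ "
             "https://www.googleapis.com/auth/contacts",
}

# The link the victim is sent to: a legitimate Google consent page,
# displaying whatever app name the attacker registered ("Google Docs").
consent_url = AUTH_ENDPOINT + "?" + urlencode(params)
print(consent_url)
```

The point of the sketch is that every hop in the flow is legitimate Google infrastructure – which is exactly why the consent screen looked trustworthy.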

Alas, despite the sea of red flags – the squirrel-y email, the phishy domain, the mysterious app with a hunger for permissions – thousands of Google users fell for the scam. The security firm Zscaler saw more than 10,000 hits in two hours to the domains used in the attack. Those aren’t huge numbers, measured against Google’s massive user base. But they’re a plenty-big foothold for attackers, especially given the low level of effort put into making this attack convincing or deceptive.

Lessons learned

There are any number of takeaways from the Google Docs scam. As this piece at Ars Technica noted, it revealed weaknesses in Google’s implementation of OAuth, which can make it easy for malicious actors with a modicum of development skills to create applications that disguise their true source. Google also appears to have overlooked the possibility of this kind of application spoofing attack, despite being warned about it more than five years ago.

That may be the real takeaway from this attack – not the complexities of OAuth or web applications, but the simple fact that so many Internet users continue to readily click on links, download suspicious applications and grant wide-ranging permissions to unknown strangers to peruse their information.

Security experts will tell you that phishing attacks, in various forms, are the first step in just about every successful cyber attack – sophisticated or not. And for sophisticated attacks, targeted “spear” phishing attacks are used almost without exception.

And that’s because they work. Verizon found in the latest iteration of its annual Data Breach Investigations Report that social engineering attacks were used in 43% of all breaches during 2016, with phishing the most common social tactic, used in 93% of social engineering attacks.

Around 7% of phishing attacks were successful, on average, with success rates as high as 13% depending on the industry targeted. Scaled to hundreds, thousands or tens of thousands of targets, that success rate provides plenty of opportunities to access sensitive assets and data.
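The scaling argument is simple back-of-the-envelope arithmetic using the 7% average success rate cited above:

```python
# A 7% average phishing success rate (per the Verizon figures cited
# above) scaled across campaign sizes.
success_rate = 0.07

for targets in (100, 1_000, 10_000):
    compromised = int(targets * success_rate)
    print(f"{targets:>6} targets -> ~{compromised} compromised accounts")
```

Even a modest campaign against 10,000 recipients would be expected to yield on the order of 700 compromised accounts.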

The solution(s) to these “social” attacks aren’t easy to implement. Training can help reduce the number of people who fall for phishing attacks. But, as the Google Docs incident suggests, there’s lots of work to do when it comes to informing people of suspicious incidents.

But that will only get you so far. Data from Symantec shows that 1 in 131 emails contained a malicious link or attachment in 2016 – the highest rate in five years. That means employees are likely to encounter malicious content regularly and, over time, will interact with it in some way.
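The cumulative effect of that 1-in-131 rate can be made concrete. Assuming, for the sake of illustration, that each email independently has a 1/131 chance of being malicious, the probability of encountering at least one malicious message climbs quickly with volume:

```python
# If 1 in 131 emails carries a malicious link or attachment (Symantec's
# 2016 figure), the chance of seeing at least one malicious message over
# N emails is 1 - (130/131)**N. Independence is an assumption here.
p_malicious = 1 / 131

for n_emails in (50, 200, 1000):
    p_at_least_one = 1 - (1 - p_malicious) ** n_emails
    print(f"{n_emails:>5} emails -> {p_at_least_one:.0%} chance "
          "of at least one malicious message")
```

For someone receiving hundreds of emails a day, exposure is effectively guaranteed; the only question is how they respond when it happens.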

For some, the attackers’ reliance on Google Docs is a good sign. Travis Smith, a senior security research engineer at Tripwire, noted that the attackers’ decision to leverage Google Docs and spoof an official application may suggest that users are getting more savvy.

“For those that are trained to validate the link before clicking on it, (the Google Docs scam) passes two of the common techniques the majority of internet users are trained to not click on every link they come across (does it come from someone you trust and validate the link is going to a trusted source),” he said in an email statement.

That may be true. But the willingness of users to look past so many red flags in the construction of the phishing email is surely a worrying sign.

The bigger fix is finding ways to insulate our employees, customers and ourselves from the consequences of bad, inadvisable but entirely human mistakes: segregating sensitive data, limiting the ability of applications to claim broad permissions on end user systems and identifying suspicious or malicious resources like domains and servers that are used in attacks.
