Just a little bit of exposed personal data can go a long way for a hacker

Hackers today use our exposed personal data against us. More than 90% of the time, cyberattacks are specifically crafted from users’ public data. To a hacker, and to cyber specialists in general, this exposed, publicly available information is known as OSINT, or Open-Source Intelligence. OSINT can be any publicly available information a hacker can find on a target, such as data from LinkedIn, Instagram, and other social media sites, data brokers, breach repositories, and elsewhere. Hackers use this data to craft and power social engineering attacks. It is the data that tells the attacker who is a vulnerable and valuable target, how best to contact them, how to establish trust, and ultimately how to trick, coerce, or manipulate them. Social engineering attacks fool people into performing a desired action, and criminals use them to lure targets into handing over personal information, opening malicious files, or granting access to sensitive data.

In this post, we highlight some of the ways in which bad actors use our information in social engineering campaigns. Understanding the various ways in which even a limited amount of exposed personal information can be weaponized by social engineers can not only help us become more vigilant and cautious but also, hopefully, motivate us to take proactive measures to protect ourselves and our companies before attacks happen.

Hackers need—and harvest!—personal information to craft attacks

In order to identify, choose, and plan attacks against potential targets, threat actors must first conduct OSINT reconnaissance. Hackers have a variety of tools that automate this process. They begin by searching for information and selecting a vulnerable target, then use the target’s data to create a compelling story that will trick them. The social engineer uses one of several channels, such as email, social media, or a phone call, to contact the target and establish trust. If the communication is convincing enough, the victim will be fooled and will unwittingly click a malicious link or give the attacker sensitive information that will be used against them or their company.

Given the essential role that public data plays in social engineering attacks, it behooves us to be aware of, and especially to limit, the amount of personal information we share online. The larger our digital footprint, the larger our attack surface and the more visible we are to social engineers. The more information attackers have on a target, the easier it is for them to craft convincing, and ultimately successful, social engineering attacks. The less visible we are, the less attractive we are to hackers and the fewer paths to compromise there are to exploit.

While deleting oneself entirely from the internet is not viable in the 21st century, carefully curating what you share, and with whom you share it, can significantly reduce your visible attack surface and help prevent social engineering attacks.

Even a little bit of exposed information can be dangerous

Hackers don’t need much personal information to wreak havoc on your life. They can do a significant amount of damage with just your cell phone number. Typing your number into a people search site, for instance, can reveal your personal information to an attacker in just a few seconds. This information can then be used for social engineering, identity theft, doxing, or other malicious actions, such as taking over your email and other accounts. 

With only your phone number, a hacker can easily determine your email address. They can then contact your mobile provider, claim to be you, and have your number rerouted to their phone (a ‘SIM swap’). Next, they can go to your email provider’s sign-in page, click ‘forgot password,’ and have the reset code sent by SMS straight to them. Once they have your email account, all of your other accounts are potentially vulnerable. This is one reason to avoid using the same username and password across multiple accounts!

Once they have your number, a hacker could also decide to ‘spoof’ it, making your number appear on caller ID even though the call is not coming from you. Using this method, a bad actor can impersonate you to trick one of your friends or colleagues, or call you from a spoofed number, one that you may recognize or trust, in an attempt to socially engineer you or to record your voice for use in another scam.

The fact that a hacker can do so much with just a limited amount of information should make us think twice about what we share publicly, even if it’s only our phone number. To see some of your exposed personal data, get your free report.

Exposed data and credential compromise

Hackers can also do a lot of damage with exposed login credentials. Usernames, email addresses, and corresponding passwords become available on the dark web (and the public web!) once they have been involved in a data breach. You can find out if your personal data has been compromised in a breach by checking haveibeenpwned.com, for example. Whenever this type of information gets exposed, it can leave users vulnerable to credential compromise.
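
If you prefer a programmatic check over the website, Have I Been Pwned also publishes a free Pwned Passwords “range” API that uses k-anonymity, so only the first five characters of a password’s SHA-1 hash ever leave your machine. Below is a minimal Python sketch of that lookup; the example password is purely illustrative.

```python
# Minimal sketch: check a password against the free HIBP Pwned Passwords
# "range" API using k-anonymity (only the first 5 chars of the SHA-1 hash
# are sent; the full password never leaves your machine).
import hashlib
import requests

def pwned_count(password: str) -> int:
    """Return how many times this password appears in known breach corpora."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # Response lines look like "HASH_SUFFIX:COUNT"
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    hits = pwned_count("correct horse battery staple")  # illustrative password
    print("Found in breaches" if hits else "Not found in known breaches", hits)
```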

Credential compromise, also known as ‘credential stuffing,’ happens when an attacker obtains a list of breached username and password pairs (“credentials”) from the dark web and then uses automated scripts or ‘bots’ to test them on dozens or even hundreds of website login forms with the goal of gaining access to user accounts. There are massive lists of breached credentials available to hackers on the black market and, since most people reuse passwords across different accounts, it is inevitable that some of these credentials will work on other accounts, either personal or corporate.

Once hackers have access to a customer account through credential stuffing, they can use the account for various nefarious purposes such as stealing assets, making purchases, or obtaining more personal information that can be sold to other hackers. If the breached credentials belong to an employee, the hacker can use that access to compromise a company’s systems and assets. 

Since credential compromise relies on the reuse of passwords, avoiding the reuse of the same or similar passwords across different accounts is critical. Always use strong passwords that are difficult to guess and change them frequently. Additionally, using multi-factor authentication, which requires users to authenticate their login with something they physically have and something they personally know, is a good defense against credential stuffing since an attacker’s bots cannot replicate this validation method. 
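
As a rough illustration of the “something you have” factor, here is a minimal sketch of time-based one-time passwords (TOTP) using the open-source pyotp library. This is a generic example, not a description of any particular vendor’s MFA product, and the account and issuer names are made up.

```python
# Minimal TOTP sketch using pyotp (pip install pyotp). The secret is enrolled
# once on the user's device (e.g., via a QR code); afterwards a login requires
# both the password and a current 6-digit code.
import pyotp

secret = pyotp.random_base32()           # stored server-side at enrollment
totp = pyotp.TOTP(secret)

# Hypothetical account and issuer names, purely for illustration.
print("Provisioning URI for an authenticator app:",
      totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

code = totp.now()                        # what the user's device would display
print("Current code:", code)
print("Verified:", totp.verify(code))    # server-side check at login time
```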

Recent real-world examples reveal the dangers of exposed personal data for companies

Companies should be especially wary of the role that employees’ exposed personal data plays in cyberattacks. Three recent examples that made headlines highlight how just a limited amount of exposed employee information can be used to craft a successful social engineering campaign and breach organizations.

Twilio and Cloudflare

In August 2022, hackers targeted two security-sensitive companies, Twilio and Cloudflare, as part of a larger ongoing campaign dubbed “Oktapus” that ultimately compromised more than 130 organizations and netted the attackers nearly 10,000 login credentials. In the case of Twilio, the hackers began by cross-referencing public employee data from Twilio’s LinkedIn roster (the starting point of most attacks) against existing exposed third-party breach data sets (e.g., haveibeenpwned.com) and data broker data (e.g., Whitepages). This gave the attackers a list of employees, and their personal information, to target. The hackers then created a fake domain and login page that looked like Twilio’s (twilio-sso.com or twilio-okta.com). Using the acquired personal data, they sent text messages to employees that appeared to be official company communications. The link in the SMS message directed the employees to the attackers’ fake landing page, which impersonated their company’s sign-in page. When the employees entered their corporate login credentials and two-factor codes on the fake page, they handed them straight to the attackers, who then used those valid credentials on the actual Twilio login page to illegally access its systems.
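
One modest defensive countermeasure is to watch newly observed or newly registered domains for your brand names. The sketch below flags lookalike domains such as twilio-sso.com; the brand list, allowlist, and similarity threshold are illustrative assumptions, not Twilio’s actual domain inventory.

```python
# Minimal sketch: flag lookalike domains that embed or closely resemble a
# protected brand name but are not on the organization's allowlist.
from difflib import SequenceMatcher

BRANDS = {"twilio", "okta"}                 # illustrative protected names
ALLOWLIST = {"twilio.com", "okta.com"}      # illustrative legitimate domains

def looks_suspicious(domain: str, threshold: float = 0.75) -> bool:
    domain = domain.lower().strip(".")
    # Legitimate apex domains and their subdomains pass.
    if domain in ALLOWLIST or any(domain.endswith("." + d) for d in ALLOWLIST):
        return False
    registrable = domain.split(".")[-2] if "." in domain else domain
    # Brand name embedded in an unapproved domain, e.g. "twilio-sso.com".
    if any(brand in registrable for brand in BRANDS):
        return True
    # Fuzzy match catches typosquats such as "tw1lio.com".
    return any(SequenceMatcher(None, registrable, b).ratio() >= threshold
               for b in BRANDS)

for d in ["twilio.com", "twilio-sso.com", "twilio-okta.com", "tw1lio.com", "example.com"]:
    print(d, "->", "SUSPICIOUS" if looks_suspicious(d) else "ok")
```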

Although Cloudflare was also targeted in this way, the company stopped the breach thanks to its use of FIDO-compliant MFA keys. Even so, Cloudflare’s CEO, senior security engineer, and incident response leader jointly stated that “This was a sophisticated attack targeting employees and systems in such a way that we believe most organizations would be likely to be breached.”

Indeed, the exposed personal data used to power the Oktapus attacks shows how dangerous even a small amount of public data can be in the hands of a social engineer.

Cisco 

In another example, from May 2022, the corporate network of multinational security company Cisco was breached by hackers with links to both the Lapsus$ and Yanluowang ransomware gangs. In this case, the hackers acquired the username or email address of a Cisco employee’s Google account along with the employee’s cell phone number. They targeted the employee’s mobile device with repeated voice phishing attacks with the goal of taking over the Google account. The employee was using a personal Google account that was syncing company login credentials via Google Chrome’s password manager. The account was protected by multi-factor authentication (MFA), however, so the hackers posed as technical support staff from well-known companies and sent the employee a barrage of MFA push requests until the target, out of fatigue, finally accepted one of them. This gave the attackers access to the Cisco VPN through the user’s account. From there the attackers were able to gain further access, escalate privileges, and drop payloads before being slowed and contained by Cisco. The TTPs (tactics, techniques, and procedures) used in the attack were consistent with pre-ransomware activity.
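
MFA fatigue attacks like this one leave a telltale signature: an unusual burst of push requests to a single user in a short window. The sketch below shows one simple way a defender might flag that pattern; the threshold, window, and user name are illustrative assumptions, not any vendor’s real alerting schema.

```python
# Minimal sketch: flag a possible "MFA fatigue" attack by counting push
# requests per user inside a sliding time window.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)   # illustrative sliding window
THRESHOLD = 5                    # illustrative: >5 pushes in 10 minutes

recent = defaultdict(deque)      # user -> timestamps of recent pushes

def record_push(user, ts):
    """Record an MFA push for `user`; return True if the burst looks like fatigue."""
    q = recent[user]
    q.append(ts)
    while q and ts - q[0] > WINDOW:
        q.popleft()              # drop events outside the window
    return len(q) > THRESHOLD

# Toy event stream: eight pushes, one minute apart, against one account.
base = datetime(2022, 5, 24, 9, 0)
for i in range(8):
    if record_push("jdoe", base + timedelta(minutes=i)):
        print(f"ALERT: possible MFA push fatigue against jdoe at push #{i + 1}")
```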

Uber 

Most recently, the ride-hailing company Uber was breached by a hacker thought to be linked to the Lapsus$ group, who gained initial access by socially engineering an Uber contractor. The attacker had apparently acquired the contractor’s corporate password on the dark web after it had been exposed through malware on the contractor’s personal device. The attacker then repeatedly tried to log in to the contractor’s Uber account, which sent multiple two-factor login approval requests to the contractor’s phone. Finally, the hacker posed as Uber IT and sent a message asking the contractor to approve the sign-in. The exhausted contractor eventually did, which provided the hacker with the valid credentials needed to gain access to Uber’s VPN. Once inside, the hacker found a network share containing PowerShell scripts. One of these scripts contained admin credentials for Thycotic [a privileged access management solution]. With access to Thycotic, the hacker was able to reach all other internal systems by using the passwords it stored.
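
One lesson from this chain is that hard-coded credentials in scripts on shared drives are a gift to an intruder. As a rough illustration, the sketch below scans a directory of scripts for likely hard-coded secrets; the regex patterns and the share path are illustrative assumptions, and real secret scanners use far more patterns plus entropy checks.

```python
# Minimal sketch: scan a directory of scripts for likely hard-coded credentials.
import re
from pathlib import Path

# Illustrative patterns only; production scanners are far more thorough.
PATTERNS = [
    re.compile(r"(password|passwd|pwd)\s*[:=]\s*['\"][^'\"]{4,}['\"]", re.I),
    re.compile(r"(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]", re.I),
]

def scan(root):
    for path in Path(root).rglob("*"):
        if path.suffix.lower() not in {".ps1", ".sh", ".py", ".bat"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            if any(p.search(line) for p in PATTERNS):
                print(f"{path}:{lineno}: possible hard-coded credential")

if __name__ == "__main__":
    scan(r"\\fileserver\shares\scripts")  # hypothetical network-share path
```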

The Uber hack is a prime example of how, with only a limited amount of exposed personal data and some social engineering, a hacker can easily trick, manipulate, or coerce a human and compromise a company’s systems. See our key takeaways and remediation recommendations.

Limiting exposed personal data to prevent attacks

The examples provided here illustrate some of the common ways our personal information can be successfully weaponized by today’s hackers. It is now more urgent than ever for people and companies to know and manage their exposed public information proactively to help prevent attacks. Attackers are opportunists who care about their ROI. By limiting exposed personal data, it becomes more difficult and therefore more expensive for threat actors to succeed in social engineering attacks. Companies that recognize this fact pattern and take action to protect their employees will be more likely to avoid expensive and damaging breaches.

FOR LAPSUS$ SOCIAL ENGINEERS, THE ATTACK VECTOR IS DEALER’S CHOICE

By Matt Polak, CEO of Picnic

Two weeks ago, at a closed meeting of cyber leaders focused on emerging threats, the group agreed that somewhere between “most” and “100%” of cyber incidents plaguing their organizations pivoted on social engineering. That’s no secret, of course, as social engineering is widely reported as the critical vector in more than 90% of attacks.

LAPSUS$, a hacking group with a reputation for bribery and extortion fueled by a kaleidoscope of social engineering techniques, typifies the actors in this emerging threat landscape. In the past four months, they’ve reportedly breached Microsoft, NVIDIA, Samsung, Vodafone and Ubisoft. Last week, they added Okta to the trophy case.

For the recent Okta breach, theories abound about how the specific attack chain played out, but it will be some time before those investigations yield public, validated specifics. 

As experts in social engineering, we decided to answer the question ourselves—with so many ways to attack, how would we have done it? Our thoughts and findings are shared below, with some elements redacted to prevent malicious use.

How Targeted was this Social Engineering Attack?

To start, we know that Okta’s public disclosure indicates the attacker targeted a support engineer’s computer, gained access, installed software supporting remote desktop protocol (RDP) and then used that software to continue their infiltration:

“Our investigation determined that the screenshots…were taken from a Sitel support engineer’s computer upon which an attacker had obtained remote access using RDP…So while the attacker never gained access to the Okta service via account takeover, a machine that was logged into Okta was compromised and they were able to obtain screenshots and control the machine through the RDP session.”

For attackers to successfully leverage RDP, they must:

  1. Be able to identify the location of the target device—the IP address.
  2. Know that the device can support RDP—Windows devices only.
  3. Have knowledge that RDP is exposed—an open RDP port is not a default setting (a quick defensive check is sketched below).
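
For defenders, the third point is the easiest to verify. The sketch below is a minimal check of whether TCP port 3389 (the default RDP port) answers on a given host; the target address is a documentation-range placeholder, and you should only scan systems you own or are authorized to test.

```python
# Minimal sketch: defender-side check of whether RDP (TCP 3389) is reachable.
import socket

def rdp_exposed(host, port=3389, timeout=3.0):
    """Return True if something accepts a TCP handshake on the RDP port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    target = "203.0.113.10"   # documentation-range address, purely illustrative
    print(f"RDP reachable on {target}: {rdp_exposed(target)}")
```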

Let’s take a look at each of these in more detail: 

How Can an Attacker Identify Target Devices to Exploit RDP? 

Sophisticated attackers don’t “boil the ocean” in the hope of identifying an open port into a whale like Okta—there are 22 billion connected devices on the internet. In fact, LAPSUS$ is a group with a history of leveraging RDP in their attacks, to the point that they openly offer cash to employees of target organizations for credentials, provided RDP can be installed—quite a shortcut.

Putting aside the cultivation of an insider threat, attackers would rightly assume a company like Okta is a hard target, and that accessing it via connected third parties would be an easier path to success.

Our team regularly emulates sophisticated threat actor behaviors, so we started by mapping the relationships between Okta and different organizations, including contractors and key customers. Cyber hygiene problems are often far worse for large organizations than individuals, and our methods quickly uncovered data that would be valuable to threat actors. For example, Okta’s relationships with some suppliers are detailed here, which led us to information on Sitel / Sykes in this document. Both are examples of information that can be directly weaponized by motivated attackers.

Two killer insights from these documents:

  1. Sykes, a subsidiary of Sitel, provides external technical support to Okta. 
  2. Sykes uses remote desktop protocol as a matter of policy.

This information makes an attacker’s job easier, and would be particularly interesting to a group like LAPSUS$—an RDP-reliant contractor with direct access to Okta’s systems is a perfect target.

Recon 101: Exploit Weak Operational Security Practices

With a target company identified, we ran a quick search of LinkedIn, which revealed thousands of Sitel employees discussing different levels of privileged access to their customer environments. These technical support contractors are the most likely targets of attacks like the ones catching headlines today. Despite the investigation and negative publicity associated with this attack, more than a dozen Sitel employees are still discussing privileged access in the context of their work with Okta (never mind the dozens of other companies).

Now that we have defined this group, our focus narrows to deep OSINT collection on these individuals—an area where Picnic has substantial expertise. OSINT stands for open-source intelligence, and it is the process by which disparate pieces of public information are assembled to create a clear picture of a person’s life, a company, a situation, or an organization. Suffice it to say that our standard, automated reconnaissance was sufficient to craft compelling pretext-driven attacks for most of our target group.

To cast this virtual process in a slightly different light, imagine a thief casing your neighborhood. Good thieves spend weeks conducting reconnaissance to identify their targets. They walk the streets and take careful notes about houses with obscured entryways, unkempt hedges, security lights and cameras, or valuables in plain sight. 

Social engineers are no different: they are essentially walking around the virtual world looking for indicators of opportunity and easy marks.  

Before we explore how to go from reconnaissance to the hardware exploit, let’s recap:

  1. We are emulating threat actor behaviors before Okta’s breach.
  2. We conducted organizational reconnaissance on our target: Okta.
  3. We identified a contractor likely to have privileged access to the target: Sitel.
  4. We narrowed the scope to identify people within Sitel who could be good targets.
  5. We further narrowed our focus to a select group of people that appear to be easy targets based on their personal digital footprints.

All of this has been done using OSINT. The next steps in the process are provided as hypothetical examples only. Picnic did not actively engage any of the identified Sitel targets via the techniques below—that would be inappropriate and unethical without permission. 

Identifying the Location of the Device for RDP Exploit

There are three ways that attackers can identify the location of a device online: 

  1. Pixel tracking
  2. Phishing
  3. OSINT reconnaissance

Just as we conducted OSINT reconnaissance on people and companies, the same process is possible to identify the location of the target device. By cross-referencing multiple sources of information such as data breaches and data brokers, an attacker can identify and leverage IP addresses and physical addresses to zero in on device locations. This is always the preferred approach because there is no risk that the attacker will expose their actions. 

Pixel tracking is a common attacker (and marketer!) technique to know when, and importantly where, an email has been opened. For the attacker, this is an easy way to identify a device location. Phishing is similar to pixel tracking: a clicked link can provide an attacker with valuable device and location intelligence, but pixel tracking only requires that an image be viewed in an email client. No clicks necessary. 
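
To make the mechanics concrete, here is a minimal sketch of how a tracking pixel works on the server side, using the Flask web framework. The route, token scheme, and port are illustrative; in a real campaign the token would map back to a single recipient, so each image request reveals who opened the email, from which IP address, and when.

```python
# Minimal tracking-pixel sketch using Flask (pip install flask). The email
# contains <img src="http://tracker.example/pixel/abc123">; when the mail
# client loads the image, this endpoint logs the open.
from datetime import datetime, timezone
from flask import Flask, Response, request

app = Flask(__name__)

# 1x1 transparent GIF (43 bytes)
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff!"
         b"\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
         b"\x00\x02\x02D\x01\x00;")

@app.route("/pixel/<token>")
def pixel(token):
    # In a real campaign, `token` maps back to one specific recipient/email.
    print(f"{datetime.now(timezone.utc).isoformat()} open: token={token} "
          f"ip={request.remote_addr} ua={request.headers.get('User-Agent')}")
    return Response(PIXEL, mimetype="image/gif")

if __name__ == "__main__":
    app.run(port=8080)   # illustrative port
```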

Pixel tracking and phishing are examples of technical reconnaissance that were more easily thwarted pre-COVID, when employees were cocooned in corporate security layers. With significant portions of knowledge workers still working at home, security teams must contend with variable and amorphous attack surfaces.

For social engineers, this distribution of knowledge workers is an asymmetric advantage. Without a boundary between work life and home life, the available surface area on which to conduct reconnaissance and ply attacks essentially doubles.

Social engineering’s role in the RDP exploit

According to Okta’s press release, an attacker “obtained remote access using RDP” to a computer owned by Sitel. Based on threat actor emulation conducted by our team and the typical LAPSUS$ approach, it is clear that social engineering played a key role in this attack, likely via a targeted spear-phishing campaign, outright bribery, or a similar delivery mechanism. That step would have provided attackers not only with the device location information needed for the RDP exploit, but also with important details about the device and other security controls.

Remember that social engineers are hackers who focus on tricking people so they can defeat technical controls. Tricking people is easy when you know something personal about them—in fact, our research indicates attackers are at least 200x more likely to trick their targets when the attack leverages personal information.

The amount of time, energy, and resources required to complete this reconnaissance was significant, but it was made easier by the two key documents found during our initial recon on the target. While there are other breadcrumbs that could have led us down the same path, many of those paths offered less clear value, while these two documents essentially pointed to “easy access this way.” Finding these documents quickly and easily means that hackers are likely to prioritize this attack path over others—the easier it is, the less time and resources it consumes, and the greater the return on effort. 

Key learnings for cyber defenders

Recognize you are at war. Make no mistake about it, we are in a war that is being fought in cyberspace, and unfortunately companies like Okta and Sitel are collateral damage. Just as in a hot war, one of the most successful methods for countering insurgent attacks is to “turn the map around” to see your defenses from the perspective of the enemy. This outside-in way of thinking offers critical differentiation in the security-strategy development process, where we desperately need to change the paradigm and take proactive measures to stop attacks before they happen. I wrote another short article about how to think like an attacker that might be helpful if you are new to this approach.

Be proactive and use MITRE—all of it. The prevailing method used by cyber defenders to map attacker techniques and reduce risk is called the MITRE ATT&CK framework. The design of the framework maps fourteen stages of an attack from the start (aptly called Reconnaissance) through its end (called Impact)—our team emulated attacker behaviors during the reconnaissance stage of the attack in this example. Cyber defenders are skilled at reacting to incidents mainly because legacy technologies are reactive in nature. MITRE recommends a proactive approach to remediating the reconnaissance stage to “limit or remove information” harvested by hackers. Defenders have an opportunity to be proactive and leverage new technologies that expand visibility and proactive remediation beyond the corporate firewall into the first stage of an attack. Curtailing hacker reconnaissance by removing the data hackers need to plan and launch their attack is the best practice according to MITRE. 

Get ahead of regulations. Federal regulators are also coming upstream of the attack and have signaled a shift with new SEC disclosure guidance, which requires companies to disclose cybersecurity incidents sooner. Specifically, one key aspect of the new rule touches on “…whether the [company] has remediated or is currently remediating the incident.” New technologies that emulate threat actor reconnaissance can make cyber defenders proactive protectors of an organization’s employees, contractors, and customers long before problems escalate to front page news. These new technologies allow companies to remediate risk at the reconnaissance stage of the attack—an entirely new technology advantage for cyber defenders. 

Every single attack begins with research. Removing the data that hackers need to connect their attack plans to their human targets is the first and best step for companies who want to avoid costly breaches, damaging headlines, and stock price shocks.

An ocean of data…and of ears

How much data is produced every day? A quick Google search will tell you the current estimate stands at 2.5 quintillion bytes. For those of us who don’t know the difference between our zettabytes and yottabytes, that’s 2.5 followed by a staggering 18 zeros (about 2.5 exabytes)! Basically, the simple answer is a lot. A lot of data is produced and collected every day – and it is growing exponentially.

It might be hard to believe but the vast majority of the world’s data has been created in the last few years. Fueled by the internet of things and the perpetual growth of connected devices and sensors, data continues to grow at an ever-increasing rate as more of our world becomes digitized and ‘datafied’. In fact, IDC predicts the world’s data will grow to 175 zettabytes by 2025. It’s mind-boggling to think that humans are generating this, particularly when looked at in the context of one day. Or is it?

Data captured and stored daily includes anything and everything from photos uploaded to social media from your latest vacation, to every time you shout at your Google Home or Amazon Echo to turn on the radio or add to the shopping list, even information gathered by the Curiosity rover currently exploring Mars. Every digital interaction you have is captured. Every time you buy something with your contactless debit card? Every time you stream a song, movie or podcast? It’s all data. When you walk down the street or go for a drive, if you have a digital device, whether it’s your smartphone, smartwatch, or both – more data.

The majority of us are aware, if possibly apathetic, that this data is collected by companies – but what might be more pernicious is the number of listeners out there and the level of granular engagement that is tracked. From device usage to Facebook likes, Twitches, online comments, even viewing-but-skipping-over a photograph in your feed, whether you swipe left or right on Tinder, filters you apply on selfies – this is all captured and stored. If you have a Kindle, Amazon knows not only how often you change a page but also whether you tap or swipe the screen to do so. When it comes to Netflix, yes, they know what you have watched, but they also capture what you search for, how far you’ve gotten through a movie and more. In other words, big data captures the most mundane and intimate moments of people’s lives.

It’s not overly surprising that companies want to harvest as much about us as possible because – well, why wouldn’t they? The personal information users give away for free is transformed into a precious commodity. The more data produced, the more information they have to monetize, whether it’s to help them target advertisements at us, track high-traffic areas in stores, show us more dog videos to keep us on their site longer, or even sell it to third parties. For the companies, there’s no downside to limitless data collection.

Data management: Data protection is weak

The nature of technology evolution is that we have moved from ephemeral management of data to permanent management of data, and the driver of that shift is functionality. The economics of the situation mean there is very little cost to storing massive amounts of data. But what of the security of that data – the personal, the mundane, the intimate day-to-day details of our lives that we in some cases unwillingly impart?

Many express concerns about Google, Facebook and Amazon having too much influence. Others believe it matters not what information is collected but what inferences and predictions are made based upon it – how companies can use it to exert influence over decisions such as whether someone should keep their health care benefits or be released on bail, or even whether governments could sway the electorate – Cambridge Analytica, I hear you shout. However, while these are valid concerns, what should be more troubling is the prospect of said personal data falling into the wrong hands.

Security breaches have become all too common. In 2019, cyber-attacks were considered among the top five risks to global stability. Yahoo holds the record for the largest data breach of all time, with 3 billion compromised accounts. Other recent notable breaches include First American Financial Corp., which had 885 million records exposed online, including bank transactions, social security numbers and more; and Facebook, which saw 540 million user records exposed on Amazon cloud servers. However, they are certainly not alone; they merely sit atop a long list of breaches. Moreover, while it is certainly easier to point the finger in the direction of hackers, well-known brands including Microsoft, Estee Lauder and MGM Resorts have accidentally exposed data online – visible and unprotected for any and all to claim.

COVID-19 has only compounded the issue, providing perfect conditions for cyberattacks and data breaches. By the end of Q2 2020, it was said to be the “worst year on record” in terms of total records exposed. By October, the number of records breached had grown to a mind-boggling 36 billion.

Brands and companies – mostly – do not have bad intentions. They are guilty of greed, perhaps, but these breach examples highlight how ill-prepared the industry is to protect harvested data. The volume collected, along with often lackluster security, provides easy pickings for exploitation. In the wrong hands, our seemingly mundane data can be combined with other data streams to provide the ammunition for an effective social engineering campaign. For example, there is a lot of information that can be “triangulated” about you that may not be represented by explicit data. Even just by watching when and how you behave on the web, social engineers can determine who your friends and associates are. Think that doesn’t mean much? That information is a key ingredient in many kinds of fraud and impersonation.

One could postulate that the progress of social engineers should not be thought of merely as an impressive technological advancement in cybercrime. Rather, these criminals have peripherally benefitted from every other industry’s investment in data harvesting.

Data management: Rethinking data exposure

We give up more data than we’ll ever know. While it would be nearly impossible, if not unrealistic, to shut down this type of collection completely, we need to rethink how much we unwittingly disclose in order to reduce the risk of falling foul of cybercrime.

Cybercrime awareness is no longer enough to reduce risk

People’s perceptions have changed. Not so long ago we thought nothing of kids playing outside all day alone, unchaperoned visits to a friend’s house, walking to school alone – the list goes on. But as times have changed, we have become much more vigilant about personal safety. The same can be said for the online world. The majority of us are well-aware of cybercrime and are generally on our guard for suspicious emails and websites. Yet despite this everyday vigilance, social engineers find ways to take advantage of our online behavior.

Cybercrime: We are already suspicious

When it comes to business IT security, company leaders generally want to establish a strong cybersecurity culture within their organizations. It’s a very natural thing to do. Human resources department training typically focuses on awareness and highlights typical mistakes that open the doors to a business’s systems and data. It shines a spotlight on what it means to be aware. But conducting security awareness training is not enough to reduce risk completely. Why? The truth is that most people are already “cyber aware.” We have all already formed an opinion on cybersecurity, and on whom we trust.

Just think about it. How often do you hear a knock on the door these days that isn’t an unexpected visitor? A generation ago, a ringing doorbell was nearly cause for celebration: everyone in the house leaped into action in near-perfect unison. But people’s attitudes have changed. We are now not just suspicious, but actually distrustful, of people knocking on our door. We are conscious that not everyone who calls at the door nowadays is legit. That distrust is born of the fact that we are aware of the many door-to-door scams, or have been the victim of a cold caller ourselves. Besides, thanks to smartphones, we usually already know in advance if someone is dropping by – anyone else is considered an uninvited caller. In this way, the escalation of increasingly invasive marketing and social networking manipulation, coupled with technology that makes us easier to track and easier to target, has driven a culture-wide sense of security awareness.

The same can be said for cybersecurity. Nearly everyone is aware of the classic Nigerian 419 scam: in exchange for a few thousand dollars up front, email recipients are promised several million in return. Word spread years ago that this, and many others like it, was a scam, and people now ignore such basic cons out of habit. Like the bogus salesmen calling at the door, we already have a heightened sense of awareness, causing us to be more cautious.

Cybersecurity training: Awareness alone doesn’t solve the problem

There is no question that awareness of cybersecurity is high now and has been for a couple of years – and that’s a good thing. The problem is that while cybersecurity training within an organization is well-intentioned, it is solely invested in creating awareness. At this point, however, we are way past awareness. People are already suspicious of bogus emails, SMS messages and calls.

The real focus should be on the personal attack surface, i.e., the aforementioned data that makes us easier to track and to target. Attention needs to be given to the significance of personal information, the sharing of it, and how to defend it. While we are “aware” that cybercrime exists, many of us may not fully understand the implications of the actions that open the door to it. This is partially why social engineering attacks and the large-scale data breaches they enable are often so successful – and you only need to look at the stats.

A 2017 Tenable survey found that nearly all participants were aware of security breaches. What the survey also revealed was that many admitted to taking few precautions to protect their personal data and to not changing their security habits in the face of a public threat. Not surprisingly, another study from Stanford University and security firm Tessian revealed that nine in ten (88%) data breach incidents are caused by employee mistakes – and costly ones at that. In 2020 alone, data breaches cost businesses an average of $3.86 million.

So, what, in light of this, are the best steps to start mitigating risk?

Reduce Employee Burden: Recognition of a person’s attackable surface

When it comes to reducing risk through employee training, businesses need to recognize that many people fall into one of two categories:

  1. There are those who are very concerned about personal data security. This cohort wants to keep their data safe and does not want anyone “messing” with their personal information. They are already very much engaged with cybersecurity – they are not the problem.
  2. Then there are those who are the reverse. They are not interested in cybersecurity. They are aware, but they don’t feel at risk, and as such are not willing to spend effort on it.

Trying to “convert” the second group of employees into champions of cyber hygiene or cybersecurity can be, for want of a better phrase, a waste of time. Until you can put cybersecurity into personal terms for each person, it is nearly impossible to change entrenched habits and opinions.

However, if you can pinpoint which extra-professional avenues of attack are most likely for an individual’s data profile, you may be able to make progress against this skepticism. It’s about recognition of a person’s attackable surface. Concern for one’s own personal safety will always trump concerns for company safety. Or, put in analog terms, you don’t have to convince suspicious people not to answer the phone; you need to convince them not to publish their phone number in the first place. The smarter everyone is about his or her personal data, the more secure the company will be.

Security awareness training is a common corporate exercise – but it is no longer enough to reduce risk. By empowering your employees to safeguard their own digital footprints – along with company data – you can start to develop truly formidable foes of cybercrime.