An electric utility company takes cybersecurity beyond the perimeter

The challenge

This client, like most utilities, possesses a strong culture of safety and a similar commitment to security. As a utility, it also operates in one of the 16 sectors designated by the US Cybersecurity and Infrastructure Security Agency (CISA) as part of the United States’ critical infrastructure. This means that the organization faces a specific set of requirements, which include disciplined cybersecurity practices.

Traditional cybersecurity has focused mainly on the internal environment and on data layers within the organization. For that reason, the organization sought a solution that expanded the purview and practice of cybersecurity beyond its walls. Management felt the need to identify and address vulnerabilities in the data “out there,” where more than 90% of cyberattacks now originate.

They also wanted an external perspective to support an outside-in approach to security. They wanted to know how malicious actors could gather information about users to mount an attack on the company. What could those actors find on social media profiles and what messages could they use to launch socially engineered attacks? What could they learn about the organization’s hardware and software and its methods of authentication? What could they learn about its supply chain: What products does it buy? From whom does it buy these products? How does it pay its vendors? What could attackers learn about the leadership team, the Board, employees, investors, and other stakeholders that would make the organization vulnerable to attacks?

Another goal was to broaden the conversation about cybersecurity within the organization. Given the exposures that can be unwittingly created by users with legitimate access to the organization’s systems, leaders had come to see that cybersecurity is everyone’s responsibility. They also wanted to go beyond simply training and coaching people on how to “be careful” when using their laptops and devices; they wanted easy-to-use tools to support users’ efforts to keep systems secure.

Before learning about Picnic, the security team had worked to understand which publicly available data could create vulnerabilities and, to address reputational risk, what people were saying about the company. Yet these efforts were ad hoc, such as monitoring social media feeds, and relied on only a handful of tools, such as customized scripts and open-source utilities. The team wanted to harness data science to see across the internet and to identify the controls they really needed to have in place.

In sum, the security team realized that their environment lacked a defined perimeter, which meant that firewalls, endpoint protection tools, and role-based access controls could no longer provide the needed level of security.

The solution

Picnic provided both ease of enrollment for employees and tools that enabled employees to easily remove publicly available data on themselves.

Picnic’s capabilities let a user simply agree to have their information removed from multiple public data-gathering sources, with Picnic handling the removal for both the user and the organization.

The Picnic Command Center enabled analysts from the security operations team to seek out types of data that expose the organization to risk. That, in turn, positioned the team to educate employees about ways in which an attacker could use a particular type of information against them or the company. This created a clear division of responsibility: The organization flagged the risks while the employees controlled the data they deleted or left up.

The organization presented Picnic as a benefit to employees, which it is. Although other identity protection tools are presented that way, they are primarily geared to post-event remediation. In contrast, Picnic enables each employee to identify and deal with their publicly available data in private, so they can lower their individual risk and, by extension, risk to the organization. Each employee gets to make changes dictated by their own preferences rather than their employer’s. With information from Picnic, employees can, for example, adjust the privacy settings on their social media accounts so that only specific family members and friends can view them. Whatever steps they take reduce their exposure to attack—a benefit to them and to the organization.

Clear and consistent communications during rollout clarified both the rationale and use of the tools. Integration with the organization’s existing technology was straightforward, with Picnic tools fitting readily into existing solutions. The client/Picnic team took an agile approach to both the development methodology and operational implementation.

The impact

Picnic has helped the security staff identify vulnerabilities and has helped employees monitor and limit their risk exposures. The tools have provided protective controls for employees while minimizing extra steps and added work on their part. Picnic has also helped the security staff more effectively identify where potential threats might originate and the various forms that attacks could take.

Yet the impact of Picnic extends beyond what the platform itself does. It has enabled the security staff to launch a broader and deeper conversation about cybersecurity at the organization. This has created the opportunity to better understand, explain, and contribute to the organization’s culture of security. The security staff does not usually use the term “culture of security” with employees but the leadership team discusses it and works to create that culture. Picnic has accelerated that effort.

Picnic has also reduced burdens on the security team. It has helped to establish that everybody needs to maintain high awareness of how their social media settings or internet presence create risks. By their nature, the tools dramatically increase employee engagement in cybersecurity in ways that training sessions or video tutorials cannot.

The Picnic toolset has delivered capabilities that allow security staff to see risks outside of their corporate walls and to mitigate them. The security team can now not only alert users to the risks they face; they have also initiated new controls, such as multi-factor authentication on items that could be of use to an attacker. They have added new controls over remote access and other attack vectors where an attacker could access personal information from a data log or a compromised website. The organization is also using password reset tools that make users’ lives easier, while increasing their efficiency and effectiveness.

While no single solution can eliminate every data security issue, Picnic has broadened the organization’s view of its threat landscape and positioned it to better address risks. The platform has also reduced the organization’s attack surface, broadened the conversation about cyber risk and security, and delivered increased security to employees and the organization alike. This has occurred in the context of Picnic’s sound and sustainable methodology, process, and program for identifying and addressing social engineering threats.

1 https://www.cisa.gov/critical-infrastructure-sectors

REDTEAM RAW, EPISODE #2: Jean-Francois Maes on how he became a SANS Instructor and Offensive Cyber Security Expert (RedTeamer)

In the second episode of RedTeam Raw, Picnic’s Director of Global Intelligence, Manit Sahib, sits down with certified SANS instructor, author, researcher, consultant, and rock star RedTeamer Jean-François Maes, known on Twitter as @Jean_Maes_1994. Based in Belgium, Jean-François is the founder of redteamer.tips and is an avid contributor to the offensive security community. He is currently a security researcher at HelpSystems where he aids the Cobalt-Strike team in developing new features.

We discuss how he got into InfoSec and became a SANS instructor; the difference between pentesting, Red Teaming, and Purple Teaming; the most common ways of gaining a foothold as a RedTeamer; a RedTeam story with flowers from Jean-François; his tool Clippi-B; how he manages his time; motivations and resources for becoming a hacker; advice for getting into the industry and being able to stand out; Jean’s biggest challenge at the moment, and where he sees the industry going.

Like and subscribe for future episodes of RedTeam Raw here: https://www.youtube.com/channel/UCVn3…

FOR LAPSUS$ SOCIAL ENGINEERS, THE ATTACK VECTOR IS DEALER’S CHOICE

By Matt Polak, CEO of Picnic

Two weeks ago, at a closed meeting of cyber leaders focused on emerging threats, the group agreed that somewhere between “most” and “100%” of cyber incidents plaguing their organizations pivoted on social engineering. That’s no secret, of course, as social engineering is widely reported as the critical vector in more than 90% of attacks.

LAPSUS$, a hacking group with a reputation for bribery and extortion fueled by a kaleidoscope of social engineering techniques, typifies the actors in this emerging threat landscape. In the past four months, they’ve reportedly breached Microsoft, NVIDIA, Samsung, Vodafone and Ubisoft. Last week, they added Okta to the trophy case.

For the recent Okta breach, theories abound about how the specific attack chain played out, but it will be some time before those investigations yield public, validated specifics. 

As experts in social engineering, we decided to answer the question ourselves—with so many ways to attack, how would we have done it? Our thoughts and findings are shared below, with some elements redacted to prevent malicious use.

How Targeted was this Social Engineering Attack?

To start, we know that Okta’s public disclosure indicates the attacker targeted a support engineer’s computer, gained access, installed software supporting remote desktop protocol (RDP) and then used that software to continue their infiltration:

“Our investigation determined that the screenshots…were taken from a Sitel support engineer’s computer upon which an attacker had obtained remote access using RDP…So while the attacker never gained access to the Okta service via account takeover, a machine that was logged into Okta was compromised and they were able to obtain screenshots and control the machine through the RDP session.”

For attackers to successfully leverage RDP, they must:

  1. Be able to identify the location of the target device—the IP address.
  2. Know that the device can support RDP—Windows devices only.
  3. Have knowledge that RDP is exposed—an open RDP port is not a default setting.

Let’s take a look at each of these in more detail: 

How Can an Attacker Identify Target Devices to Exploit RDP? 

Sophisticated attackers don’t “boil the ocean” in the hope of identifying an open port into a whale like Okta—there are 22 billion connected devices on the internet. In fact, LAPSUS$ is a group with a history of leveraging RDP in their attacks, to the point that they are openly offering cash for credentials to the employees of target organizations if RDP can be installed—quite a shortcut. 
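At the level of a single candidate host, the third requirement above, an exposed RDP port, is trivial to probe. A minimal sketch in Python (RDP listens on TCP 3389 by default; a successful TCP connect only shows the port is reachable, not which service answers, and any hostname passed in would be a placeholder, not a real target):

```python
import socket

def rdp_port_open(host: str, port: int = 3389, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the port succeeds within timeout.

    3389 is the default RDP port; reachability alone does not prove RDP is
    the listening service, but it is the signal attackers scan for.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Scanning the whole internet this way is what "boiling the ocean" means, which is exactly why attackers prefer to narrow the target list first.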

Putting aside the cultivation of an insider threat, attackers would rightly assume a company like Okta is a hard target, and that accessing it via connected third parties would be an easier path to success.

Our team regularly emulates sophisticated threat actor behaviors, so we started by mapping the relationships between Okta and different organizations, including contractors and key customers. Cyber hygiene problems are often far worse for large organizations than individuals, and our methods quickly uncovered data that would be valuable to threat actors. For example, Okta’s relationships with some suppliers are detailed here, which led us to information on Sitel / Sykes in this document. Both are examples of information that can be directly weaponized by motivated attackers.

Two killer insights from these documents:

  1. Sykes, a subsidiary of Sitel, provides external technical support to Okta. 
  2. Sykes uses remote desktop protocol as a matter of policy.

This information makes an attacker’s job easier, and would be particularly interesting to a group like LAPSUS$—an RDP-reliant contractor with direct access to Okta’s systems is a perfect target.

Recon 101: Exploit Weak Operational Security Practices

With a target company identified, we ran a quick search of LinkedIn, which revealed thousands of Sitel employees discussing different levels of privileged access to their customer environments. These technical support contractors are the most likely targets of attacks like the ones catching headlines today. Despite the investigation and negative publicity associated with this attack, more than a dozen Sitel employees are still discussing privileged access in the context of their work with Okta (never mind the dozens of other companies).

Now that we have defined this group, our focus narrows to deep OSINT collection on these individuals—an area where Picnic has substantial expertise. OSINT stands for open-source intelligence: the process by which disparate pieces of public information are assembled to create a clear picture of a person’s life, a company, a situation, or an organization. Suffice it to say that our standard, automated reconnaissance was sufficient to craft compelling pretext-driven attacks for most of our target group.

To cast this virtual process in a slightly different light, imagine a thief casing your neighborhood. Good thieves spend weeks conducting reconnaissance to identify their targets. They walk the streets and take careful notes about houses with obscured entryways, unkempt hedges, security lights and cameras, or valuables in plain sight. 

Social engineers are no different: they are essentially walking around the virtual world looking for indicators of opportunity and easy marks.  

Before we explore how to go from reconnaissance to the hardware exploit, let’s recap:

  1. We are emulating threat actor behaviors before Okta’s breach.
  2. We conducted organizational reconnaissance on our target: Okta.
  3. We identified a contractor likely to have privileged access to the target: Sitel.
  4. We narrowed the scope to identify people within Sitel who could be good targets.
  5. We further narrowed our focus to a select group of people that appear to be easy targets based on their personal digital footprints.

All of this has been done using OSINT. The next steps in the process are provided as hypothetical examples only. Picnic did not actively engage any of the identified Sitel targets via the techniques below—that would be inappropriate and unethical without permission. 

Identifying the Location of the Device for RDP Exploit

There are three ways that attackers can identify the location of a device online: 

  1. Pixel tracking
  2. Phishing
  3. OSINT reconnaissance

Just as we conducted OSINT reconnaissance on people and companies, the same process can identify the location of the target device. By cross-referencing multiple sources of information, such as data breaches and data brokers, an attacker can identify and leverage IP addresses and physical addresses to zero in on device locations. This is always the preferred approach because it carries no risk of exposing the attacker’s actions.
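To illustrate the kind of cross-referencing described above, here is a deliberately simplified sketch. Every record below is a fabricated placeholder (RFC 5737 documentation IPs, example.com addresses); real breach corpora are far larger and messier, but the join-on-a-shared-key logic is the same:

```python
# Illustrative only: fabricated placeholder records showing how two leaked
# datasets might be joined on a shared key such as an email address.
breach_dump = [   # e.g. a leaked forum database: email -> last-seen IP
    {"email": "jdoe@example.com", "ip": "203.0.113.7"},
    {"email": "asmith@example.org", "ip": "198.51.100.22"},
]
broker_listing = [  # e.g. a data-broker record: email -> employer, city
    {"email": "jdoe@example.com", "employer": "Example Corp", "city": "Austin"},
]

def cross_reference(source_a, source_b, key="email"):
    """Merge records from two sources wherever they share the same key value."""
    index = {rec[key]: rec for rec in source_a}
    return [{**index[rec[key]], **rec} for rec in source_b if rec[key] in index]
```

A single join like this turns two individually weak signals into a device location tied to a named, employed person, which is why minimizing each public data source matters.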

Pixel tracking is a common attacker (and marketer!) technique to know when, and importantly where, an email has been opened. For the attacker, this is an easy way to identify a device location. Phishing is similar to pixel tracking: a clicked link can provide an attacker with valuable device and location intelligence, but pixel tracking only requires that an image be viewed in an email client. No clicks necessary. 
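As a concrete illustration of the mechanism: a tracking pixel is just a tiny image hosted on an attacker-controlled server, and the server learns the viewer’s IP address the moment the mail client fetches it. A minimal sketch (the URL path and `id` query parameter are illustrative, not a real tracker):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# A 1x1 GIF: the classic tracking-pixel payload (bytes of a minimal image).
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff!"
         b"\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
         b"\x00\x02\x02D\x01\x00;")

sightings = []  # (client_ip, request_path) recorded each time the pixel loads

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Merely rendering <img src="http://tracker.example/p.gif?id=123">
        # in a mail client triggers this request, revealing the reader's IP
        # (and, via the query string, which recipient opened the message).
        sightings.append((self.client_address[0], self.path))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

    def log_message(self, *args):  # keep the demo quiet
        pass

def serve_pixel():
    """Start the tracker on an ephemeral localhost port; return (server, port)."""
    server = HTTPServer(("127.0.0.1", 0), PixelHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

No click is needed: the image fetch alone leaks the reader’s IP, which is why many mail clients now block remote images by default.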

Pixel tracking and phishing are examples of technical reconnaissance that were more easily thwarted pre-COVID, when employees were cocooned in corporate security layers. With significant portions of knowledge workers still working at home, security teams must contend with variable and amorphous attack surfaces.

For social engineers, this distribution of knowledge workers is an asymmetric advantage. Without a boundary between work life and home life, the available surface area on which to conduct reconnaissance and ply attacks is essentially doubled.

Social engineering’s role in the RDP exploit

According to Okta’s press release, an attacker “obtained remote access using RDP” to a computer owned by Sitel. Based on threat actor emulation conducted by our team and the typical LAPSUS$ approach, it is clear that social engineering played a key role in this attack, most likely via a targeted spear-phishing campaign, outright bribery, or a similar delivery mechanism. That mechanism would have provided attackers not only with the device location information needed for the RDP exploit, but also with important information about the device and other security controls.

Remember that social engineers are hackers who focus on tricking people so they can defeat technical controls. Tricking people is easy when you know something personal about them—in fact, our research indicates attackers are at least 200x more likely to trick their targets when the attack leverages personal information.

The amount of time, energy, and resources required to complete this reconnaissance was significant, but it was made easier by the two key documents found during our initial recon on the target. While there are other breadcrumbs that could have led us down the same path, many of those paths offered less clear value, while these two documents essentially pointed to “easy access this way.” Finding these documents quickly and easily means that hackers are likely to prioritize this attack path over others—the easier it is, the less time and resources it consumes, and the greater the return on effort. 

Key learnings for cyber defenders

Recognize you are at war. Make no mistake about it, we are in a war that is being fought in cyberspace, and unfortunately companies like Okta and Sitel are collateral damage. Just as in a hot war, one of the most successful methods for countering insurgent attacks is to “turn the map around” to see your defenses from the perspective of the enemy. This outside-in way of thinking offers critical differentiation in the security-strategy development process, where we desperately need to change the paradigm and take proactive measures to stop attacks before they happen. I wrote another short article about how to think like an attacker that might be helpful if you are new to this approach.

Be proactive and use MITRE—all of it. The prevailing method used by cyber defenders to map attacker techniques and reduce risk is called the MITRE ATT&CK framework. The design of the framework maps fourteen stages of an attack from the start (aptly called Reconnaissance) through its end (called Impact)—our team emulated attacker behaviors during the reconnaissance stage of the attack in this example. Cyber defenders are skilled at reacting to incidents mainly because legacy technologies are reactive in nature. MITRE recommends a proactive approach to remediating the reconnaissance stage to “limit or remove information” harvested by hackers. Defenders have an opportunity to be proactive and leverage new technologies that expand visibility and proactive remediation beyond the corporate firewall into the first stage of an attack. Curtailing hacker reconnaissance by removing the data hackers need to plan and launch their attack is the best practice according to MITRE. 
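For reference, the framework’s fourteen enterprise tactics run in the order shown below. The small helper is an illustrative sketch, not part of MITRE’s tooling: it lists the stages an attacker has typically already completed by the time defenders detect activity at a given stage, which is why remediating at Reconnaissance pays off most.

```python
# The fourteen MITRE ATT&CK Enterprise tactics, in attack-chain order,
# from Reconnaissance through Impact.
ATTACK_TACTICS = [
    "Reconnaissance", "Resource Development", "Initial Access", "Execution",
    "Persistence", "Privilege Escalation", "Defense Evasion",
    "Credential Access", "Discovery", "Lateral Movement", "Collection",
    "Command and Control", "Exfiltration", "Impact",
]

def stages_already_passed(tactic: str) -> list[str]:
    """Illustrative helper (not MITRE tooling): the stages an attacker has
    typically completed by the time activity is detected at `tactic`."""
    return ATTACK_TACTICS[:ATTACK_TACTICS.index(tactic)]
```

By the time defenders see Execution, everything from Reconnaissance through Initial Access is already done; pushing detection and remediation left toward Reconnaissance is the proactive posture MITRE recommends.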

Get ahead of regulations. Federal regulators are also coming upstream of the attack and have signaled a shift with new SEC disclosure guidance, which requires companies to disclose cybersecurity incidents sooner. Specifically, one key aspect of the new rule touches on “…whether the [company] has remediated or is currently remediating the incident.” New technologies that emulate threat actor reconnaissance can make cyber defenders proactive protectors of an organization’s employees, contractors, and customers long before problems escalate to front page news. These new technologies allow companies to remediate risk at the reconnaissance stage of the attack—an entirely new technology advantage for cyber defenders. 

Every single attack begins with research. Removing the data that hackers need to connect their attack plans to their human targets is the first and best step for companies who want to avoid costly breaches, damaging headlines, and stock price shocks.

Cybercrime awareness is no longer enough to reduce risk

People’s perceptions have changed. Not so long ago we thought nothing of kids playing outside all day alone, unchaperoned visits to a friend’s house, or walking to school alone – the list goes on. But as times have changed, we have become much more vigilant about personal safety. The same can be said for the online world. The majority of us are well aware of cybercrime and are generally on our guard for suspicious emails and websites. Yet despite this everyday vigilance, social engineers find ways to take advantage of our online behavior.

Cybercrime: We are already suspicious

When it comes to business IT security, company leaders generally want to establish a strong cybersecurity culture within their organizations. It’s a very natural thing to do. Human resources department training typically focuses on awareness and highlights typical mistakes that open the doors to a business’ systems and data. It shines a spotlight on what it means to be aware. But conducting security awareness training is not enough to reduce risk completely. Why? The truth is that most people are already “cyber aware.” We have all already formed an opinion on cybersecurity, and whom we trust.

Just think about it. How often do you hear a knock on the door these days, except from an unexpected visitor? A generation ago, a ringing doorbell was nearly cause for celebration; everyone in the house leaped into action in near perfect unison. But people’s attitudes have changed. We are now not just suspicious, but actually distrustful, of people knocking on our door. We are conscious that not everyone who calls at the door nowadays is legit. This wariness is born of the fact that we are aware of the many door-to-door scams, or have been victims of cold callers ourselves. Besides, thanks to smartphones, we already know in advance if someone is dropping by – anyone else is considered an uninvited caller. In this way, the escalation of increasingly invasive marketing and social networking manipulation, coupled with technology that makes us easier to track and easier to target, has driven a culture-wide sense of security awareness.

The same can be said for cybersecurity. Nearly everyone is aware of the classic Nigerian 419 scam: in exchange for a few thousand dollars up front, email recipients are promised several million in return. Word spread years ago that this, and many others like it, was a scam, and people now ignore such basic scams out of habit. Like the bogus salesmen calling at the door, we already have a heightened sense of awareness, causing us to be more cautious.

Cybersecurity training: Awareness alone doesn’t solve the problem

There is no question that awareness of cybersecurity is high now and has been for a couple of years – and that’s a good thing. The problem is that while cybersecurity training within an organization is well intentioned, it is solely invested in creating awareness. At this point, however, we are way past awareness. People are already suspicious of bogus emails, SMS messages, and calls.

The real focus should be on the personal attack surface, i.e. the aforementioned data that makes us easier to track and to target. Attention needs to be given to the significance of personal information, the sharing of it, and how to defend it. While we are “aware” that cybercrime exists, many of us may not fully understand the implications of actions that open the door to it. This is partially why social engineering attacks and other large-scale data breaches are often so successful – you only need to look at the stats.

A 2017 Tenable survey found that nearly all participants were aware of security breaches. What the survey also revealed was that many admitted to taking few precautions to protect their personal data and to leaving their security habits unchanged in the face of a public threat. Not surprisingly, another study from Stanford University and security firm Tessian revealed that nearly nine in ten (88%) data breach incidents are caused by employees’ mistakes – and costly ones at that. In 2020 alone, data breaches cost businesses an average of $3.86 million.

So, what, in light of this, are the best steps to start mitigating risk?

Reduce Employee Burden: Recognition of a person’s attackable surface

When it comes to reducing risk through employee training, businesses need to recognize that many people fall into one of two categories:

  1. There are those who are very concerned about personal data security. This cohort wants to keep their data safe and does not want anyone “messing” with their personal information. They are already very much engaged with cybersecurity – they are not the problem.
  2. Then there are those who are the reverse. They are not interested in cybersecurity. They are aware, but they don’t feel at risk, and as such are not willing to spend effort on it.

Trying to “convert” the second group of employees into champions of cyber hygiene can be, for want of a better phrase, a waste of time. Until you can put cybersecurity into personal terms for each person, it is nearly impossible to change entrenched habits and opinions.

However, if you can pinpoint which extra-professional avenues of attack are most likely for an individual’s data profile, you may be able to make progress against this skepticism. It’s about recognition of a person’s attackable surface. Concern for one’s own personal safety will always trump concerns for company safety. Or, put in analog terms, you don’t have to convince suspicious people not to answer the phone; you need to convince them not to publish their phone number in the first place. The smarter everyone is about his or her personal data, the more secure the company will be.

Security awareness training is a common corporate exercise – but it is no longer enough to reduce risk. By empowering your employees to safeguard their own digital footprints – along with company data – you can start to develop truly formidable foes of cybercrime.

Cybersecurity is a new HR benefit

Cybersecurity has traditionally been seen as a job for IT departments – and most employees assume that cybersecurity is simply a technical issue. But an examination of current threat types shows that social engineering attacks on employees are now a major concern for corporate security. However, protecting employees from social engineering attacks means protecting the whole person – at work and at home. The challenge becomes drawing the line between what is corporate and what is personal. Innovative Human Resources (HR) departments have a solution: cybersecurity can be a gift to employees, not unlike health insurance. This new benefit further underlines HR’s important role in promoting a healthy corporate culture…including cybersecurity.

Cybersecurity – The role of HR in mitigating risk

It is estimated that cybercrime costs the global economy nearly $3 million per minute, with 27% of all cyberattacks resulting from employee errors. Many companies are aware that employees are the weakest link in an organization’s cybersecurity. Nine times out of ten, the breach is unintentional. Yes, you might get the odd disgruntled employee, but more often than not, employee negligence is the primary source of data breaches. From falling for phishing to accidentally installing malicious apps and using unsecured networks, the variety and prevalence of cyber-traps are growing daily. Even common behaviors that seem trivial – shared passwords, lax BYOD habits, remote working, and leaving devices lying around – can all lead to loss of data or even large sums of money.

Since people are a key factor in many cybersecurity-related issues, HR should be involved to minimize the risk. Why? HR is uniquely equipped to humanize and promote security within an organization. Whether it’s through the onboarding process, providing security guidelines, or educating employees, the HR department can cover the majority of cybersecurity threats – and your company will be much safer for it. “HR leaders can engage employees in recruitment, culture, and education to boost awareness and adoption of new policies to help IT teams develop a ‘human firewall’ for your organization, turning employees – your greatest security threat – into your greatest asset,” says Marcy Klipfel of Businessolver.

Some forward-thinking companies already employ the skills and insight of their HR teams to enhance risk mitigation. But as the digital footprint of an individual continues to grow like a ripple effect, and the lines continue to blur between personal and business use of technology, modern cybersecurity requires more than firewalls, antivirus, and HR policies. If a business is serious about protecting itself and its employees, it’s time to start thinking about offering cybersecurity as an HR benefit.

Cybersecurity as an HR benefit

We live in a digital era and, as such, it’s likely that most, if not all, of your employees have a digital footprint. This is normal. Daily, most of us engage in some form of online activity, such as photo sharing, online dating, banking, shopping, gaming, and social/professional networking. Like it or not, these all add to one’s digital footprint. And that’s not all. Others may post photos or information about us online. And then there are search engine histories, smartphone geolocation data, etc.

While an individual’s growing online digital footprint and relentless tracking of all their thoughts and data might not be a problem to them, it may be exploited by those with malicious intent. What your employees do and say online, or how they use digital devices, can make them and your organization vulnerable to a range of security threats. Most hackers are just looking for that one right chance and an employee’s online activities can create an ideal passageway into your company, potentially resulting in unintended, or even catastrophic, consequences.

Unplugging yourself or an employee from the rest of the world is not really an option. But what is an option is that your company can help protect its employees – while protecting itself. While it’s a novel concept, data hygiene management should now be considered the newest employee benefit. Like a person’s health, if things go bad, cybercrime can be very costly for the individual. Like health insurance benefits, cybersecurity benefits reduce the financial risk and give peace of mind.  

Future of cybersecurity

The biggest challenge for HR is explaining the threat of social engineering to individuals without being perceived as “Big Brother.” Employees can be very protective of their privacy, though at the same time they may not be very aware of the vulnerability of their personal digital footprint. But everyone is susceptible to cyberattacks, and the impact can be severe for both individuals and their employers. The perceived value of cybersecurity as an HR benefit will only increase with time – and with the prevalence of cybercrime. Prescient employers are making moves now to bolster their cybersecurity culture and offer a competitive benefit that will be attractive to employee candidates.

Social engineering: Opportunity makes the thief

It is understandable that, when cybercrime happens to you, you can feel like you were targeted. And you certainly might be correct. However, more often than not, you weren’t originally the target at all. You just provided the best opportunity to the criminal. In most cases, social engineering involves an opportunistic attack that doesn’t – initially – target anyone in particular. Instead, attackers search broadly for weaknesses or vulnerabilities that they can use to mount a more in-depth attack. If they snare a victim in their net, they can then go to work.

It’s nothing personal

Unwanted messages and calls bombard nearly all of us on a regular basis. For most, these solicitations via junk mail, spam email and robocalls are just incredibly annoying – even eyeroll-inducing. Most of the time, we simply hit ignore, mark as spam, delete or toss the junk mail in the rubbish, knowing that these messages are most likely so-called mass-market scams. Many people are surprised by the amount of junk or spam they receive, especially because so many of the scams are so obviously illegitimate. But the reason you still get emails from a Nigerian prince offering cash out of the blue is that people continue to fall for such stories. Not in huge numbers, but a few. And that’s all it takes to make a profit.

Opportunistic attacks are not personalized to their victims and are usually sent to masses of people at the same time. The attackers are akin to drift netters, casting their nets “out there” – whether ransomware, spyware or spam – and seeing what comes back. The aim is to lure and trick an unsuspecting victim into revealing as much information as possible via SMS, email, WhatsApp and other messaging services, or phone calls. The motive is primarily financial gain. They just want money. They don’t have a vendetta against a particular person or company. It’s a virtually anonymous process.

Phishing scams: Opportunity makes the thief

The Nigerian prince story is on the lower end of the scale in terms of a convincing narrative. However, the grammatical errors and simplicity of these attacks are actually intentional: they serve as a filter. By weeding out the “smart” responders, attackers refine their list, allowing them to target their remaining victims more strategically. But have you ever stopped to ask yourself why you got the email in the first place? Spam may be a fact of life, but you are probably getting unwanted attention because you have a wide personal “attack surface.”

Our digital footprint is more public than we would ever imagine. Every time we perform an online action, there is a chance we are contributing to its expansion. So, while you and I might recognize that the Nigerian princes of the world are not genuine, more sophisticated and successful attacks are also in circulation. If you have a large and messy digital footprint, you are putting yourself on the opportunist’s radar and are in line to receive more refined and authentic-looking queries.

Because cybercriminals are continuously devising clever ways to dupe us in our personal lives, it is just as easy for them to hoodwink employees into handing over valuable company data. In fact, according to Verizon’s Data Breach Digest, 74% of organizations in the United States have been the victim of a successful phishing attack. Fraudsters know that the way to make a quick buck isn’t to spend months attempting to breach an organization’s security; it’s simply to ask nicely for the information they want so they can walk right through the front door.

Opportunity amid a pandemic

Because social engineering opportunists tend to capitalize on exposed vulnerabilities, the pandemic created ideal conditions for exploiting businesses and corporations. Less than a month into the pandemic, phishing emails spiked by over 600% as attackers looked to capitalize on the stress and uncertainty generated by Covid-19. Businesses forced into remote work became more susceptible to opportunists. The pandemic changed the attack surface. As researchers put it, “… security protocols have completely changed – firewalls, DLP, and network monitoring are no longer valid. Attackers now have far more access points to probe or exploit, with little-to-no security oversight.”

To mitigate risk, focus on both threat and vulnerability

The standard corporate security structure is optimized to handle specific, targeted attacks on corporate assets. Unfortunately, social engineering is often overlooked precisely because of its non-specific nature. An opportunistic attack requires only the unwitting cooperation of an employee who was not specifically targeted but self-selected simply by clicking on a link.

Social engineering may be even more dangerous in our pandemic-driven distributed work environments. Corporate and personal spheres overlap more than ever, giving social engineering opportunists more footholds into our confidential lives, both private and corporate. Individuals and corporate security leaders alike would do well to shift greater focus toward vulnerability reduction, leaving social engineers less opportunity.