Essential Guide to Open-Source Intelligence (OSINT)

Exploitation through publicly available information is the single largest threat to companies and their people today.

Known as Open-Source Intelligence, or OSINT, this public data reveals to hackers how they can compromise human targets via social engineering attacks and defeat the most powerful technical solutions.

The bad news for organizations is that the internet makes it easy for attackers to find information about them and their employees to craft convincing attacks.

The good news is that enterprise security teams can also use OSINT for defensive purposes in order to level the playing field and prevent attacks. With companies recognizing the important role this data plays, the global demand for OSINT tools is on the rise, with research predicting a market growth rate of 28.33% between 2022 and 2030. Fortunately, companies can now automatically harness OSINT like never before to protect their people and their assets.

We’ve created this e-book to explain OSINT, how it’s used, and how security professionals can use Picnic’s powerful new technology to take the advantage away from threat actors.

What you’ll learn:

  • What OSINT is
  • The history of OSINT
  • How people collect OSINT
  • The most-used OSINT tools
  • The information people can find with OSINT
  • How cybercriminals use OSINT for social engineering
  • How cybersecurity teams can use OSINT

What is Open-Source Intelligence (OSINT)?

Open-Source Intelligence (OSINT) is information available through public data sources that someone can collect and analyze.

People can engage in OSINT gathering legally using tools that find data on:

  • the “surface web,” including search engines, blogs, and job postings
  • social media
  • databases containing public records

Additionally, malicious actors often use specialized intelligence tools and search engines for finding information on the dark web.

What is the history of OSINT?

Gathering OSINT is not a new phenomenon. However, the information available and the search processes have changed, especially as more people share data on the internet.

During World War II, the Office of Strategic Services established the first Research and Analysis Branch dedicated to collecting OSINT and using it for the war effort. Since then, global military and intelligence services have used publicly available data for their operations.

In the late 1980s, the US military first used the term OSINT, noting its tactical battlefield value. During the 1990s, OSINT became even more important to the US intelligence community: the 1992 Intelligence Reorganization Act recognized publicly available information as a valuable intelligence source, and in 1994 the CIA established the Community Open-Source Program Office (COSPO).

As the internet became more accessible, so did OSINT. From government websites with public data to social media networks, almost anyone can now search publicly available data legally and ethically.

Outside the confines of legality and ethics, threat actors use sophisticated tactics to gather data. For criminals, the definition of “public” also includes the dark web where malicious actors share stolen, otherwise-nonpublic personal information like credit card numbers, passwords, and social security numbers.

How do people collect OSINT?

Since OSINT focuses on publicly available information, people can find it using paid and unpaid search methods. Further, their processes can be as simple as a Google search or as complex as creating a specialized tool.

Surface Web
The surface web is the internet that most people use. It’s easy for the general public to search using standard search engines.

Search Engines
When people want to find information, they usually start with generally available search engines. Most people are familiar with how these work. Google’s search engine has become synonymous with looking up facts and data.

  • Google
  • Bing
  • Yahoo!
  • DuckDuckGo
  • Startpage

Blogs
Blogs are regularly updated websites or web pages that people and organizations use to inform readers. An organization’s blog might try to educate readers about topics related to its products or services. A personal blog often shares stories about someone’s interests, like hobbies, books, music, television shows, or movies.

Job Postings
Most companies list job postings on their websites so that interested applicants can find them. Because these postings are public, researchers can also mine them to:

  • Locate corporate offices
  • Find Human Resources contacts

Social Media
People and companies increasingly use social media. Many companies have social media marketing strategies that they use to make important announcements, like when they hire a new senior executive or acquire a new company. Similarly, people often share personal stories and information on social media sites.

For example, LinkedIn enables organizations to create digital business networks. However, because this information is shared publicly, it becomes an OSINT source. And since LinkedIn is a career-focused social media site, people may be more “trusting” and open to connecting with others.

Some examples of OSINT gathering on LinkedIn include searching by company name for job roles like:

  • Chief executive officer
  • Chief financial officer
  • Account executive

Someone could do a search for account executives at an organization, look at their connections, and then find a senior leadership team member’s information.

Data Brokers/People Search Engines
Data brokers collect and sell personal or corporate data. While they often use public records to aggregate this information, they can also source it privately. These paid services pull data from multiple locations that can include:

  • Census records
  • Electoral rolls
  • Social media
  • Court reports
  • Purchasing history

Some examples of data brokers and people search engines include:

  • PeopleFinderFree
  • Truthfinder
  • Spokeo
  • US Search
  • Whitepages

Custom Search Engines
More technical researchers can build custom search engines. With a custom search engine, a researcher can collect OSINT across multiple social media websites or filter searches by file type.

For example, the Google Programmable Search Engine is a platform that lets web developers embed Google search capabilities in their own websites. Researchers, however, can use this functionality to search across specific websites and automate follow-on actions. When engaging in OSINT, a researcher might create a custom search engine that searches several social networks simultaneously and isolates each network’s results in its own tab. This streamlines the process, letting them use the collected data more effectively and efficiently.
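
As a rough sketch of what this looks like in practice, the Python snippet below queries a Programmable Search Engine through Google’s Custom Search JSON API using the requests library. The API key, engine ID, and example query are placeholders you would supply from your own Google configuration; error handling is kept minimal.

```python
# Minimal sketch: querying a Google Programmable Search Engine through the
# Custom Search JSON API. API_KEY and ENGINE_ID are placeholders created in
# the Google Cloud and Programmable Search Engine consoles.
import requests

API_KEY = "YOUR_API_KEY"      # placeholder
ENGINE_ID = "YOUR_ENGINE_ID"  # placeholder ("cx" value)

def cse_search(query: str, num: int = 10) -> list[dict]:
    """Return title/link/snippet for each result of a custom search."""
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": ENGINE_ID, "q": query, "num": num},
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {"title": i["title"], "link": i["link"], "snippet": i.get("snippet", "")}
        for i in resp.json().get("items", [])
    ]

# Example: an engine scoped to a handful of social networks
for result in cse_search('"Jane Doe" site:linkedin.com OR site:twitter.com'):
    print(result["link"])
```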

Specialized Search Engines
Specialized search engines enable researchers to expand their data collection. These provide search options and capabilities that typical search engines lack.

Some examples of specialized search engines include:

  • Wayback Machine: serves cached website data, providing historical snapshots of pages
  • Searx.me: exports results and helps preserve researcher anonymity
  • Exalead: searches unstructured data to find documents and audio files, including papers and webinars

Caller ID Databases
Caller ID databases enable people to do reverse lookups on phone numbers. While these traditionally only worked for landlines, more databases now provide services for cellular phones. When researchers input a known telephone number, they can retrieve data like:

  • Country
  • Name
  • Carrier name
  • Carrier type
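
While commercial caller ID databases perform live reverse lookups, a researcher can get a first approximation offline with the open-source phonenumbers library. This is a minimal sketch with a made-up example number; because the lookup is prefix-based, it reflects the number’s original region and carrier, not live subscriber data.

```python
# Minimal sketch: offline metadata for a phone number using the open-source
# "phonenumbers" library (pip install phonenumbers). Results are prefix-based,
# so they show the original carrier/region rather than live subscriber data.
import phonenumbers
from phonenumbers import carrier, geocoder

raw = "+1 202 555 0143"  # made-up example number
number = phonenumbers.parse(raw, None)

print("Valid:   ", phonenumbers.is_valid_number(number))
print("Region:  ", phonenumbers.region_code_for_number(number))    # e.g. "US"
print("Location:", geocoder.description_for_number(number, "en"))
print("Carrier: ", carrier.name_for_number(number, "en"))
```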

Third-Party Data Breaches
Whether researching legally or illegally, people can find public databases containing information about compromised email addresses and the passwords associated with them.

For example, cybercriminals often post this information on websites like Pastebin. Further, in response to increased data breaches, ethical services now exist, including:

  • Have I Been Pwned
  • Spycloud
  • Scylla
  • Leaked Source
  • Ghost Project
  • PSBDMP

While researchers need an email address to query these services, the results provide valuable information by:

  • Confirming that an email address is valid
  • Providing insight into the breach that compromised the email
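
As an illustration, the sketch below checks an address against the Have I Been Pwned “breachedaccount” endpoint (API v3). A paid API key is required; the key and email address shown are placeholders.

```python
# Minimal sketch: checking an email address against the Have I Been Pwned
# v3 "breachedaccount" API. The API key and email address are placeholders.
import requests

HIBP_API_KEY = "YOUR_HIBP_API_KEY"  # placeholder

def breaches_for(email: str) -> list[str]:
    """Return the names of breaches that include this address, if any."""
    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{email}",
        headers={"hibp-api-key": HIBP_API_KEY, "user-agent": "osint-demo"},
        timeout=10,
    )
    if resp.status_code == 404:   # address not found in any indexed breach
        return []
    resp.raise_for_status()
    return [breach["Name"] for breach in resp.json()]

print(breaches_for("someone@example.com"))
```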

Since cybercriminals are not held to legal and ethical research requirements, they often download dumps of publicly available and stolen databases, then run the data through analytics tools. If they find a username and password for one service, like LinkedIn, they can try those credentials to gain access to a corporate environment.

Custom Tools
Gathering OSINT from all these diverse locations manually isn’t efficient, so researchers often create or leverage custom tools. With these tools, they can rapidly search across all potential locations and search engines at once.
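
The shape of such a tool is simple: fan one query out to many sources and merge the answers. The sketch below is purely hypothetical; the lookup_* functions are stand-ins for whatever search engine, data broker, or breach-data APIs a researcher actually has access to.

```python
# Hypothetical sketch of a custom OSINT aggregator. The lookup_* functions are
# placeholders for real data sources (search engines, data brokers, breach
# repositories); each takes a query string and returns a list of findings.
from concurrent.futures import ThreadPoolExecutor

def lookup_search_engine(query):   # placeholder source
    return [f"search-engine hit for {query}"]

def lookup_social_media(query):    # placeholder source
    return [f"social-media hit for {query}"]

def lookup_breach_data(query):     # placeholder source
    return [f"breach-data hit for {query}"]

SOURCES = [lookup_search_engine, lookup_social_media, lookup_breach_data]

def aggregate(query: str) -> list[str]:
    """Run every source in parallel and merge the results into one list."""
    with ThreadPoolExecutor(max_workers=len(SOURCES)) as pool:
        result_lists = list(pool.map(lambda source: source(query), SOURCES))
    return [item for results in result_lists for item in results]

print(aggregate("jdoe@example.com"))
```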

Dark Web
What people call the dark web is really internet traffic directed through the Tor network, which conceals users’ locations and network usage. This anonymity makes it difficult to trace activity back to a user, and it extends to websites hosted on the network. Criminal activity thrives on the Tor network because these sites are not reachable or indexed on the publicly viewable web.

Download your free copy of Picnic’s OSINT eBook

What Are the Most-Used OSINT Tools?

While threat actors may build their own tools, many ethical researchers leverage pre-existing research tools. Below are some of the OSINT tools often used to uncover publicly available data about people and technologies.

Maltego
Focused on discovering relationships, Maltego gathers data like:

  • Names
  • Email addresses
  • Aliases
  • Companies
  • Websites
  • Document owners
  • Affiliations

It uses several common public information sources, including:

  • DNS records
  • Whois records
  • Search engines
  • Social networks

Then, it provides charts and graphs that uncover the connections between the data points.

Mitaka
Mitaka enables people to research using their web browsers. With the ability to search across more than seventy search engines, it returns information like:

  • IP addresses
  • Domains
  • URLs
  • Hashes
  • ASNs
  • Bitcoin wallet addresses
  • Indicators of Compromise (IoCs)

Spiderfoot
A free tool, Spiderfoot is an application that red teams often use during their reconnaissance activities. Some information that it returns includes:

  • IP addresses
  • CIDR ranges
  • Domains and subdomains
  • ASNs
  • Email addresses
  • Phone numbers
  • Names and usernames
  • Bitcoin addresses

Spyse
Focused on detecting internet assets, Spyse collects and analyzes publicly available data about:

  • Websites
  • Website owners
  • Servers associated with websites
  • Internet of Things (IoT) devices

BuiltWith
BuiltWith provides information about a website’s technology stack and platform. For example, it generates information that includes:

  • Content management system (CMS), like WordPress, Joomla, or Drupal
  • Javascript/CSS libraries, like jQuery or Bootstrap
  • Plugins installed
  • Frameworks
  • Server information
  • Analytics and tracking information

Intelligence X
As an archival service and search engine, Intelligence X enables researchers to obtain historical versions of webpages and leaked data sets, including controversial content.

Some examples of the data that Intelligence X retains include:

  • Lists of compromised VPN passwords exposed on cybercriminal forums
  • Indexed data collected from political figures’ email servers
  • Information from social media site data leaks

Ahmia
Ahmia enables dark web research by making Tor results visible without requiring users to install the Tor browser. However, researchers still need the Tor browser to open the links in those results.

DarkSearch.io
As of January 2022, this service is available only to organizations who request private access. The platform allows researchers to run automated searches of the dark web without requiring them to use .onion versions or install the Tor browser.

Grep.app
Grep.app focuses on git repositories, providing a single search across:

  • GitHub
  • GitLab
  • BitBucket

People use it when searching for code strings associated with:

  • IoCs
  • Vulnerable code
  • Malware

Recon-NG
Recon-NG is a Python-based tool that enables researchers to automate redundant, manual tasks. It offers:

  • Independent modules
  • Database interaction
  • Built-in functions for convenience
  • Interactive help
  • Command completion

Creepy
Another Python-based technology, Creepy is a geolocation OSINT tool that collects data from various online sources, including social media and image hosting sites. Users can:

  • Create maps
  • Filter searches based on exact location and/or date
  • Export data

theHarvester
With theHarvester, users can search for:

  • Emails
  • Subdomains
  • IP addresses
  • URLs

It offers both passive search and active DNS brute-forcing capabilities.

Shodan
Shodan is a search engine that both security teams and threat actors use to discover internet-connected devices and services.

The Shodan suite of products includes:

  • Search engine
  • Monitor to track devices
  • Maps
  • Collection of screenshots
  • Collected historical data
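
For teams that automate this, Shodan publishes an official Python library. The sketch below assumes an API key (placeholder shown) and an example query; note that search filters such as port: and org: generally require a paid plan.

```python
# Minimal sketch: querying Shodan with its official Python library
# (pip install shodan). The API key, organization name, and query are
# placeholders; search filters typically require a paid API plan.
import shodan

api = shodan.Shodan("YOUR_SHODAN_API_KEY")  # placeholder key

# Example: look for hosts exposing RDP that Shodan attributes to an organization
results = api.search('port:3389 org:"Example Corp"')

print("Total results:", results["total"])
for match in results["matches"][:10]:
    print(match["ip_str"], match["port"], match.get("org", "unknown org"))
```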

TinEye
TinEye is a reverse image search tool that allows researchers to upload images or submit URLs. With a reverse image lookup, someone can discover where else a picture appears online and use that context to identify a physical location.

Metagoofil
With Metagoofil, researchers can scan a domain’s documents and uncover the metadata. The tool provides information about files like:

  • PDFs
  • Word Documents
  • Excel Spreadsheets
  • PowerPoint Presentations

The metadata, or “data about data”, can include information such as:

  • User names
  • Email addresses
  • Printers
  • Software
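
Metagoofil automates this at the scale of a whole domain, but the underlying idea is easy to demonstrate. The sketch below, which is an illustration rather than Metagoofil itself, pulls the same kind of metadata from a single local PDF using the pypdf library; the file path is a placeholder.

```python
# Illustration only (not Metagoofil): extracting document metadata from a
# local PDF with the pypdf library (pip install pypdf). The path is a
# placeholder for a file harvested from a target domain.
from pypdf import PdfReader

reader = PdfReader("downloaded_report.pdf")  # placeholder path
meta = reader.metadata

# Author/creator fields frequently leak internal usernames and software versions
print("Author:  ", meta.author if meta else None)
print("Creator: ", meta.creator if meta else None)
print("Producer:", meta.producer if meta else None)
print("Title:   ", meta.title if meta else None)
```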

What information can people find with OSINT?

While all OSINT information is publicly available, most people may not realize what is out there about them and how someone can find it. Even people who think they have a limited digital footprint would be surprised at what OSINT researchers can uncover.

Email Addresses
Today, most people have at least one personal and one professional email address. According to research, 90% of Americans have an email address, averaging 1.75 email addresses each. Typically, people use their email addresses to:

  • Log into social media
  • Access work resources
  • Use ecommerce applications
  • Register for media, like news, professional publications, and streaming services

Usernames
To maintain consistency, many people use the same username across different online services. For example, someone whose email address begins with jdoe might also use jdoe as a social media handle. These are also the same types of usernames that corporations use when generating user IDs. With this information, cybercriminals can try to connect known usernames to compromised passwords as part of credential-based attacks.

Addresses
Personal and professional addresses are easily discoverable. On its own, an address may not impact cybersecurity. However, when aggregated with a name or IP address, ethical and criminal actors can use the information to build a relationship with a target.

Phone numbers
When researchers collect and aggregate OSINT, phone numbers become even more valuable. When connecting a person’s name and phone number, someone can spoof, or create a fake version of, that phone number as part of an attack. For example, when a smishing attack sends a text message that appears to come from a trusted contact, the target is more likely to take the action that the attacker requests.

IP Addresses
When someone obtains an IP address, they can run a reverse lookup that reveals a lot of information about the server hosting a domain, including:

  • City
  • State
  • Zip code
  • Open ports
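
Some of this is possible with nothing more than the Python standard library, as in the sketch below. Geolocation details such as city, state, and zip code normally require a third-party database or API, so only reverse DNS and a simple port probe are shown; the IP address is an example value.

```python
# Minimal sketch: basic reverse lookup and port probe using only the Python
# standard library. The IP address is an example; city/state/zip lookups
# would require a separate geolocation database or API.
import socket

ip = "93.184.216.34"  # example IP address

# Reverse DNS: map the IP back to a hostname, if a PTR record exists
try:
    hostname, _, _ = socket.gethostbyaddr(ip)
except (socket.herror, socket.gaierror):
    hostname = "no PTR record"
print("Hostname:", hostname)

# Probe a few common ports to see which services appear exposed
for port in (22, 80, 443, 3389):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        state = "open" if s.connect_ex((ip, port)) == 0 else "closed/filtered"
    print(f"Port {port}: {state}")
```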

Free threat exposure report

See how a social engineer is most likely to contact you along with how an attacker might attempt to compromise you with Picnic’s free threat exposure report—CheckUp Light.


How do cybercriminals use OSINT for social engineering?

The first step to a successful social engineering attack is to gain a target’s trust or buy-in. People may be skeptical enough to ignore an email from a Nigerian prince, but they’re far less likely to ignore an email from their boss or human resources department.

Cybercriminals leverage OSINT so that they can build their attacks around information that will prompt someone to take an action that’s against their best interests. Further, cybercriminals collect and correlate various data types so that they can build out robust attacks. They rarely just use one type of data, like an email address.

Email Attacks
Phishing, spear phishing, and whaling are all typically email-based social engineering attacks. However, they use OSINT in subtly different ways.

Phishing
With a phishing attack, cybercriminals send out high volumes of fake emails, pretending to come from a legitimate entity. In this case, they really only need the email domain of the entity they want to impersonate.

For example, in a sophisticated attack targeting Office 365 credentials, cybercriminals imitated the domain for the US Department of Labor. They created domains like dol-gov.com, using a legitimate dol.gov domain for replies. The emails sent fake bidding instructions with a PDF that redirected the target to a phishing site where the criminals collected credentials.

Spear Phishing
With a spear phishing attack, cybercriminals might start by doing a LinkedIn search to find someone new to an organization in a high-visibility position, like a Chief Executive Officer (CEO). Once the cybercriminals have this information, they can search LinkedIn for people who will work directly with the new CEO. 

They find the organization’s domain and make a fake, or spoofed, version of it. For example, fakecompany.com would become fakecompany.io. With this fake domain, they create a form that hides the “.io” so that it looks like it comes from the organization’s legitimate domain.

Building on this, they can then find examples of past statements that the new hire made and use them for the email’s text. They email the form to the targets that they found on LinkedIn, requiring them to supply login credentials when they complete it.

Between 2013 and 2015, cybercriminals used a spear phishing attack to steal $100 million from Google and Facebook. In this case, they created a fake computer manufacturing company, then sent invoices to targeted employees under the guise of being the legitimate services provider. Instead of paying the real provider, the companies directed the deposits to the cybercriminals’ bank accounts.
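
Defenders can turn this spoofed-domain trick around by watching for lookalike registrations of their own domain. The toy sketch below, loosely modeled on what tools like dnstwist do at much larger scale, generates a few obvious variants of the example domain from the text and checks whether they resolve; the variant rules are illustrative assumptions only.

```python
# Toy sketch: enumerate a few obvious lookalike variants of a domain and check
# whether anyone has registered them. Real tooling (e.g., dnstwist) generates
# far more permutations; the domain below is the example used in the text.
import socket

def lookalike_candidates(domain: str) -> set[str]:
    name, _, tld = domain.rpartition(".")
    # Alternate top-level domains, e.g. fakecompany.com -> fakecompany.io
    variants = {f"{name}.{alt}" for alt in ("io", "co", "net", "org") if alt != tld}
    # Missing-letter variants, e.g. fakecompany.com -> fakcompany.com
    variants |= {f"{name[:i]}{name[i + 1:]}.{tld}" for i in range(len(name))}
    return variants - {domain}

def resolves(domain: str) -> bool:
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False

for candidate in sorted(lookalike_candidates("fakecompany.com")):
    if resolves(candidate):
        print("Registered lookalike:", candidate)
```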

Vishing
Also called “phone phishing” or “voice phishing,” vishing is an attack in which cybercriminals call their targets directly. During a vishing attack, cybercriminals will often incorporate pretexting, creating a situation that lures the target into taking action.

Many cybersecurity awareness training modules include pretexting scenarios where someone calls a new employee, pretending to be from human resources. For this attack to work, cybercriminals need to do their OSINT research.

For large organizations that might have upwards of 100 global new hires per week, this scenario provides cyber attackers a significant return on investment. To be successful, attackers need a few different types of OSINT data. First, they need to find people on LinkedIn who recently announced that they joined an organization. Next, they need to find the VoIP data for the organization’s phone system so that they can spoof it. Then, they create a fake HR portal that sends data directly to them. They call the new employees, telling them that, to get paid, they need to confirm payment data by clicking on a link sent while they are on the phone. When the targets enter their credentials, the cybercriminals collect them.

In 2020, attackers compromised 130 Twitter accounts with a vishing attack. Twitter classified this as a phone spear phishing attack, saying that cybercriminals called employees and tricked them into revealing account credentials.

How OSINT Enables Cybersecurity Teams

The good news for organizations is that their security teams can also use OSINT. The information itself is benign. The danger or benefit comes from how someone uses it.

When organizations use OSINT to protect themselves, they can follow the same processes as threat actors. When security teams have access to the same publicly available information that malicious actors have, they can mitigate risk by reducing their digital footprint or implementing additional security controls.

Discover Public-facing Assets
Most security teams leverage OSINT to detect assets connected to the public internet. For example, many security teams use Shodan to detect IoT devices so that they can implement controls or protections.

Locate Information Outside Organization Boundaries
Sometimes, employees share information on social media without realizing that even a little personal information can enable an attack that leads to a breach.

For example, an employee might list their telephone number on LinkedIn. With this information, skilled attackers can implement a successful vishing or smishing attack that could compromise both the personal and corporate accounts of the employee.

When security teams have visibility into this risk, they can implement preventative measures that reduce risk, in this case working with the employee to remove the phone number before it can be leveraged in a social engineering attack.

Identify External Threats
When security teams have OSINT tools, they can monitor dark web forums for stolen credentials that compromise the organization’s security.

According to research, 70% of users tied to breach exposures from 2021 or earlier were still reusing the exposed credentials. Further, more than two out of three people use the same passwords across multiple accounts, meaning a compromised personal password could impact someone’s professional login credentials.

Security teams that can find and link employee personal and professional leaked credentials can use this information to make sure these credentials are no longer being used.

Enhanced Penetration Tests
Penetration tests look for weaknesses in an organization’s security program. As part of this process, penetration testers start with the reconnaissance phase to map out the attack surface of the target. This involves running OSINT, looking for accidental sensitive information leaks across social media, data brokers, and other publicly available data locations. Then they leverage this information to aid their ethical social engineering attacks.

With regular OSINT monitoring, security teams can reduce the number of findings by proactively identifying and mitigating these risks.

Design Adversary Emulations
When security teams engage in adversary emulations, they follow threat actor tactics, techniques, and procedures (TTPs) to test their defensive controls.

For example, when security teams want to emulate a remote desktop protocol attack, they need to follow the same steps that attackers do. Many security teams focus on the steps that attackers take once they gain access to systems because they lack the OSINT visibility to emulate attackers’ social engineering and credential theft capabilities.

When security teams can effectively obtain publicly available data, like information employees post on social media, they can create more realistic emulations. By identifying employees that attackers might target, they can implement controls that proactively address these risks.

Picnic: Automated OSINT Monitoring and Remediation for Enhanced Cybersecurity
Picnic is the first technology platform that allows organizations to fully and automatically harness OSINT for defensive purposes.

The platform provides enterprise security teams with the capability to instantly emulate attacker reconnaissance on the entire OSINT footprint of their organization and its people across the surface web, social media, data brokers, breach repositories, and the deep and dark web. At the same time, Picnic’s technology continuously hunts and flags any exposed data and PII that would be of value to threat actors, identifies likely human targets and pathways to compromise, streamlines external data footprint cleansing, and enhances existing security controls to prevent attacks.

Since attackers have OSINT exposure too, Picnic also monitors for suspicious domains and other attacker infrastructure before these can be leveraged against an organization’s people.

With these preemptive and continuous capabilities, organizations gain an unprecedented level of visibility and control over their OSINT footprint and can substantially reduce a threat actor’s ability to use OSINT successfully against them.

Picnic’s technology marks a decisive moment in the history of OSINT, as it takes away the asymmetrical advantage threat actors have had until now.

Attackers need OSINT to craft their attacks. The public data vulnerabilities revealed during a cybercriminal’s reconnaissance are ultimately what lead to phishing, credential compromise, ransomware, malware, and the like.

Picnic’s platform addresses this problem head-on by providing enterprises and their people with the power to automatically know the full extent of their OSINT exposure, proactively remediate their human risk, and preemptively neutralize the pathways to compromise that their public footprint reveals. In this way, they can detect and prevent attacks before they happen on a scale not previously possible.

SANS FIRST LOOK WHITEPAPER ON PICNIC

SANS First Look Report

Jeff Lomas of SANS discusses the importance of knowing your attack surface from the outside in and how Picnic can help organizations tackle the largest problem in cybersecurity—social engineering.

Just a little bit of exposed personal data can go a long way for a hacker

Hackers today use our exposed personal data against us. More than 90% of the time, cyberattacks are specifically crafted from users’ public data. To a hacker and to cyber specialists in general, this exposed, publicly available information is known as OSINT, or Open-Source Intelligence. OSINT can be any publicly available information a hacker can find on a target, such as data from LinkedIn, Instagram, and other social media sites, data brokers, breach repositories, and elsewhere. Hackers use this data to craft and power social engineering attacks. It is the data that tells the attacker who is a vulnerable and valuable target, how best to contact them, how to establish trust, and how ultimately to trick, coerce, or manipulate them. Social engineering attacks fool people into performing a desired action and criminals use social engineering to lure targets into handing over personal information, opening malicious files, or granting access to sensitive data.

In this post, we highlight some of the ways in which bad actors use our information in social engineering campaigns. Understanding the various ways in which even a limited amount of exposed personal information can be weaponized by social engineers can help us not only become more vigilant and cautious but will hopefully also motivate us to take proactive measures to protect ourselves and our companies before attacks happen.

Hackers need—and harvest!—personal information to craft attacks

In order to identify, choose, and plan attacks against potential targets, threat actors must first conduct OSINT reconnaissance. Hackers have a variety of tools that automate this process. They begin by searching for information and selecting a vulnerable target, then use the target’s data to create a compelling story that will trick that person. The social engineer uses one of several means, such as an email, social media, or a phone call, to contact the target and establish trust. If the communication is convincing enough, the victim will be fooled and unwittingly click a malicious link or give the attacker sensitive information that will be used against them or their company.

On account of the essential role that public data plays in social engineering attacks, it behooves us to be aware of, and especially limit, the amount of personal information we share online. The larger our digital footprint is, the larger our attack surface is and the more visible we are to social engineers. The more information attackers have on a target, the easier it is for them to craft convincing, and ultimately successful, social engineering attacks. The less visible we are, the less attractive we are to hackers and the fewer paths to compromise there are to exploit.

While deleting oneself entirely from the internet in the 21st century is not viable, by carefully curating what you share and with whom you share it, you can significantly reduce your visible attack surface and help prevent social engineering attacks.

Even a little bit of exposed information can be dangerous

Hackers don’t need much personal information to wreak havoc on your life. They can do a significant amount of damage with just your cell phone number. Typing your number into a people search site, for instance, can reveal your personal information to an attacker in just a few seconds. This information can then be used for social engineering, identity theft, doxing, or other malicious actions, such as taking over your email and other accounts. 

With only your phone number, a hacker can easily determine your email address. They can then contact your mobile provider and claim to be you, route your number to their phone, log into your email, click ‘forgot password,’ and have the reset link sent to them. Once they have your email account, all of your other accounts are potentially vulnerable. This is one reason to avoid using the same username and password across multiple accounts! 

Once acquired, a hacker could also decide to ‘spoof’ your phone number. This makes your number appear on a caller ID even though it is not you. Using this method, a bad actor can impersonate you to trick one of your friends or colleagues, or call you from a spoofed number, one that you may recognize or trust, in an attempt to socially engineer you or to record your voice for use in another scam.

The fact that a hacker can do so much with just a limited amount of information should make us think twice about what we share publicly, even if it’s only our phone number. To see some of your exposed personal data, get your free report below.

GET YOUR FREE REPORT

See your exposed personal data

Exposed data and credential compromise

Hackers can also do a lot of damage with exposed login credentials. Usernames, email addresses, and corresponding passwords become available on the dark web (and the public web!) once they have been involved in a data breach. You can find out if your personal data has been compromised in a breach by checking haveibeenpwned.com, for example. Whenever this type of information gets exposed, it can leave users vulnerable to credential compromise.

Credential compromise, also known as ‘credential stuffing,’ happens when an attacker obtains a list of breached username and password pairs (“credentials”) from the dark web and then uses automated scripts or ‘bots’ to test them on dozens or even hundreds of website login forms with the goal of gaining access to user accounts. There are massive lists of breached credentials available to hackers on the black market and, since most people reuse passwords across different accounts, it is inevitable that some of these credentials will work on other accounts, either personal or corporate.
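
On the defensive side, a quick way to see whether a given password already circulates in breach data is the Pwned Passwords API, which uses a k-anonymity scheme: only the first five characters of the password’s SHA-1 hash ever leave your machine. A minimal sketch, with an obviously weak example password:

```python
# Minimal sketch: check a password against the Pwned Passwords range API.
# Only the first five characters of the SHA-1 hash are sent (k-anonymity);
# the password itself never leaves the machine.
import hashlib
import requests

def times_pwned(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(times_pwned("password123"))  # a heavily reused password shows a large count
```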

Once hackers have access to a customer account through credential stuffing, they can use the account for various nefarious purposes such as stealing assets, making purchases, or obtaining more personal information that can be sold to other hackers. If the breached credentials belong to an employee, the hacker can use that access to compromise a company’s systems and assets. 

Since credential compromise relies on the reuse of passwords, avoiding the reuse of the same or similar passwords across different accounts is critical. Always use strong passwords that are difficult to guess and change them frequently. Additionally, multi-factor authentication, which requires users to confirm their login with at least two factors, such as something they physically have and something they personally know, is a good defense against credential stuffing, since an attacker’s bots cannot replicate this validation step.

Recent real-world examples reveal the dangers of exposed personal data for companies

Companies should be especially wary of the role exposed personal data of employees plays in cyberattacks. Three recent examples that made headlines highlight how just a limited amount of exposed employee information can be used to craft a successful social engineering campaign and breach organizations. 

Twilio and Cloudflare

In August, hackers targeted two security-sensitive companies, Twilio and Cloudflare, as part of a larger ongoing campaign dubbed “Oktapus” that ultimately compromised more than 130 organizations and netted the attackers nearly 10,000 login credentials. In the case of Twilio, the hackers began by cross-referencing employee public data from Twilio’s LinkedIn roster (the starting point of most attacks) against existing exposed third-party breach data sets (e.g., haveibeenpwned.com) and data broker data (e.g., Whitepages). This gave the attackers a list of employees to target, along with their personal information. The hackers then created a fake domain and login page that looked like Twilio’s (twilio-sso.com or twilio-okta.com). Using the acquired personal data, they then sent text messages to employees, which appeared as official company communications. The link in the SMS message directed the employees to the attackers’ fake landing page that impersonated their company’s sign-in page. When the employees entered their corporate login credentials and two-factor codes on the fake page, they ended up handing them over to the attackers, who then used those valid credentials on the actual Twilio login page to access the systems illegally.

Although Cloudflare was also targeted in this way, they were able to stop the breach through their use of FIDO MFA keys. Even though they were able to keep the attackers from accessing their systems through advanced security practices, Cloudflare’s CEO, senior security engineer, and incident response leader stated that “This was a sophisticated attack targeting employees and systems in such a way that we believe most organizations would be likely to be breached.”

Indeed, the exposed personal data used to power the Oktapus attacks shows how dangerous even a small amount of public data can be in the hands of a social engineer.

Cisco 

In another example from May of this year, the corporate network of multinational technology company Cisco was breached by hackers with links to both the Lapsus$ and Yanluowang ransomware gangs. In this case, the hackers acquired the username or email address of a Cisco employee’s Google account along with the employee’s cell phone number. They targeted the employee’s mobile device with repeated voice phishing attacks with the goal of taking over the Google account. The employee was using a personal Google account that was syncing company login credentials via Google Chrome’s password manager. The account was protected by multi-factor authentication (MFA), however, so the hackers posed as people from the technical support departments of well-known companies and sent the employee a barrage of MFA push requests until the target, out of fatigue, finally agreed to one of them. This gave the attackers access to the Cisco VPN through the user’s account. From there the attackers were able to gain further access, escalate privileges, and drop payloads before being slowed and contained by Cisco. The TTPs (tactics, techniques, and procedures) used in the attack were consistent with pre-ransomware activity.

Uber 

Most recently, the ride-hailing company Uber was breached by a hacker thought to be linked to the Lapsus$ group, who gained initial access by socially engineering an Uber contractor. The attacker had apparently acquired the contractor’s corporate password on the dark web after it had been exposed through malware on the contractor’s personal device. The attacker then repeatedly tried to log in to the contractor’s Uber account, which sent multiple two-factor login approval requests to the contractor’s phone. Finally, the hacker posed as Uber IT and sent a message asking the contractor to approve the sign-in. After the contractor was worn down, the approval was granted, which provided the hacker with the valid credentials needed to gain access to Uber’s VPN. Once inside, the hacker found a network share containing PowerShell scripts, one of which held admin credentials for Thycotic [a privileged access management solution]. With access to this, the hacker was able to reach all other internal systems by using their passwords.

The Uber hack is a prime example of how, with only a limited amount of exposed personal data and some social engineering, a hacker can easily trick, manipulate, or coerce a human and compromise a company’s systems. See our key takeaways and remediation recommendations.

Limiting exposed personal data to prevent attacks

The examples provided here illustrate some of the common ways our personal information can be successfully weaponized by today’s hackers. It is now more urgent than ever for people and companies to know and manage their exposed public information proactively to help prevent attacks. Attackers are opportunists who care about their ROI. By limiting exposed personal data, it becomes more difficult and therefore more expensive for threat actors to succeed in social engineering attacks. Companies that recognize this fact pattern and take action to protect their employees will be more likely to avoid expensive and damaging breaches.

An electric utility company takes cybersecurity beyond the perimeter

The challenge


This client, like most utilities, possesses a strong culture of safety and a similar commitment to security. As a utility, it also operates in one of the 16 sectors designated by the US Cybersecurity and Infrastructure Security Agency (CISA) as part of the United States’ critical infrastructure. This means that the organization faces a specific set of requirements, which include disciplined cybersecurity practices.

Traditional cybersecurity has focused mainly on the internal environment and on data layers within the organization. For that reason, the organization sought a solution that expanded the purview and practice of cybersecurity beyond its walls. Management felt the need to identify and address vulnerabilities in the data “out there,” where more than 90% of cyberattacks now originate.

They also wanted an external perspective to support an outside-in approach to security. They wanted to know how malicious actors could gather information about users to mount an attack on the company. What could those actors find on social media profiles and what messages could they use to launch socially engineered attacks? What could they learn about the organization’s hardware and software and its methods of authentication? What could they learn about its supply chain: What products does it buy? From whom does it buy these products? How does it pay its vendors? What could attackers learn about the leadership team, the Board, employees, investors, and other stakeholders that would make the organization vulnerable to attacks?

Another goal was to broaden the conversation about cybersecurity within the organization. Given the exposures that can be unwittingly created by users with legitimate access to the organization’s systems, leaders had come to see that cybersecurity is everyone’s responsibility. They also wanted to go beyond simply training and coaching people on how to “be careful” when using their laptops and devices; they wanted easy-to-use tools to support users’ efforts to keep systems secure.

Before learning about Picnic, the security team had worked to understand which publicly available data could create vulnerabilities and, to address reputational risk, what people were saying about the company. Yet these efforts were ad hoc, such as monitoring social media feeds, and relied on only a few tools, such as customized scripts and open-source utilities. The team wanted to harness data science to see across the internet and to identify the controls they really needed to have in place.

In sum, the security team realized that their environment lacked a defined perimeter, which meant that firewalls, endpoint protection tools, and role-based access controls could no longer provide the needed level of security.

The solution

Picnic provided both ease of enrollment for employees and tools that enabled employees to easily remove publicly available data on themselves.

Picnic’s capabilities let a user simply agree to have their data deleted from multiple public data sources, with Picnic handling the removal process for both the user and the organization.

The Picnic Command Center enabled analysts from the security operations team to seek out types of data that expose the organization to risk. That, in turn, positioned the team to educate employees about ways in which an attacker could use a particular type of information against them or the company. This created a clear division of responsibility: The organization flagged the risks while the employees controlled the data they deleted or left up.

The organization presented Picnic as a benefit to employees, which it is. Although other identity protection tools are presented that way, they are primarily geared to post-event remediation. In contrast, Picnic enables each employee to identify and deal with their publicly available data in private, so they can lower their individual risk, and by extension risk to the organization. Each employee gets to make changes dictated by their own preferences rather than their employer’s. With information from Picnic, they were able to, for example, adjust the privacy settings on their social media accounts so that only specific family members and friends can view them. Whatever steps they took reduced their exposure to attack—a benefit to them and to the organization.

Clear and consistent communications during rollout clarified both the rationale and use of the tools. Integration with the organization’s existing technology was straightforward, with Picnic tools fitting readily into existing solutions. The client/Picnic team took an agile approach to both the development methodology and operational implementation.

The impact

Picnic has assisted the security staff in identifying vulnerabilities and helped employees monitor and limit their risk exposures. The tools have provided protective controls for employees while minimizing extra steps and added work on their part. They have also helped the security staff more effectively identify where potential threats might originate and the various forms that attacks could take.

Yet the impact of Picnic extends beyond what the platform itself does. It has enabled the security staff to launch a broader and deeper conversation about cybersecurity at the organization. This has created the opportunity to better understand, explain, and contribute to the organization’s culture of security. The security staff does not usually use the term “culture of security” with employees but the leadership team discusses it and works to create that culture. Picnic has accelerated that effort.

Picnic has also reduced burdens on the security team. It has helped to establish that everybody needs to maintain high awareness of how their social media settings or internet presence create risks. By their nature, the tools dramatically increase employee engagement in cybersecurity in ways that training sessions or video tutorials cannot.

The Picnic toolset has delivered capabilities that allow security staff to see risks outside of their corporate walls and to mitigate them. The security team can now not only alert users to the risks they face but has also initiated new controls, such as multi-factor authentication on accounts and systems that could be of use to an attacker. They have added new controls over remote access and other attack vectors where an attacker could access personal information from a data log or a compromised website. The organization is also using password reset tools that make users’ lives easier while increasing their efficiency and effectiveness.

While no single solution can eliminate every data security issue, Picnic has broadened the organization’s view of its threat landscape and positioned it to better address risks. It has also reduced its attack surface, broadened the conversation about cyber risk and security, and delivered increased security to employees and the organization. This has occurred in the context of Picnic’s sound and sustainable methodology, process, and program for identifying and addressing social engineering threats.

1 https://www.cisa.gov/critical-infrastructure-sectors

REDTEAM RAW, EPISODE #4: Dhruv Bisani on his journey to becoming the Head of Red Teaming at a UK Cyber Security Consultancy

In the fourth episode of RedTeam Raw, Picnic’s Director of Global Intelligence, Manit Sahib, sits down with Dhruv Bisani, the Head of Red Teaming at a leading UK Consultancy, Eurofins Cyber Security (AKA Commissum).

Dhruv Bisani talks through a day in the life of a Covert Ethical Hacker (Red Teamer), maintaining good Operational Security (OPSEC) to fly under the radar and go undetected, some Red Team war stories, breaking into a Zero Trust environment, and Phishing and leveraging Social Engineering. We also run through tips for those looking to get into Cyber Security, the difference between Red Teaming and Penetration Testing (commonly confused), and the evolution and growth of Purple Teaming and Threat Intelligence.

We explore Dhruv Bisani’s journey as an international employee (and its challenges and misconceptions), gaining a visa and sponsorship to work in the UK for the Big Four consultancy PwC, the value of CREST certifications (CCT APP, CCT INF, CCSAS, CCSAM), and becoming the Red Team Lead for Eurofins Cyber Security. We also explore challenges in the work environment and how to deal with them.

Like and subscribe for future episodes of RedTeam Raw here.

REDTEAM RAW, EPISODE #3: Dimitris Pallis on how he became an experienced penetration tester, ethical hacker, and current Security Consultant at Claranet

In the third episode of RedTeam Raw, Picnic’s Director of Global Intelligence, Manit Sahib, sits down with experienced penetration tester, ethical hacker, and current Security Consultant at Claranet (previously Sec-1), Dimitris Pallis!

We discuss Ukrainian IT army cyberwarfare, Dimitris’ journey to becoming an ethical hacker, how to keep your OpSec when sending out your personal info in your CV, Dimitris’ tips for people wanting to level up in the industry and best resources for preparing to get a job, how to manage time when getting your certifications, red team stories with an important lesson from Dimitris, skills needed to be a good ethical hacker, the problem of social engineering, where things are going in the industry, the need for companies to reduce their attack surface/presence online, tools for OSINT reconnaissance, the need for basic awareness about giving out personal info with two recent dangerous examples from LinkedIn, the lifecycle of a ransomware incident, and final tips from Dimitris.

Like and subscribe for future episodes of RedTeam Raw here: https://www.youtube.com/channel/UCVn3…

REDTEAM RAW, EPISODE #2: Jean-Francois Maes on how he became a SANS Instructor and Offensive Cyber Security Expert (RedTeamer)

In the second episode of RedTeam Raw, Picnic’s Director of Global Intelligence, Manit Sahib, sits down with certified SANS instructor, author, researcher, consultant, and rock star RedTeamer Jean-François Maes, known on Twitter as @Jean_Maes_1994. Based in Belgium, Jean-François is the founder of redteamer.tips and is an avid contributor to the offensive security community. He is currently a security researcher at HelpSystems, where he helps the Cobalt Strike team develop new features.

We discuss how he got into InfoSec and became a SANS instructor; the difference between pentesting, Red Teaming, and Purple Teaming; the most common ways of gaining a foothold as a RedTeamer; a RedTeam story with flowers from Jean-François; his tool Clippi-B; how he manages his time; motivations and resources for becoming a hacker; advice for getting into the industry and being able to stand out; Jean’s biggest challenge at the moment, and where he sees the industry going.

Like and subscribe for future episodes of RedTeam Raw here: https://www.youtube.com/channel/UCVn3…

RedTeam Raw, Episode #1: Marcello Salvati on how he became a leading Red Teamer (and Cyber Security Expert)

In the very first episode, Picnic’s own Director of Global Intelligence, Manit Sahib, talks with InfoSec legend Marcello Salvati, most famously known as the creator of CrackMapExec and SilentTrinity. He is the founder and CEO of Porchetta Industries, Security Engineer at SpaceX, and is known on Twitter as @byt3bl33d3r. We discuss his perspectives on InfoSec, advice for those getting started in this space, how he got to where he is now, overcoming burnout and managing time, red team stories, and where he thinks InfoSec is heading over the next 10 years.

Like and subscribe for future episodes of RedTeam Raw here: https://www.youtube.com/channel/UCVn3…

FOR LAPSUS$ SOCIAL ENGINEERS, THE ATTACK VECTOR IS DEALER’S CHOICE

By Matt Polak, CEO of Picnic

Two weeks ago, at a closed meeting of cyber leaders focused on emerging threats, the group agreed that somewhere between “most” and “100%” of cyber incidents plaguing their organizations pivoted on social engineering. That’s no secret, of course, as social engineering is widely reported as the critical vector in more than 90% of attacks.

LAPSUS$, a hacking group with a reputation for bribery and extortion fueled by a kaleidoscope of social engineering techniques, typifies the actors in this emerging threat landscape. In the past four months, they’ve reportedly breached Microsoft, NVIDIA, Samsung, Vodafone and Ubisoft. Last week, they added Okta to the trophy case.

For the recent Okta breach, theories abound about how the specific attack chain played out, but it will be some time before those investigations yield public, validated specifics. 

As experts in social engineering, we decided to answer the question ourselves—with so many ways to attack, how would we have done it? Our thoughts and findings are shared below, with some elements redacted to prevent malicious use.

How Targeted was this Social Engineering Attack?

To start, we know that Okta’s public disclosure indicates the attacker targeted a support engineer’s computer, gained access, installed software supporting remote desktop protocol (RDP) and then used that software to continue their infiltration:

“Our investigation determined that the screenshots…were taken from a Sitel support engineer’s computer upon which an attacker had obtained remote access using RDP…So while the attacker never gained access to the Okta service via account takeover, a machine that was logged into Okta was compromised and they were able to obtain screenshots and control the machine through the RDP session.”

For attackers to successfully leverage RDP, they must:

  1. Be able to identify the location of the target device—the IP address.
  2. Know that the device can support RDP—Windows devices only.
  3. Have knowledge that RDP is exposed—an open RDP port is not a default setting.

Let’s take a look at each of these in more detail: 

How Can an Attacker Identify Target Devices to Exploit RDP? 

Sophisticated attackers don’t “boil the ocean” in the hope of identifying an open port into a whale like Okta—there are 22 billion connected devices on the internet. In fact, LAPSUS$ is a group with a history of leveraging RDP in their attacks, to the point that they are openly offering cash for credentials to the employees of target organizations if RDP can be installed—quite a shortcut. 

Putting aside the cultivation of an insider threat, attackers would rightly assume a company like Okta is a hard target, and that accessing it via connected third parties would be an easier path to success.

Our team regularly emulates sophisticated threat actor behaviors, so we started by mapping the relationships between Okta and different organizations, including contractors and key customers. Cyber hygiene problems are often far worse for large organizations than individuals, and our methods quickly uncovered data that would be valuable to threat actors. For example, Okta’s relationships with some suppliers are detailed here, which led us to information on Sitel / Sykes in this document. Both are examples of information that can be directly weaponized by motivated attackers.

Two killer insights from these documents:

  1. Sykes, a subsidiary of Sitel, provides external technical support to Okta. 
  2. Sykes uses remote desktop protocol as a matter of policy.

This information makes an attacker’s job easier, and would be particularly interesting to a group like LAPSUS$—an RDP-reliant contractor with direct access to Okta’s systems is a perfect target.

Recon 101: Exploit Weak Operational Security Practices

With a target company identified, we ran a quick search of LinkedIn to reveal thousands of Sitel employees discussing different levels of privileged access to their customer environments. These technical support contractors are the most likely targets of attacks like the ones catching headlines today. Despite the investigation and negative publicity associated with this attack, more than a dozen Sitel employees are still discussing privileged access in the context of their work with Okta (never mind the dozens of other companies).

Now that we have defined this group, our focus narrows to deep OSINT collection on these individuals—an area where Picnic has substantial expertise. OSINT stands for open-source intelligence, and it is the process by which disparate pieces of public information are assembled to create a clear picture of a person’s life, a company, a situation, or an organization. Suffice to say that our standard, automated reconnaissance was sufficient to craft compelling pretext-driven attacks for most of our target group. 

To cast this virtual process in a slightly different light, imagine a thief casing your neighborhood. Good thieves spend weeks conducting reconnaissance to identify their targets. They walk the streets and take careful notes about houses with obscured entryways, unkempt hedges, security lights and cameras, or valuables in plain sight. 

Social engineers are no different: they are essentially walking around the virtual world looking for indicators of opportunity and easy marks.  

Before we explore how to go from reconnaissance to the hardware exploit, let’s recap:

  1. We are emulating threat actor behaviors before Okta’s breach.
  2. We conducted organizational reconnaissance on our target: Okta.
  3. We identified a contractor likely to have privileged access to the target: Sitel.
  4. We narrowed the scope to identify people within Sitel who could be good targets.
  5. We further narrowed our focus to a select group of people that appear to be easy targets based on their personal digital footprints.

All of this has been done using OSINT. The next steps in the process are provided as hypothetical examples only. Picnic did not actively engage any of the identified Sitel targets via the techniques below—that would be inappropriate and unethical without permission. 

Identifying the Location of the Device for RDP Exploit

There are three ways that attackers can identify the location of a device online: 

  1. Pixel tracking
  2. Phishing
  3. OSINT reconnaissance

Just as we conducted OSINT reconnaissance on people and companies, the same process can be used to identify the location of the target device. By cross-referencing multiple sources of information, such as data breaches and data brokers, an attacker can identify IP addresses and physical addresses and use them to zero in on device locations. This is always the preferred approach because it carries no risk of exposing the attacker’s actions.
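
As a purely hypothetical sketch of that cross-referencing step, the snippet below joins two placeholder datasets (a breach dump and a data-broker export) on email address; any identifier shared between sources works the same way. File names, field names, and data are invented for illustration only.

```python
# Hypothetical sketch: correlating two datasets by email address to associate a
# person with an IP and a physical address. File and field names are placeholders.
import csv

def load_by_email(path, email_field="email"):
    """Index a CSV export by (lower-cased) email address."""
    index = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            index[row[email_field].strip().lower()] = row
    return index

breach_dump = load_by_email("breach_dump.csv")      # e.g. email, password_hash, last_ip
broker_export = load_by_email("broker_export.csv")  # e.g. email, name, home_address

# Emails present in both sources yield a combined profile: identity plus location clues.
for email in breach_dump.keys() & broker_export.keys():
    profile = {**broker_export[email], **breach_dump[email]}
    print(email, "->", profile.get("last_ip"), profile.get("home_address"))
```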

Pixel tracking is a common attacker (and marketer!) technique to know when, and importantly where, an email has been opened. For the attacker, this is an easy way to identify a device location. Phishing is similar to pixel tracking: a clicked link can provide an attacker with valuable device and location intelligence, but pixel tracking only requires that an image be viewed in an email client. No clicks necessary. 
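
For illustration, here is a minimal, hypothetical sketch of the mechanics: the email body embeds a tiny remote image with a unique URL, and the server hosting it logs every request. The port and id scheme below are placeholders, not a real tracker.

```python
# Minimal sketch of how a tracking pixel works, for illustration only.
# An email embeds something like <img src="http://tracker.example/pixel.gif?id=recipient-123">;
# when the mail client loads the image, this server logs who opened it and from which IP.
from http.server import BaseHTTPRequestHandler, HTTPServer

# A transparent 1x1 GIF, the classic "tracking pixel" payload.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff!"
         b"\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
         b"\x00\x02\x02D\x01\x00;")

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The requesting IP and the unique id in the URL reveal who opened the
        # email, when, and roughly where (via IP geolocation).
        print(f"opened by {self.client_address[0]} path={self.path}")
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PixelHandler).serve_forever()
```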

Pixel tracking and phishing are examples of technical reconnaissance that were more easily thwarted pre-COVID, when employees were cocooned in corporate security layers. With a significant portion of knowledge workers still working from home, security teams must contend with variable and amorphous attack surfaces.

For social engineers, this distribution of knowledge workers is an asymmetric advantage. Without a boundary between work life and home life, the available surface area on which to conduct reconnaissance and ply attacks essentially doubles.

Social engineering’s role in the RDP exploit

According to Okta’s press release, an attacker “obtained remote access using RDP” to a computer owned by Sitel. Based on our team’s threat actor emulation and the typical LAPSUS$ approach, it is clear that social engineering played a key role in this attack, most likely via a targeted spear phishing campaign, outright bribery, or a similar delivery mechanism. That step would have provided attackers not only with the device location information needed for the RDP exploit, but also with important information about the device and other security controls.

Remember that social engineers are hackers that focus on tricking people so they can defeat technical controls. Tricking people is easy when you know something personal about them—in fact, our research indicates attackers are at least 200x more likely to trick their targets when the attack leverages personal information. 

The amount of time, energy, and resources required to complete this reconnaissance was significant, but it was made easier by the two key documents found during our initial recon on the target. While there are other breadcrumbs that could have led us down the same path, many of those paths offered less clear value, while these two documents essentially pointed to “easy access this way.” Finding these documents quickly and easily means that hackers are likely to prioritize this attack path over others—the easier it is, the less time and resources it consumes, and the greater the return on effort. 

Key learnings for cyber defenders

Recognize you are at war. Make no mistake about it, we are in a war that is being fought in cyberspace, and unfortunately companies like Okta and Sitel are collateral damage. Just as in a hot war, one of the most successful methods for countering insurgent attacks is to “turn the map around” to see your defenses from the perspective of the enemy. This outside-in way of thinking offers critical differentiation in the security-strategy development process, where we desperately need to change the paradigm and take proactive measures to stop attacks before they happen. I wrote another short article about how to think like an attacker that might be helpful if you are new to this approach.

Be proactive and use MITRE: all of it. The prevailing method cyber defenders use to map attacker techniques and reduce risk is the MITRE ATT&CK framework. The framework maps fourteen stages of an attack, from the start (aptly called Reconnaissance) through its end (called Impact); in this example, our team emulated attacker behaviors during the Reconnaissance stage. Cyber defenders are skilled at reacting to incidents, largely because legacy technologies are reactive by nature. MITRE, however, recommends a proactive approach to the Reconnaissance stage: work to “limit or remove information” that hackers can harvest. Defenders now have an opportunity to adopt technologies that extend visibility and remediation beyond the corporate firewall into this first stage of an attack. Curtailing hacker reconnaissance by removing the data attackers need to plan and launch an attack is best practice according to MITRE.
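
For reference, here are the fourteen ATT&CK Enterprise tactics as published by MITRE, expressed as a small Python list; the toy coverage check underneath is purely illustrative and the example controls are hypothetical.

```python
# The fourteen MITRE ATT&CK Enterprise tactics, in order.
ATTACK_TACTICS = [
    "Reconnaissance",        # the stage emulated in this article
    "Resource Development",
    "Initial Access",
    "Execution",
    "Persistence",
    "Privilege Escalation",
    "Defense Evasion",
    "Credential Access",
    "Discovery",
    "Lateral Movement",
    "Collection",
    "Command and Control",
    "Exfiltration",
    "Impact",
]

# Illustrative-only coverage check: which tactics have no mapped control?
controls = {"Initial Access": ["email filtering"], "Execution": ["EDR"]}
uncovered = [t for t in ATTACK_TACTICS if t not in controls]
print("Tactics with no mapped control:", uncovered)
```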

Get ahead of regulations. Federal regulators are also coming upstream of the attack and have signaled a shift with new SEC disclosure guidance, which requires companies to disclose cybersecurity incidents sooner. Specifically, one key aspect of the new rule touches on “…whether the [company] has remediated or is currently remediating the incident.” New technologies that emulate threat actor reconnaissance can make cyber defenders proactive protectors of an organization’s employees, contractors, and customers long before problems escalate to front page news. These new technologies allow companies to remediate risk at the reconnaissance stage of the attack—an entirely new technology advantage for cyber defenders. 

Every single attack begins with research. Removing the data that hackers need to connect their attack plans to their human targets is the first and best step for companies who want to avoid costly breaches, damaging headlines, and stock price shocks.

How to sharpen your corporate social media policy for today’s threats

Using social media is, without a doubt, one of the most popular online activities. Businesses have also discovered how to leverage social media to create opportunities for their brands. However, the use of these platforms has also created many risks. Not only can a bad social media post spiral into a full-blown PR crisis, but social media has become a data channel that cybercriminals regularly exploit to steal sensitive corporate information or cause serious reputational damage. Many businesses create a social media policy for their organization but often don’t understand how to fully protect themselves.

The Social Media Policy

An estimated 3.96 billion people worldwide use social media, as do 88% (and rising) of companies. Despite this high usage, social media culture is still relatively new territory for both employers and employees. Businesses have recognized that unwise social media use can create detrimental outcomes, but the social media policies these companies develop show a level of naivete when it comes to understanding risk.

The corporate social media policy is often a document that resides on a company’s intranet, rarely changed from the date of its inception. It is standard practice to present the social media policy at the point of employee onboarding as part of the contractual process between employer and employee. Typically, the contents of the policy center on the do’s and don’ts of employee usage and on regulatory or compliance obligations, and they explain expectations for employee conduct online. For example, Dell Global’s Social Media Policy is reported to be as follows:

  1. Protect Information
  2. Be Transparent and Disclose
  3. Follow the Law, Follow the Code of Conduct
  4. Be Responsible
  5. Be Nice, Have Fun and Connect
  6. Social Media Account Ownership

The overall goal is to set expectations for appropriate behavior and ensure that an employee’s usage will not expose the company to legal problems or public embarrassment.

The example policy is also remarkably vague, probably for a couple of reasons. Today’s HR departments are very sensitive to employee privacy concerns, and there may be a reluctance to lay down specific rules for behavior that could seem subjective and intrusive.

However, there is a difference between something that is embarrassing and something that is dangerous. Policies like this give little consideration to network security and to how employee actions online may compromise both personal and corporate security. The reality is that there is a real need for specific rules (or at least “tips”) about how employees present personal data about themselves on social media.

Social media content is highly exploitable by cybercriminals

Social media usage exposes company networks to hacks, viruses, and privacy breaches. How? Social media encourages people to share personal information, or Personally Identifiable Information (PII). Even the most cautious and well-meaning employee can give away information they should not, or accidentally disclose sensitive company information. With this data, cybercriminals who use social engineering techniques can more effectively exploit the gullibility and misplaced trust of many social media users, with serious consequences for those users and their employers’ networks.

All it takes is one mistake. According to the latest EY Global Information Security Survey, 59% of organizations had a “material or significant incident” in the past 12 months. Research has also found that 21% of organizations have been infiltrated by malware via Facebook and 13% via YouTube. So, what can be done to reduce the risk and ensure your employees and your brand are protected?

The Social Media Policy: What you can do to safeguard against potential attacks

The first step should be to implement a detailed and effective social media policy. While 80% of businesses report having a social media policy in place, the majority of those policies (58%) could be described as general in nature; only 28% have a detailed and thorough policy. So, what additional guidance should your social media policy include? Focus on data exposure as much as on reputation. Here are a few example rules to get started (a minimal screening sketch follows the list):

  1. Don’t accidentally describe your tech stack: If you are a technical person, like an engineer, you may want to post your technical proficiencies online. Combined with your job title, however, those proficiencies can end up describing your company’s technical infrastructure, which may give a hacker or social engineer exactly the information they need to attack the company. What might seem like a straightforward description of your current role and career path can reveal information that helps an attacker far more than it helps you.
  2. Don’t post your resume online: Yes, your LinkedIn page is a resume…but it isn’t. LinkedIn’s interface can shield personal contact details, while an uploaded resume exposes them to anyone. Remember that resumes are artifacts from old one-to-one exchanges between job seeker and employer. Posted publicly, they reveal information that won’t necessarily help you but might actually harm you if it falls into the wrong hands.
  3. Pay attention when providing personal information online: In general, we should all be wary of giving out information that helps make us personally identifiable, such as a middle name, birthplace, marital status, check-ins, or current location. Each of these bits of information is innocent in itself, but used in combination with other information, it equips social engineers with more tools to attack you or to leverage your personal data to gain access to sensitive parts of your company.
  4. Help employees spot suspicious activity: While employees can be your weakest link when it comes to cybersecurity risk, they can also be your greatest asset in protecting your company. Teaching employees how to spot and report suspicious activity, such as dubious links or downloads, will go a long way toward reducing attacks and malware intrusions in your computer systems.
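
As referenced above, here is a minimal, hypothetical sketch of how a security team might screen a draft post for the kinds of disclosures these rules describe. The keyword list and patterns are illustrative only and far from exhaustive.

```python
# Hypothetical screening sketch: flag internal technology mentions and common
# PII patterns in a draft social media post. Keywords and patterns are examples.
import re

TECH_KEYWORDS = {"okta", "active directory", "vpn", "rdp", "aws", "jenkins"}
PII_PATTERNS = {
    "phone number": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "date of birth": re.compile(r"\b(born|dob)\b", re.IGNORECASE),
}

def screen_post(text: str) -> list[str]:
    """Return a list of human-readable warnings for a draft post."""
    warnings = []
    lowered = text.lower()
    for kw in sorted(TECH_KEYWORDS):
        if kw in lowered:
            warnings.append(f"mentions internal technology: '{kw}'")
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            warnings.append(f"appears to contain a {label}")
    return warnings

print(screen_post("Happy to share I now administer our Okta and VPN stack! DOB 4/2, call 555-123-4567"))
```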

For many businesses, social media platforms can be a gateway to reaching larger audiences. However, they have also gained the attention of cybercriminals who are more than willing to use them against you. Considering that the average data breach costs U.S. companies $7.91 million, the importance of protecting company, customer, partner, and employee data cannot be overstated. Businesses with a holistic social media policy in place will be in a better position to protect both their employees and their organization against potential attacks.

An ocean of data…and of ears

How much data is produced every day? A quick Google search will tell you the current estimate stands at 2.5 quintillion bytes. For those of us that don’t know the difference between our zettabytes and yottabytes, that’s 2.5 followed by a staggering 18 zeros! Basically, the simple answer is a lot. A lot of data is produced and collected every day – and it is growing exponentially.

It might be hard to believe but the vast majority of the world’s data has been created in the last few years. Fueled by the internet of things and the perpetual growth of connected devices and sensors, data continues to grow at an ever-increasing rate as more of our world becomes digitized and ‘datafied’. In fact, IDC predicts the world’s data will grow to 175 zettabytes by 2025. It’s mind-boggling to think that humans are generating this, particularly when looked at in the context of one day. Or is it?

Data captured and stored daily includes anything and everything: photos uploaded to social media from your latest vacation, every time you shout at your Google Home or Amazon Echo to turn on the radio or add to the shopping list, even information gathered by the Curiosity rover currently exploring Mars. Every digital interaction you have is captured. Every time you buy something with your contactless debit card? Every time you stream a song, movie, or podcast? It’s all data. When you walk down the street or go for a drive with a digital device, whether it’s your smartphone, smartwatch, or both – more data.

The majority of us are aware, if possibly apathetic, that this data is collected by companies – but what might be more pernicious is the number of listeners out there and the level of granular engagement that is tracked. From device usage to Facebook likes, tweets, online comments, even viewing-but-skipping-over a photograph in your feed, whether you swipe left or right on Tinder, the filters you apply to selfies – all of it is captured and stored. If you have a Kindle, Amazon knows not only how often you turn a page but also whether you tap or swipe the screen to do so. When it comes to Netflix, yes, they know what you have watched, but they also capture what you search for, how far you’ve gotten through a movie, and more. In other words, big data captures the most mundane and intimate moments of people’s lives.

It’s not overly surprising that companies want to harvest as much about us as possible because – well, why wouldn’t they? The personal information users give away for free is transformed into a precious commodity. The more data produced, the more information they have to monetize, whether it’s to help them target advertisements at us, track high-traffic areas in stores, show us more dog videos to keep us on their site longer, or even sell to third parties. For the companies, there’s no downside to limitless data collection.

Data management: Data protection is weak

The nature of technology evolution is that we have moved from ephemeral management of data to permanent management of data, and the driver of that shift is functionality. The economics also make it easy: there is very little cost to storing massive amounts of data. But what of the security of that data – the personal, the mundane, the intimate day-to-day details of our lives that we in some cases unwittingly impart?

Many express concerns about Google, Facebook, and Amazon having too much influence. Others believe what matters is not the information collected but the inferences and predictions made from it: how companies can use it to exert influence over whether someone keeps their health care benefits or is released on bail, or even whether governments could sway an electorate – Cambridge Analytica, I hear you shout. These are valid concerns, but what should be more troubling is the prospect of said personal data falling into the wrong hands.

Security breaches have become all too common. In 2019, cyber-attacks were considered among the top five risks to global stability. Yahoo holds the record for the largest data breach of all time, with 3 billion compromised accounts. Other notable breaches include First American Financial Corp., which had 885 million records exposed online, including bank transactions and social security numbers, and Facebook, which saw 540 million user records exposed on an Amazon cloud server. They are certainly not alone; the list of breaches is long. Moreover, while it is easier to point the finger at hackers, well-known brands including Microsoft, Estee Lauder, and MGM Resorts have accidentally exposed data online – visible and unprotected for any and all to claim.

COVID-19 has only compounded the issue, providing perfect conditions for cyberattacks and data breaches. By the end of Q2 2020, it was already being called the “worst year on record” in terms of total records exposed. By October, the number of records breached had grown to a mind-boggling 36 billion.

Brands and companies – mostly – do not have bad intentions. They are perhaps guilty of greed, but these breach examples highlight how ill-prepared the industry is to protect harvested data. The volume collected, along with often lackluster security, provides easy pickings for exploitation. In the wrong hands, our seemingly mundane data can be combined with other data streams to provide the ammunition for an effective social engineering campaign. For example, a lot of information can be “triangulated” about you that is not represented by any explicit data point. Just by watching when and how you behave on the web, social engineers can determine who your friends and associates are. Think that doesn’t mean much? That information is a key ingredient in many kinds of fraud and impersonation.

One could postulate that the progress of social engineers should not be thought of merely as an impressive technological advancement in cybercrime. Rather, these criminals have peripherally benefited from every other industry’s investment in data harvesting.

Data management: Rethinking data exposure

We give up more data than we’ll ever know. While it would be nearly impossible, and probably unrealistic, to shut down this type of collection completely, we need to rethink how much we unwittingly disclose in order to reduce the risk of falling victim to cybercrime.

Are we thinking about Surveillance Capitalism the right way?

I recently purchased a greenhouse from a well-known catalogue retailer, and now I’m swamped with Google and Facebook ads for greenhouse accessories and all manner of gardening paraphernalia. Ever wonder why this happens? The answer is that our data is systematically captured and then used to market to us, in a broad set of processes known collectively as surveillance capitalism – processes that are both pervasive and here to stay. While many of us dismiss the bombardment of ads as trivial, there are those who would argue that we need to be more au fait with this use of our data. And while many people debate the intentions of those who conduct and profit from surveillance capitalism, the real concern may not simply be the amassing of an incredible volume of personal data and its unprecedented synthesis, but the normalization of surveillance techniques that can fall into anyone’s hands.

What is surveillance capitalism?

The term surveillance capitalism was coined in 2014 by Shoshana Zuboff, a Harvard Business School professor. In her book of the same name, Zuboff describes surveillance capitalism as an economic system centered on the commodification of personal data with the core purpose of profit-making. She states that surveillance capitalism claims our private digital experience as its source of free raw material and translates that raw material into behavioral data. In layman’s terms, surveillance capitalism describes how commercial corporations – such as Google and Facebook – use data harvested from us to sell advertising, goods, and services. If anything, surveillance capitalism could be described as the business model of the internet.

Big Brother is watching, and we appear to be okay with it

Google pioneered surveillance capitalism; it was the first company to tap into this new form of profit-making and now dominates the market. Tech companies, data brokers, and other players continuously capture as much user data as possible, not only to predict our behavior but also to influence and modify it so that it can be further used for commercial purposes. With so much to gain from digital data, surveillance capitalism has spread far beyond big tech. Every bank, insurance company, supermarket, and mobile phone operator now has its own surveillance capitalism strategy in place. Zuboff believes that this surveillance by private firms is a crisis as serious as climate change. She argues that it is a visible power grab that wields enormous economic and political influence. Should we really be more concerned about this state of affairs?

Many of us are aware that our data is being taken, often without our explicit consent. We know big companies use data to manipulate us into becoming more predictable and more reliable consumers. As consumers, we recognize that privacy concerns must be balanced against other societal goods. Some might say that what these companies are doing is really just marketing, adapted and updated for the digital era.

Many digital companies have been upfront about the trade-offs involved in using their products. Even Zuboff herself notes, “Privacy, they said, was the price one must pay for the abundant rewards of information, connection, and other digital goods when, where, and how you want them.”

It is interesting how quickly the public view of privacy can change based on our perception of who is collecting information and why. Generally, when it appears that we are getting back some perceived economic value, we have a mixed response to surveillance. But when it comes to government surveillance, the public broadly disapproves of invasions of privacy, even though the government uses the same core technology and collects the same sorts of data as the private sector. Events like the Cambridge Analytica scandal and Edward Snowden’s historic leak of US surveillance efforts highlighted the risk of political manipulation through data exploitation and reinforced public concerns around government surveillance and interference, weakening public trust.

The real security risk of surveillance capitalism

The morality and legality of commercial and governmental surveillance is often in the news. Less discussed, however, are the increased security risks the surveillance capitalism model creates for companies, governments, and individuals. Commercial and government data troves are, simply put, targets for social engineers. And the wealth of data underpinning surveillance capitalism is not just itself susceptible to attacks: it enables more effective social engineering crimes when accessed, in large part by adopting the same targeting techniques used by cutting-edge marketeers.

Data captured via surveillance capitalism can include details pertaining to finances, personal interests, consumption patterns, medical history, career path – in short, the raw material needed to carry out crimes like identity theft, business email compromise and even extortion and blackmail. It helps threat actors reach users across the web with ease and little oversight, since so much of the synthesis is automated. The bottom line is surveillance capitalism makes it relatively easy for bad guys to get their hands on rich data sets of highly personal information. It provides them with a substantial search facility to find and profile their next target and victim.

Data is not necessarily dangerous by itself. We all leave data trails as we live our digital life. Unconnected bits of data in an ocean of similar data don’t provide much of a foothold to cyber criminals. But surveillance capitalism has created an incentive to be much smarter about the synthesis of data. Now companies (and governments) are pulling all those data trails together to create a fuller picture of ‘you.’ Suddenly, everything is in one place. It is the concentration and rationalization of the data that now provides bad actors an easy way to steal identities and worse.

And the risk doesn’t end there. The science and techniques of surveillance, tracking, and synthesis are constantly being improved, and those same techniques can easily be weaponized if they fall into the wrong hands. So whether or not a commercial enterprise intends to do harm or manipulate you may miss the larger point. Social engineers are drawn to the data and methods of surveillance capitalism like bees to honey. The real concern is whether the many “well-intentioned” companies now storing gobs of sensitive information can keep your personal data secure.

Surveillance capitalism: The bigger picture

There is no denying that we are fundamentally willing to exchange some measure of privacy for convenience. We also know that steps, albeit baby ones, have been and continue to be taken around privacy and the right to be forgotten. But we also need to acknowledge the bigger issue: the machinery of surveillance capitalism is itself a target for surveillance and attack, and the personal data it reaps may put us all in danger.

Social engineering: Opportunity makes the thief

It is understandable that, when cybercrime happens to you, you can feel like you were targeted. And you certainly might be correct. However, more often than not, you weren’t originally the target at all. You just provided the best opportunity to the criminal. In most cases, social engineering involves an opportunistic attack that doesn’t – initially – target anyone in particular. Instead, attackers search broadly for weaknesses or vulnerabilities that they can use to mount a more in-depth attack. If they snare a victim in their net, they can then go to work.

It’s nothing personal

Unwanted messages and calls bombard nearly all of us on a regular basis. For most of us, these solicitations via junk mail, spam email, and robocalls are just incredibly annoying, even a bit eyeroll-inducing. Most of the time, we simply hit ignore, mark as spam, delete, or toss the junk mail in the rubbish, knowing that these messages are most likely so-called mass-market scams. Many people are surprised by the amount of junk or spam they receive, especially because so many of the scams are so obviously illegitimate. But the reason you still get emails from a Nigerian prince offering cash out of the blue is that people continue to fall for such stories. Not huge numbers, but a few. And that’s all it takes to make a profit.

Opportunist attacks are not personalized to their victims and are usually sent to masses of people at the same time. The attackers are akin to drift netters, casting their nets “out there” – whether the payload is ransomware, spyware, or spam – and seeing what comes back. The aim is to lure and trick an unsuspecting victim and elicit as much information as possible using SMS, email, WhatsApp and other messaging services, or phone calls. The motive is primarily financial gain. They just want money. They don’t have a vendetta against a particular person or company. It’s a virtually anonymous process.

Phishing scams: Opportunity makes the thief

The Nigerian prince story is at the lower end of the scale in terms of a convincing narrative. However, the grammatical errors and simplicity of these attacks are actually intentional: they serve as a filter, weeding out the “smart” responders and refining the list so the attackers can target their remaining victims more strategically. But have you ever stopped to ask yourself why you got the email in the first place? Spam may be a fact of life, but you are probably getting unwanted attention because you have a wide personal “attack surface.”

Our digital footprint is more public than we would ever imagine. Every time we perform an online action, there is a chance we are contributing to the expansion of our digital footprint. So, while you and I might be aware that the Nigerian princes of the world are not genuine – more sophisticated and successful attacks are also in circulation. If you have a large and messy digital footprint, you are putting yourself on the opportunist radar and are in line to receive more refined and authentic looking queries.

Cybercriminals continuously devise clever ways to dupe us in our personal lives, and it is just as easy to hoodwink employees into handing over valuable company data. In fact, according to Verizon’s Data Breach Digest, 74% of organizations in the United States have been the victim of a successful phishing attack. Fraudsters know that the way to make a quick buck isn’t to spend months attempting to breach an organization’s security; it’s simply to ask nicely for the information they want so they can walk right through the front door.

Opportunity amid a pandemic

Because social engineering opportunists capitalize on exposed vulnerabilities, the pandemic created ideal conditions for exploiting businesses and corporations. Less than a month into the pandemic, phishing emails spiked by over 600% as attackers looked to capitalize on the stress and uncertainty generated by Covid-19. Businesses that were forced to work remotely became more susceptible to opportunists. The pandemic changed the attack surface. As researchers put it, “… security protocols have completely changed – firewalls, DLP, and network monitoring are no longer valid. Attackers now have far more access points to probe or exploit, with little-to-no security oversight.”

To mitigate risk, focus on both threat and vulnerability

The standard corporate security structure is optimized to handle specific, targeted attacks on corporate assets. Unfortunately, social engineering is often overlooked because of its very non-specific nature. An attack of opportunity only requires unwitting cooperation from an employee who was not specifically targeted but self-selected simply by clicking on a link.

Social engineering may be even more dangerous in our pandemic-driven distributed work environments. Corporate and personal spheres overlap more than ever, giving social engineering opportunists more footholds into our confidential lives, both private and corporate. Individuals and corporate security leaders alike would do well to shift greater focus toward vulnerability reduction and so give social engineers less opportunity.

Psychology is the social engineer’s best friend

Social engineering cyber-attacks have rocketed to the forefront of cyber-security risk and have wreaked havoc on large and small companies alike. Just like a Renaissance actor drawn to Shakespeare’s genius, the modern social engineer is attracted to the ever-growing pool of information fueled by data brokers. These criminals ply their trade by exploiting the vulnerabilities of individuals, using tactics known as phishing, baiting, scareware, and tailgating, to name a few. What is unique about the social engineer is that their methods are designed to take advantage of common traits of human psychology.

Social engineers may simply send phishing emails to the target of their choice, or they may work to build a relationship with the target in person, through conversation, or even through spying. Most victims are guilty only of trust. Take, for example, the case of Barbara Corcoran, the famous Shark Tank judge, who fell victim to a phishing scam in 2020 that resulted in a loss of roughly 400,000 USD. The social engineer simply posed as her assistant and sent emails to her bookkeeper requesting a renovation payment on a real estate investment.

In order to combat social engineering, we must first understand the nuances of the interaction between social engineer and target. First and foremost, we must recognize that social engineering attacks are a kind of psychological scheme to exploit an individual through manipulation and persuasion. While many firms have tried to create technical barriers to social engineering attacks, they have not had much success. Why? Social engineering is more than a series of emails or impersonations. It includes intimate relationship building – the purposeful research and reconnaissance into a person’s life, feelings, thoughts, and culture. The doorway to social engineering success is not a firewall – it is the human response to stimuli. As such, we should analyze these attacks through a psychological lens.

In Human Cognition Through the Lens of Social Engineering Cyber Attacks, Rosana Montañez evaluates the four basic components of human cognition centered around information processing: perception, working memory, decision making, and action. Together, these pillars of cognitive processing influence each other and work together to drive and generate behavior. To illustrate by way of example: when driving on a highway, you first evaluate your surroundings. Where are the cars around you? Is there traffic ahead? What is the speed limit? Next, your working memory pulls information from past experiences: the last time there were no cars around you and you were below the speed limit, you were able to change lanes to go faster. With this information, you make a decision and, as the driver, perform the action of changing lanes.

In the context of cyber-attacks, social engineering is a form of behavioral manipulation. But how is the attacker able to access this complex system of cognition to change the action and behavior of the target? To further dissect cognition, Montañez considers how “these basic cognitive processes can be influenced, for better or worse, by a few important factors that are demonstrably relevant to cybersecurity.” These short-term and long-term factors may be the opening that attackers leverage to strengthen the success of their attack. Short-term factors include workload and stress; long-term factors include age, culture, and job experience.

In a recent study, researchers evaluated phishing behavior and the likelihood that an employee would click a phishing link. Those who perceived their workload to be excessive were more likely to click the phishing email. Cognitive workload causes individuals to filter out elements that are not associated with the primary task, and because cybersecurity is rarely the task being actively thought about, it is more likely to be overlooked. This effect is known as inattentional blindness, and it prevents a person from recognizing unanticipated events that are not associated with the task at hand.

Stress may also weaken an employee’s ability to recognize the deceptive indicators present in cyber messages or phishing emails. Other factors, such as age, culture, domain knowledge, and experience, can help predict the likelihood of being deceived. As most would expect, more cyber-security knowledge and more experience in a given job reduce the risk of falling victim to cyber-attacks. Similarly, risk tends to decrease with age thanks to accumulated job experience and cyber-security knowledge; eventually, though, the effect plateaus and inverts as seniors with less experience of modern technology become exposed. Interestingly, findings on the impact of gender and personality on cyber-attack susceptibility were inconclusive.

So how do we go about defending against cyber-attacks and improving the untrustworthy mind? The short answer is we don’t. As the age-old security acronym PICNIC suggests, the Problem exists “in the chair” and “not in the computer.” Across many different studies and the experiences of companies themselves, training methods that ask people to make conscious efforts to defend against social engineering cyber-attacks have been unsuccessful. If technological barriers don’t work and cognitive responses can’t be changed, then what is the answer? The solution requires addressing the condition that attracts the social engineer in the first place – data exposure. Companies that manage data exposure will reduce the attack surface, and thus, take the psychological advantage away from the social engineer.

Ethan Saia

Ransomware: Stealing your data for fun and profit

Ransomware is a form of malicious cyberattack that uses malware to encrypt the files and data on your computer or mobile devices. As the name suggests, the cyber-criminals behind the malware then demand a ransom in order to restore your data or your access to it.

Typically, you will be given instructions for payment and will in return be given a decryption key. The ransom amount may range from a couple of hundred to thousands of dollars, though you will most likely have to pay the cyber-fraudsters in cryptocurrency such as Bitcoin.

3 Types of Ransomware

Ransomware attacks range from mild to very serious. Here are the three types most often encountered:

Scareware

Contrary to its name, Scareware may be the least scary of the three. It involves a tech support cyber-scam via rogue security software. In this scenario, you may see a pop-up message on your screen claiming the security software has detected malware on your device and you can only get rid of it if you pay a fee.

If you do not pay, they will continue to bombard you with the same pop-up. But annoyance is the extent of the threat. Your files and device are absolutely safe and unaffected.

Note that legitimate cyber-security software will never solicit its users in this way. Real security products don’t charge you on a per-threat basis, and they would never ask for payment to remove a ransomware infection; after all, you already paid when you purchased the software. And logically speaking, if you never bought the software, how could it detect an infection on your device?

Screen Lockers

More of a real threat than scareware, screen lockers can lock you out of your PC entirely. If you restart your computer, you may see a bogus, full-screen United States Department of Justice or FBI seal with a message.

The message states that “they” have detected some sort of illegal activity on your computer, and you must pay the penalty fine. It should go without saying that neither the FBI nor any other government entity will lock you out of your device and/or demand money to compensate for illegal activity. Real suspects of crimes, whether perpetrated online or not, will always be prosecuted through legal channels.

Encrypting Ransomware

Unlike the other two, this type of ransomware mimics a real, offline ransom. In this scenario, cybercriminals snatch your files, encrypt them, and demand you pay a ransom if you ever wish to see your data again.

What makes this variation of ransomware so dangerous is that once the cyber-fraudsters encrypt your files, no security tool or system restore can really bring them back. In theory, abiding by the ransom demands will return your data, but there are no guarantees. Once they have your data and your money, you don’t have much leverage over them anymore. You can only hope these criminals are true to their word. It’s not a good bet.

How does Ransomware Work?

One of the most common methods of delivering ransomware is a phishing scam, in which ransomware arrives as an email attachment masquerading as coming from a trusted source. Once you download and open the attachment, the ransomware takes over your computer. Some more aggressive strains, such as NotPetya, instead exploit security holes to infect systems without needing to trick users at all. Other ransomware attachments come with social engineering tools to trick you into granting them administrative access.

There are several actions ransomware might perform, but the most common is to encrypt some or all of your files. You will then see a message on your screen stating that your files have been encrypted and can only be decrypted once you send an untraceable payment in cryptocurrency, usually Bitcoin. The only thing that can decrypt the data is a mathematical key, and that key is in the possession of the cyber-criminal.
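
To see why victims have so little leverage, here is a minimal, hypothetical sketch using the third-party cryptography package: the same symmetric encryption that protects legitimate data makes ciphertext unrecoverable to anyone who does not hold the key. The filename string is a placeholder.

```python
# Minimal sketch of why encrypted data is unrecoverable without the key.
# Illustration only; requires the third-party 'cryptography' package.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()          # whoever holds this can decrypt
ciphertext = Fernet(key).encrypt(b"quarterly-financials.xlsx contents")

# With the key, decryption is trivial.
print(Fernet(key).decrypt(ciphertext))

# Without it, there is no practical way back; a wrong key simply fails.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("Wrong key: the ciphertext is useless without the original key")
```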

Another variation of ransomware attack is known as doxware or leakware. In this form of attack, the hacker threatens to publicly release sensitive data found on your hard drive unless you pay the ransom. This is less common, purely because finding sensitive data is often difficult and labor-intensive for cybercriminals.

Who Can Be a Ransomware Target?

Ransomware attackers choose their victims, whether individuals or companies, in several ways. The combination of the right victim and the loosest security will often drive a criminal’s decision.

For example, cyber-attackers may target universities or colleges because these institutions tend to have smaller security teams and lighter controls. Additionally, they have a disparate user base that relies on a lot of file sharing, making it easier for attackers to penetrate the defenses.

In other instances, some large corporations or organizations are tempting targets because they might be more likely to pay a ransom. For example, medical facilities and government agencies often require immediate access to their systems and files and can’t operate without their data.

Law firms and other agencies dealing with sensitive data may be more willing to pay to keep news of a ransomware attack on their network and database quiet. These are also the organizations most prone to leakware attacks, given the sensitivity of the information and data they hold.

However, even if you don’t fit any of the above categories, do not delude yourself. Some ransomware attacks are automatic and spread randomly without discrimination.

In Case You Are Under Ransomware Attack

If you ever fall prey to a ransomware attack, the number one rule to remember is “Never pay the ransom.” This advice is also endorsed by the FBI. Paying only encourages these cyber-fraudsters to launch further attacks against you or others.

Does that mean you are stuck? Yes and no. You may still be able to decrypt or retrieve some of your infected files using free decryptors, such as those published by Kaspersky. However, many ransomware attacks use sophisticated encryption algorithms for which no decryptor is available, and using the wrong decryption script may further scramble your files. Pay close attention to the ransomware message and seek an IT or security expert’s advice on your next course of action.

An alternative is to download security software known for remediation and run a scan to remove the ransomware threat. This is only a partial solution: it will clean the infection from your system, but it may not recover your locked or lost files.

A screen-locking ransomware attack often leaves little choice other than a full system restoration. If this happens, you can also try running a scan from a bootable USB drive or rescue CD.

To thwart a ransomware attack in action, stay extremely vigilant. If your computer is slowing down for no apparent reason, disconnect it from the Internet and shut it down. Once you re-boot your computer (still offline), the malware will not be able to send commands to, or receive them from, its control server. Without a channel to extract payment or deliver an encryption key, the ransomware infection may stay idle. At that point, download and install security software and run a full scan to quarantine the threat.

Specific Steps for Ransomware Removal

In case your computer comes under a ransomware attack, you must regain access and control of your device. Here are some simple steps you must follow, depending on whether you use Windows, MacOS, or a mobile device.

Windows 10 Users

  • Reboot your PC in safe mode
  • Install anti-malware software
  • Scan your system to detect the ransomware file
  • Restore your system to a previous state

MacOS

In the past, the rumor was that Macs were “unhackable” due to their architecture. Sadly, this is not the case. Cyber-criminals dropped the first ransomware bomb on MacOS in 2016, known as KeRanger. This ransomware infected an app called Transmission; once the infected app was launched, it copied malicious files that ran covertly in the background. After three days of this stealth operation, it encrypted the user’s files.

Apple responded by updating its built-in anti-malware protection, XProtect. The lesson learned is that Mac ransomware is no longer theoretical. However, Mac users remain reliant on Apple to come up with solutions when problems occur.

Mobile Ransomware

It was not until the height of CryptoLocker’s popularity in 2014 that ransomware became a common threat on mobile devices. Apps are the common delivery method for malware on phones. A typical mobile ransomware attack displays a message that your smartphone has been locked due to illegal activity and that you must pay to unlock the device. If you fall prey to such malware, boot your smartphone in safe mode and delete the malicious app to regain control.

How to Prevent Ransomware

You can take several defensive measures that not only help prevent ransomware attacks but other social engineering attacks as well.

  • Keep your operating system and security software up to date and patched. This simple practice closes many of the vulnerabilities attackers exploit.
  • Do not install any software or grant it administrative rights unless you know exactly what it is and what it does with those privileges.
  • Install an antivirus program to detect malicious programs (and apps) in real time. A good antivirus may also offer a whitelist feature, allowing only trusted software to run automatically and preventing unauthorized software from auto-executing in the first place.
  • Last but not least, back up your files automatically and frequently (preferably in the cloud). Backups won’t prevent a ransomware attack, but they limit the damage and prevent permanent loss of your files; a minimal backup sketch follows this list.
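
As referenced in the last item, here is a minimal, hypothetical backup sketch. The paths are placeholders; in practice you would also keep copies offline or in the cloud where malware on the machine cannot reach them, and you would schedule the script rather than run it by hand.

```python
# Minimal, hypothetical backup sketch: copy a folder into a timestamped snapshot
# directory. SOURCE and BACKUP_ROOT are placeholders.
import shutil
from datetime import datetime
from pathlib import Path

SOURCE = Path.home() / "Documents"   # what to protect (placeholder)
BACKUP_ROOT = Path("/mnt/backup")    # e.g. a mounted backup volume (placeholder)

def snapshot(source: Path, backup_root: Path) -> Path:
    """Copy `source` into a new timestamped folder under `backup_root`."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    destination = backup_root / f"{source.name}-{stamp}"
    shutil.copytree(source, destination)
    return destination

if __name__ == "__main__":
    print("Snapshot written to", snapshot(SOURCE, BACKUP_ROOT))
```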

In any event, if you are not a tech-savvy individual or organization, seek advice from local IT and cyber-security experts. The best experts stay up to date on currently active ransomware strains as well as the security software you should be using.

Social engineering in the workplace

Everyone is familiar with the case in which the proverbial “little old lady” is duped out of her life savings by a villain contacting her through the phone or email. The “Nigerian Scam,” or “Advance-fee Scam,” is one such classic scam you may know. The victim is offered a large sum of money on the condition that they help the scammer transfer money out of their country.

The problem is that just knowing about these classic scenarios gives most people a false sense of security. The thought is, “It would never happen to me!” The first problem with this is that there are many types of these sorts of social engineering attacks that may not be so easy to recognize. The second problem is that most think this only happens at home.

In this article we will refresh our understanding of social engineering. We will review the currently known shapes and sizes of such attacks with a special focus on how they are used on employees in the workplace.

Social engineering: A review

Social engineering is a term that encompasses a broad spectrum of malicious activities. The common, defining attribute is the exploitation of the one weakness every person and organization has: human psychology. Instead of relying on programming and code, social engineering attackers use phone calls, e-mails, and other methods of communication as their main weapons. They trick victims into willingly handing over either personal information or an organization’s proprietary secrets and sensitive data.

Let’s focus on the seven most common social engineering attacks.

1.     Phishing

Phishing is one of the most common techniques. In most cases phishing uses fake forms and websites to steal vulnerable users’ personal data and login credentials. A phishing attempt commonly tries to accomplish one of three things:

  • Obtain sensitive personal information such as names, dates of birth, addresses, debit or credit card numbers, and Social Security numbers.
  • Redirect users to malicious websites by creating misleading, shortened links that lead to a phishing landing page.
  • Use fear, threats, and a sense of urgency to manipulate users into responding quickly without thinking rationally.

2.     Pretexting

As the name implies, in this social engineering attack the fraudster focuses on creating a fabricated scenario, or “pretext.” In a basic attack, the scammer typically claims they need certain information from you to confirm your identity. Once obtained, this information becomes the key to stealing more of your personal data and/or staging secondary attacks such as full identity theft.

In advanced pretexting, the target may be corporate. The key piece of information obtained may help them either exploit or abuse a company’s physical or digital weakness. For example, a cyber-fraudster may impersonate a third-party IT auditor and convince the targeted organization’s security team to grant them entrance into a secure building.

Pretexting fraudsters often masquerade as employees, such as HR or finance personnel. Such disguises help them access and target C-level executives. Verizon reported similar findings in its DBIR in 2019.

3.     Baiting

Baiting is somewhat similar to a phishing attack but is distinguished by the fraudster’s promise to give away an item or prize. Often the bait is as simple as free movie or music downloads, but claiming it requires the victim to hand over login credentials.

That’s not to say that baiting is strictly an online phenomenon; baiters will use physical media when required. In July 2018, KrebsOnSecurity reported a baiting campaign targeting local and state-level government agencies in the United States. The attackers sent out envelopes with Chinese postmarks containing a compact disc (CD) and a confusing letter. The idea was to exploit victims’ curiosity so they would insert the CD, which contained malware that would infect their computer systems.

4.     Quid pro quo

A quid pro quo attack is similar to baiting but whereas baiting promises goods, quid pro quo promises services. As an example, in recent years fraudsters impersonated the United States Social Security Administration. They contacted the targets, informed them there was an error in the system, and then claimed they needed the victims to confirm their Social Security Numbers. The ultimate goal was identity fraud using these credentials.

5.     Tailgating

Tailgating (also known as piggybacking) involves someone without appropriate authentication following authorized personnel into a restricted area. Often the attacker impersonates a delivery person and waits outside the target destination; when an unsuspecting employee gains access and opens the door, the attacker asks them to hold it open. This type of social engineering attack mostly targets mid-size enterprises, as most large companies use keycards for building access.

6.     Watering hole

Just as animal predators wait by their prey’s favorite watering hole, cybercriminals compromise websites popular with a target demographic in order to attack their visitors. If, for example, someone wanted to target financial services professionals, they might inject a popular financial site with malicious code. Merely visiting the site would compromise visitors’ browsers with code that could monitor their activities or even reach deeper into their systems and control their computers’ microphones and cameras.

7.     Vishing

Sometimes known as voice phishing, vishing is a type of attack in which a fraudster uses advanced IVR (interactive voice response) software on a standard telephone line to entice you into repeating your confidential information on a recorded call. Vishing is not only about requesting your data; it can also capture your voice to overcome any voice-activated defenses you may have access to within your company or for other services.

A common technique used along with IVR is to prompt a victim to provide passwords and PINs. Each time the victim enters a password or PIN, the system reports an incorrect attempt, causing the panicked victim to try several personal passwords, all of which the attackers harvest and exploit later.

Ways to Recognize a Social Engineering Attack

A social engineering “ask” is often recognizable as one of the following:

Someone asking for assistance

Social engineers are good at using language that instills fear and a sense of urgency. The idea is to rush you into performing an action with no time to think rationally. For example, someone urging you to carry out a wire transfer might be a scammer or hacker. Stop, think, and make sure you are conducting a legitimate transaction.

Asking for donations

Cyber fraudsters like to exploit your emotions and generosity by asking for donations to a charitable cause over the phone or by email, along with instructions for sending your donation, which goes straight to the fraudster’s account. These social engineers may first research your social media to learn the types of causes you support and find a better leverage point.

Asking for information verification

Another notorious tactic social engineers use is to present a problem that you can only solve by verifying your information, often via an online form requesting personal details. The message and form may look legitimate, with all the correct branding and logos, but the moment you enter your information, it goes straight to the social engineers.

Prevention from social engineering

There are five primary ways you can prevent yourself from falling for a social engineering attack:

Know your crown jewels

Learn the specific pieces of information, personal or corporate, that might be valuable to a social engineer or a hacker. Think of this information as the crown jewels. Identifying sensitive information allows you to set up walls to protect it.

In any corporate environment, the specific “crown jewels” may differ by department or person. Legal, IT, and Finance may each have sensitive areas or data that others in the company cannot access or do not even know about. This means social engineering protection applies to everyone.

Verify identities

Email hacking is a common threat in which attackers either imitate or take control of legitimate email accounts. For example, if there is an unexpected request to take action online, ensure that the person you are dealing with is legitimate by calling that person and confirming that they sent you the email message in question.
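
As a complementary technical check (an addition to the call-back advice above, not a replacement for it), you can inspect a suspicious message’s authentication headers. The sketch below uses Python’s standard email library; the file name is a placeholder for a saved raw message.

```python
# Inspect the Authentication-Results header of a saved raw email to see whether
# the sending domain passed SPF/DKIM/DMARC checks. "suspicious.eml" is a placeholder.
from email import policy
from email.parser import BytesParser

with open("suspicious.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

print("From:", msg["From"])
print("Return-Path:", msg["Return-Path"])
for header in msg.get_all("Authentication-Results", []):
    # Look for spf=pass / dkim=pass / dmarc=pass; failures, or a mismatch between
    # the From domain and the authenticated domain, are red flags.
    print("Authentication-Results:", header)
```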

Slow down

Social engineers will go to extreme lengths to instill panic, fear, and a sense of urgency in you. Never let anyone rush you or prevent you from taking the time to consider a request carefully. Treat any effort to push you into quick action as a potential red flag.

Verify before you click

If you see a shortened link, such as a bit.ly link, be wary. Such links are often used as carriers of malicious URLs or viruses. To verify whether the link is legitimate, check it using a link expander (search Google for “link expander” to find many easy-to-use services), or resolve the redirect yourself, as in the sketch below.
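
If you prefer to resolve the redirect yourself, here is a minimal sketch using the third-party requests package; the shortened URL is a placeholder. Some sites answer HEAD requests poorly, so treat this as a quick check rather than a guarantee.

```python
# "Expand" a shortened link by following redirects and printing the final
# destination, without ever opening it in a browser. Requires 'requests'.
import requests

def expand(short_url: str, timeout: float = 5.0) -> str:
    """Return the final URL a shortened link resolves to."""
    response = requests.head(short_url, allow_redirects=True, timeout=timeout)
    return response.url

if __name__ == "__main__":
    final = expand("https://bit.ly/example-placeholder")
    print("This link actually points to:", final)
    # If the destination looks unrelated to what the message promised, don't click.
```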

Education

The most crucial and effective preventive measure is subject matter knowledge. Continue to educate yourself on current malicious tactics – they are always changing. If you are a business owner, educate your employees on social engineering threats. The health of your business may depend on it.