Psychology is the social engineer’s best friend

Social engineering cyber-attacks have rocketed to the forefront of cyber-security risk and have wreaked havoc on large and small companies alike. Just like a Renaissance actor drawn to Shakespeare's works of genius, the modern social engineer is attracted to the ever-growing pool of information fueled by data brokers. These criminals ply their trade by exploiting an individual's vulnerabilities, and their tactics include phishing, baiting, scareware, and tailgating, to name just a few. What makes the social engineer unique is that their methods are designed to take advantage of common traits of human psychology.

Social engineers may simply send phishing emails to a target of their choice, or they may work to build a relationship with the target in person, through conversation, or even through spying. Most victims are guilty of nothing more than trusting someone. Take the case of Barbara Corcoran, the famous Shark Tank judge, who fell victim to a phishing scam in 2020 and lost roughly 400,000 USD. The social engineer simply posed as her assistant and emailed her bookkeeper requesting a renovation payment on a real estate investment.

In order to combat social engineering, we must first understand the nuances of the interaction between social engineer and target. First and foremost, we must recognize that social engineering attacks are a kind of psychological scheme to exploit an individual through manipulation and persuasion. While many firms have tried to create technical barriers to social engineering attacks, they have not had much success. Why? Social engineering is more than a series of emails or impersonations. It includes intimate relationship building – the purposeful research and reconnaissance into a person’s life, feelings, thoughts, and culture. The doorway to social engineering success is not a firewall – it is the human response to stimuli. As such, we should analyze these attacks through a psychological lens.

In Human Cognition Through the Lens of Social Engineering Cyber Attacks, Rosana Montañez evaluates the four basic components of human cognition centered on information processing: perception, working memory, decision making, and action. Together, these pillars of cognitive processing influence one another and work together to drive and generate behavior. To illustrate by way of example: when driving on a highway, you must first evaluate your surroundings. Where are the cars around you? Is there traffic ahead? What is the speed limit? Next, your working memory pulls information from past experiences: the last time there were no cars around you and you were below the speed limit, you were able to change lanes and go faster. With that information, you now have a decision to make, and as the driver you perform the action of changing lanes.
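To make that loop a little more concrete, here is a minimal sketch in Python of the perception, working memory, decision-making, and action stages using the lane-change example. The class and function names, and the simple matching rule, are hypothetical illustrations for this article, not anything defined in Montañez's paper.

```python
# Purely illustrative sketch of the perception -> working memory ->
# decision -> action loop described above, using the lane-change example.
# All names and the matching rule are hypothetical, not from the paper.

from dataclasses import dataclass


@dataclass
class Perception:
    cars_nearby: bool         # Where are the cars around you?
    traffic_ahead: bool       # Is there traffic ahead?
    below_speed_limit: bool   # Are you under the speed limit?


# Working memory: a prior experience recalled from a similar situation.
PAST_EXPERIENCE = {
    "conditions": Perception(cars_nearby=False, traffic_ahead=False,
                             below_speed_limit=True),
    "outcome": "changed lanes safely",
}


def decide(current: Perception) -> str:
    """Decision making: compare what is perceived now with what worked before."""
    if current == PAST_EXPERIENCE["conditions"]:
        return "change lanes"   # the action the driver commits to
    return "hold lane"


if __name__ == "__main__":
    now = Perception(cars_nearby=False, traffic_ahead=False, below_speed_limit=True)
    print(decide(now))  # -> "change lanes"
```

The point of the sketch is simply that each stage feeds the next: what is perceived is weighed against what is remembered, and the resulting decision becomes behavior.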

In the context of cyber-attacks, social engineering is a form of behavioral manipulation. But how is the attacker able to access the complex system of cognition to change the action and behavior of the target? To further dissect cognition, Montañez considers how “these basic cognitive processes can be influenced, for better or worse, by a few important factors that are demonstrably relevant to cybersecurity.” These factors fall into short-term and long-term categories and may be the opening that attackers leverage to strengthen the success of their attacks. Short-term factors include workload and stress; long-term factors include age, culture, and job experience.

In a recent study, researchers evaluated phishing behavior and the likelihood that an employee would click a phishing link. They found that employees who perceived their workload to be excessive were more likely to click the link in a phishing email. Cognitive workload causes individuals to filter out elements that are not associated with the primary task, and because cyber-security is rarely an active concern, it is more likely to be filtered out. This effect, known as inattentional blindness, keeps a person from recognizing unanticipated events that are not associated with the task at hand.

Stress may also weaken an employee's ability to recognize the deceptive indicators present in cyber messages or phishing emails. Other factors such as age, culture, domain knowledge, and experience have predictable effects on the likelihood of being deceived. As most would expect, having more cyber-security knowledge and experience in a given job reduces the risk of falling victim to a cyber-attack. Similarly, risk decreases as age increases, thanks to job experience and accumulated cyber-security knowledge. Eventually, however, the impact of age and experience plateaus and then inverts, as seniors with less exposure to modern technology become vulnerable. Interestingly, findings on gender and personality were inconclusive with respect to cyber-attack susceptibility.
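To pull these threads together, below is a toy Python sketch of how those short-term and long-term factors might combine into a rough phishing-susceptibility estimate. Only the direction of each effect comes from the discussion above; the weights, thresholds, and the 0-to-1 scale are invented for illustration and are not drawn from any study.

```python
# Hypothetical heuristic, for illustration only: high workload and stress
# raise susceptibility; experience and security knowledge lower it; the
# benefit of age plateaus and can invert for seniors. The weights and
# thresholds are made up, not taken from the research cited above.

def susceptibility_score(workload: float, stress: float,
                         years_experience: float, age: int) -> float:
    """Return a rough 0-1 susceptibility estimate for a phishing attempt."""
    score = 0.5                                  # neutral baseline
    score += 0.20 * workload                     # workload in [0, 1]: crowds out security cues
    score += 0.15 * stress                       # stress in [0, 1]: masks deceptive indicators
    score -= 0.05 * min(years_experience, 10)    # experience helps, with diminishing returns
    if age >= 65:
        score += 0.10                            # the age benefit plateaus and can invert
    return max(0.0, min(1.0, score))


if __name__ == "__main__":
    # A stressed, overloaded new hire vs. a calm, experienced employee.
    print(susceptibility_score(workload=0.9, stress=0.8, years_experience=1, age=28))
    print(susceptibility_score(workload=0.2, stress=0.1, years_experience=12, age=45))
```

Even a crude model like this makes the attacker's logic visible: target people at the moments, and in the circumstances, when their cognitive defenses are weakest.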

So how do we go about defending against cyber-attacks and improving the untrustworthy mind? The short answer is: we don't. As the age-old security acronym PICNIC suggests, the Problem exists In the Chair, Not In the Computer. Across many studies and the experiences of companies themselves, training methods that ask people to make conscious efforts to defend against social engineering cyber-attacks have been unsuccessful. If technological barriers don't work and cognitive responses can't be changed, then what is the answer? The solution requires addressing the condition that attracts the social engineer in the first place: data exposure. Companies that manage data exposure will reduce the attack surface and, in doing so, take the psychological advantage away from the social engineer.

– Ethan Saia
