Through the eyes of a lizard | Mollis Group Limited

Through the eyes of a lizard

Former Cheltenham Researcher, and friend and associate of Mollis Group, “B4by L1z4rd” flips the narrative on Social Engineering for this month’s guest blog.

What is Social Engineering?

At its core, social engineering, or ‘SE’, is the construction and leveraging of influence to manipulate people into behaving as you want them to, making decisions to your benefit.

Anti-social engineers are the people and organisations who exploit our unique psychological traits to achieve these outcomes with malicious intent; in essence, Human Hacking.

Constructing suitable influence to manipulate people’s behaviour is achieved by exploiting the vulnerabilities of human nature, often either to retrieve information perceived as sensitive or to manipulate a person (or demographic of people) into delivering a specific effect.

These vulnerable psychological traits exist primarily in:

• The cognitive domain (how people uniquely perceive and analyse information);

• The social domain (an individual’s attention to ethics, social norms, social protocols, and their incumbent societal landscape).

So, why do organisations and hackers Socially Engineer targets?

In the context of information security, the confidentiality principle is the keeping of a person’s or entity’s information private. Individuals and specific demographics become targets because of the perceived value of their sensitive information, or the value that can be procured by compromising it. Examples include information required to gain further access to systems, networks, or physical locations, or information exploited for financial gain. The targets of a social engineer are often unaware of the valuable information they hold and are comparably careless about protecting it.

Conversely, individuals are not always targeted for their individual value but for the collective impact: influencing popular attitudes and social behaviours on a large scale.

Hacking humans with Anti-Social Engineering.

The primary strategy to achieve Human Hacking is to reverse engineer an individual or population, to find the ‘source code’ of their cognitive and social domains.

Consequently, this understanding of a person’s unique programming is utilised to influence their actions.

What makes people so vulnerable to influence?

Humans make an extraordinary number of ‘decisions’ daily, and we genuinely think about only a few of these decisions in detail, let alone analyse them for consequences or their implications.

Making decisions is psychologically hard; we often do not have the time, energy, or patience, to fully consider every decision before we act upon it. So, we make neural shortcuts for ourselves, particularly when it comes to interacting with others. Social Engineering takes advantage of those shortcuts.

Cialdini’s principles of influence (his original six, plus the later-added ‘unity’), used as the catalyst to exploit these shortcuts, are:

Reciprocity

People naturally don’t like to feel indebted to others. When we’re the recipient of a favour, we tend to try and repay it. Human hackers offer the target something of value, with the intent to open an interaction, catch their interest and, hopefully, receive something they have in return.

Scarcity

People are more likely to want things that they believe are in limited supply, are exclusive, or are not always available. The entire premise is to trigger a shortcut in the target’s decision-making.

Commerce examples include the special limited-time discounts on products a person didn’t know they wanted.

Authority

People do not like being uncertain and are ‘tribal’ in nature. We naturally look for and follow authority figures or a group with authority. The problem is that we have a broad definition of what constitutes an authority figure. Social engineers often act as an authority, impersonate the authority, or influence the authority themselves.

Liking

We listen to people whom we like and often look for synergies or ‘common ground’; this is a typical protocol of social interactions, based on the establishment of trust. This principle is why compliments can improve the odds of a future reciprocation.

Commitment

People like to maintain consistent behaviour, a routine. Because of this, a small action can lead to more significant actions. Often, this method is enabled by ‘preloading’ or ‘edging’ to trigger that small action.

Preloading is when ideas are ‘dripped’ to a target to act upon their subconscious. At the time of the planned manipulation, a neurological shortcut is triggered: ‘I have seen this before, so I do not need to think about it’.
Edging is when small changes are forced towards an end goal, moving a person’s ‘normal’. The aim is to strategically reduce the margin between their previous behaviour and the action required, bypassing further consideration.

Consensus

People tend to do what they believe everyone in their proximity is doing, referred to as ‘groupthink’, especially when they are unsure what to do in the first place. This rests on the presupposition that the group must have more insight and is therefore making informed decisions.

Unity

We gravitate toward people whom we identify as being similar to us, and it is also why we like it when we share an interest with somebody.

An Anti-Social Engineering attack

In practice, these principles of influence are often used in combination to form a basis for manipulation. There are fundamental facets of human behaviour which attackers exploit using the influence techniques already covered.

They are more than just attack vectors or bad behaviour; they are part of who we are as people. These principles modelled as the ‘four natures for social influence’ are the foundation for anti-social engineering attacks.

Uncomplicated Nature: Humans tend to filter out information they perceive as unimportant.

Assistive Nature: Humans tend to want to be helpful.

Familiar Nature: Humans prefer, and let their guard down in, ordinary circumstances.

Emotional Nature: Humans tend to allow emotions to influence or overpower decision-making.

Considering that a person is a biological machine, not dissimilar to an electronic computer, this process of manipulation becomes more straightforward. The original sense of ‘computer’ was, after all, “one who calculates”. Both humans and machines have inputs (interfaces or senses), a decision mechanism (a Central Processing Unit [CPU] or the striatum), and outputs (interfaces or actions).

This might presuppose that the method for hacking people is the same as hacking a computer. That is a misconception: computers are often exploited by seeking errors or flaws in how they operate, whether in the code they process or in how they process data itself.

It is impracticable to influence how a human operates directly at any moment in time. People are also far more unpredictable: a discovered flaw in the operation of an exploited computer is repeatable, whereas a human’s response to exploitation may never be the same again. Therefore, a human is confidently manipulated by controlling the information and stimuli they receive as inputs. Hacking humans is indirect in execution, not direct like hacking a digital device.
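The input-driven, indirect nature of this manipulation can be caricatured in a toy sketch (a minimal illustration of the analogy above; all names and the trivial ‘shortcut’ logic are hypothetical, not a model of real cognition). The attacker never touches the decision mechanism itself, only the stimulus fed into it:

```python
def decide(perception):
    """Decision mechanism: a crude neural 'shortcut' that trusts the familiar."""
    if perception.get("sender_familiar"):
        return "comply"      # shortcut fires: familiar input, no scrutiny
    return "scrutinise"      # unfamiliar input triggers deliberate analysis


def human(stimulus):
    """Inputs (senses) -> decision mechanism -> output (action)."""
    perception = {"sender_familiar": stimulus.get("spoofed_as_known_contact", False)}
    return decide(perception)


# The attacker controls only the input; spoofing a known contact is
# enough to route around the target's scrutiny entirely.
print(human({"spoofed_as_known_contact": True}))   # complies without analysis
print(human({"spoofed_as_known_contact": False}))  # scrutinises the request
```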

Methods of Social Engineering attacks

Phishing

Phishing is the most common type of social engineering attack in the modern day. It endeavours to accomplish three things:

• Obtaining confidential information, often personal, such as names, addresses, and credentials.

• Redirecting users to suspicious websites that host secondary attacks.

• Incorporating threats, fear, and a sense of urgency in an attempt to manipulate the user into responding quickly.
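These hallmarks of phishing (urgency language, credential requests, suspicious links) can be sketched as a toy triage heuristic. This is a minimal illustration only: the keyword lists, weights, and raw-IP check are assumptions for the example, not a production detector.

```python
import re

# Illustrative keyword lists, chosen purely for this sketch.
URGENCY = {"urgent", "immediately", "suspended", "final notice", "act now"}
CREDENTIAL = {"password", "login", "verify your account", "ssn"}


def phishing_score(message: str) -> int:
    """Return a crude suspicion score for a message (higher = more suspect)."""
    text = message.lower()
    score = sum(1 for w in URGENCY if w in text)       # fear and urgency
    score += sum(2 for w in CREDENTIAL if w in text)   # credential harvesting
    # A link pointing at a raw IP address is a classic phishing tell.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        score += 3
    return score


msg = "URGENT: your account is suspended. Verify your password at http://192.0.2.7/login"
print(phishing_score(msg))                    # scores well above a benign note
print(phishing_score("See you at lunch tomorrow."))
```

Real mail filters combine many more signals (sender reputation, header analysis, link reputation), but the principle is the same: score the shortcuts the attacker is trying to trigger.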

No two phishing attacks are the same. There are at least six different sub-categories (or methods) of phishing attacks.

• Deception – Impersonating a legitimate company or individual in an attempt to steal people’s data or credentials, using the influence principle of authority.

• Spear Phishing – Attackers customise their attack with the target’s name, position, company, work phone number, and other information often derived from other forms of intelligence, in an attempt to trick the recipient into believing that they have a connection with the sender. This utilises the unity principle.

• Whaling – Human Hackers can target anyone in an organisation, even executives. In these attacks, they try to ‘harpoon’ an executive and steal their login details. Whaling attacks work because executives often do not participate in security awareness training with their employees and often hold valuable information.

• Vishing – or Voice Phishing; a Social Engineer can perpetrate this type of attack by setting up a Voice over Internet Protocol (VoIP) server. They mimic various entities in order to steal sensitive data and funds. There are few technological protections to enforce trust in telecoms networks, and it is very easy to masquerade as another identity online.

• SMSishing – or SMS Phishing; this method leverages malicious text messages to trick users into clicking on a malicious link or handing over personal information. Attackers can insert their messages into existing chat conversations, making them more challenging to identify.

• Pharming – A Domain Name System (DNS) cache-poisoning attack: a pharmer targets a DNS server and changes the Internet Protocol (IP) address associated with a trusted website. The attacker can then redirect users to a malicious website of their choice, even if the victim enters the correct site name.
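To make the pharming mechanism concrete, the sketch below shows the kind of check that cache poisoning defeats: resolving a hostname and comparing the result against a published address set. The hostname and expected addresses here are illustrative assumptions (`localhost` and documentation-range IPs), not real infrastructure guidance; in practice this trust problem is addressed by DNSSEC and TLS certificate validation rather than hard-coded IP lists.

```python
import socket


def resolves_to_expected(hostname: str, expected_ips: set) -> bool:
    """Resolve hostname over IPv4 and check every address is expected."""
    try:
        addrs = {
            info[4][0]
            for info in socket.getaddrinfo(hostname, 443, socket.AF_INET)
        }
    except socket.gaierror:
        return False
    # Fail if resolution returned nothing, or any unexpected address.
    return bool(addrs) and addrs <= expected_ips


# With a poisoned cache, the resolved set would differ from the published
# addresses and this check would fail -- yet the user typed the right name.
print(resolves_to_expected("localhost", {"127.0.0.1"}))
print(resolves_to_expected("localhost", {"192.0.2.1"}))
```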

Some attacks are intentionally poorly crafted, to the extent that their messages suffer from spelling and grammatical errors. These target the societal ‘low-hanging fruit’. A victim of such a poorly crafted attack is an excellent target for further effort; in intelligence circles, they are designated a ‘UI’ (useful idiot).

Pretexting

Pretexting is another form of social engineering in which attackers focus on creating a fabricated situation that they use to try to lift their victims’ personal information. In these attacks, the attacker usually claims to need certain pieces of information from the victim to confirm their identity. In actuality, they acquire that data and use it to commit identity theft or to stage secondary attacks.
Whereas phishing attacks mainly use fear and urgency to their advantage, pretexting attacks rely on building a false sense of trust with the victim.

Baiting

Baiting is the promise of an item, service, or outcome that malicious actors use to entice or blackmail victims. Baiters may leverage the offer of free music or movie downloads, for example, to trick users into handing over their login credentials. Alternatively, they threaten to release personal information acquired through other methods.
Baiting attacks are not restricted to online schemes, either. Attackers can also focus on exploiting human curiosity via the use of physical media. For example, a USB drive with an alluring label – influencing a victim to insert it into their PC, a vector for further attacks.

Tailgating

A social engineering attack-type sometimes called “piggybacking.” In these attacks, someone without the proper authentication follows an authenticated person into a restricted area. Nor is it limited to the physical world: session hijacking is the digital equivalent of tailgating.

How to protect from the social engineering threat

Social engineering is an attack vector that relies heavily on human interaction and often involves manipulating people. Become cognisant of the attack methods, the threshold of personal influence, and the sentiment of information received. A highly skilled anti-social engineer is impossible to anticipate. Manage the risk posed by a Human Hacker by ensuring that human factors are not the only vector of security, and that technological measures compensate for vulnerable psychological traits.