# User Awareness

## Preliminaries

Whenever users come into the context of computer security, the buzzword is “awareness!” The user must be aware of potential dangers and security risks that might occur. In this article I will question this dogma!

For 20 years the security community has told us that user awareness is essential to the security of systems. Since then, nothing has really changed. There are headlines every week about systems that got compromised due to misconduct or simple negligence, e.g., by ignoring guidelines for the complexity of passwords. A system always has a non-zero number of employees who do not follow guidelines or recommendations. Time has shown that one employee not following those guidelines or recommendations is sufficient for a successful attack. The key points I want the reader to think about are:

• Is it possible to remove the user from the security chain? Is it possible to implement technical security mechanisms that enforce policy compliance or make it nearly impossible for the user to harm the system?
• How much effort does it take to make all users of a system aware of their actions regarding security? Is it worth the effort?
• How many users do I have to train at what specific level in order to get a secure system? Can I reach the same level of security with other mechanisms and less effort?

One thing in advance: There is no right or wrong in user awareness. In my opinion it depends on the context of the individual and the system!

One thing that helped me think about this challenge is the course “Introduction to System Safety Engineering and Management” at the Ruhr University Bochum. It offered instruments that help categorize situations/systems - a system in this terminology is not only the technical part but also all interaction with the real world, including employees and other persons - and give them attributes. This is especially useful for answering questions of the type: What happens if…? It helps in understanding a problem/risk/hazard.

## Definitions and Terminology

The following terms are theoretical and have nearly nothing to do with reality. However, they are very useful for establishing limits, which help narrow a concrete situation down.

### Types of persons

• $P_{aware}$: A person who is aware of every aspect of security. This person would never do anything that might potentially harm the system.
• $P_{notAware}$: A person who is not aware of anything regarding security. This person might do anything that can harm the system in any way.

A user - $P_{aware}$ or $P_{notAware}$ - is never an administrator.

### Types of systems

• $S_{secure}$: A system that is 100% secure in every respect. The system cannot be harmed even if a user wants to harm it.
• $S_{notSecure}$: A system that is not secure and can be exploited in any possible way.

A system - $S_{secure}$ or $S_{notSecure}$ - is a device or service the user works with.

### Types of attackers

• $A_{local}$: A local attacker who is able to exploit every local security hole. This attacker may have gained physical access to the machine.
• $A_{remote}$: A remote attacker who is able to exploit every remote and local security hole. This attacker does not need physical access to take over a system, if it has security holes.
• $A_{none}$: No attacker in this scenario.

## What if…

With the above terminology, the following scenarios - without an attacker, with a local attacker, and with a remote attacker - can be built and analyzed. These scenarios define the limits that help narrow a concrete situation down.

| #  | Person    | System     | Attacker |
|----|-----------|------------|----------|
| 1  | aware     | secure     | local    |
| 2  | aware     | secure     | remote   |
| 3  | aware     | secure     | none     |
| 4  | aware     | not secure | local    |
| 5  | aware     | not secure | remote   |
| 6  | aware     | not secure | none     |
| 7  | not aware | secure     | local    |
| 8  | not aware | secure     | remote   |
| 9  | not aware | secure     | none     |
| 10 | not aware | not secure | local    |
| 11 | not aware | not secure | remote   |
| 12 | not aware | not secure | none     |

Let me explain the table briefly, step by step: Rows 1-3 are secure in all terms due to the definitions of $S_{secure}$ and $P_{aware}$.

Rows 4 and 5 list an $S_{notSecure}$ with a $P_{aware}$. In these scenarios user awareness reduces the attack vector, but it does not eliminate it! Other threats still exist, e.g., remote code execution.

Row 6 is like rows 4 and 5, but with different threats: a hard disk failure, connection failures, water damage, etc. can lead to a loss of availability or integrity.

Rows 7-9 are basically like rows 1-3. Even if the user acts without caution, the system stays secure. This is interesting, since the user may do anything he is allowed to without being able to harm the system - see the definition of the system.

Rows 10-12 show a totally insecure system where anything might happen.
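The row-by-row reasoning above can be sketched as a small enumeration. The following Python snippet is an illustrative simplification of my own - the `assess` function and its outcome labels are not part of any formal model - that derives a verdict for each of the twelve scenarios from the definitions:

```python
from itertools import product

# The three dimensions of the scenario table, in table order.
persons = ["aware", "not aware"]
systems = ["secure", "not secure"]
attackers = ["local", "remote", "none"]

def assess(person: str, system: str, attacker: str) -> str:
    """Apply the article's row-by-row reasoning to one scenario."""
    if system == "secure":
        # Rows 1-3 and 7-9: by definition, S_secure cannot be harmed,
        # regardless of the user's awareness.
        return "safe"
    if attacker == "none":
        # Row 6 (and 12): no attacker, but availability/integrity can
        # still be lost (disk failure, water damage, ...).
        return "at risk (non-malicious threats)"
    if person == "aware":
        # Rows 4-5: awareness reduces the attack vector, but other
        # threats such as remote code execution remain.
        return "at risk (reduced attack vector)"
    # Rows 10-11: nothing mitigates the threat.
    return "at risk (full attack vector)"

# Enumerate all twelve rows in the same order as the table.
for n, (p, s, a) in enumerate(product(persons, systems, attackers), start=1):
    print(f"{n:2}  {p:9}  {s:10}  {a:6}  -> {assess(p, s, a)}")
```

The point of the sketch is that only one of the three dimensions - the system being secure - guarantees a safe outcome on its own; awareness merely changes the flavor of the risk.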

### Discussion of the scenarios

This leads to some interesting points. On the one hand, a system that is 100% secure cannot be compromised. On the other hand, even if the user is always aware of everything, the system might still be compromised by any threat that does not require user interaction. Examples are remote code execution and force majeure such as water or fire.

Is it therefore more promising to invest in a more secure system than into user awareness?

A system is never 100% secure; security risks always exist. That is exactly the point where user awareness might play an important role. Is it worth more to train the user or to harden the system? This question cannot easily be answered, since it depends on the size of the whole infrastructure/system.

One of the main questions that got me here was: Does the picture of the chain, where the user is one link, still apply nowadays? In other words: Is it possible to keep the user out of any decision that carries a security risk? Let’s consider a tiny system: a user has a clean PC and cannot interact with any other system - no internet, no wires, no wireless, … Quite simple. Without a backup, the following might happen:

1. Possible loss of all data due to a power surge.
2. The user accidentally deletes everything.

We - already in this tiny system - have things the user has control over and things that are out of his control. Now the system can be extended to a full international corporate network, but the basics stay the same. With each component/step/extension of the system, you have to ask two questions:

1. Is this something the user should be responsible for?
2. Can we eliminate the user as a security risk by implementing technical mechanisms, or by using security by design for future systems? See rows 7-9 of the table.

Over the last two decades, user awareness was considered the most important part of a system regarding security. In those years nothing really changed; it is all still the same. The user is still a weak part of the system, although the community has tried to establish awareness. User awareness would be a good idea if it worked… but it doesn’t… In my opinion, the security community shouldn’t rely on the user’s way of thinking about security. The user didn’t change his mind about security, and he won’t do so in the future. If the community relies on this dogma, it is doing a bad job. A system must be secure by design.

The security community should rethink this dogma. It may be more expedient to invest in mechanisms that keep the user out of the chain in more and more parts of the system.

## Further notes

An often asked question concerns contracts that allow employees to use private applications, mail, and other private services on a business device. The problem here: the contract between the employee and the employer opens an attack vector. Eliminating that attack vector is complex, because it is built into the system by human law. However, it is not part of this article, as it has nothing to do with user awareness in the first place.