
Jonas Dellenvall, CTO at Advenica, shares his view on the security year ahead of us. Technological development is evolving at an incredibly fast pace, driven by the rise of AI, machine learning and quantum technology.

Let’s start by looking back on 2018, how’s the year been? The NIS directive? GDPR? What were the effects? Did it turn out the way we expected?

2018 was the year when information security really landed on management's agenda. Instead of being something handled by the IT department, it became a strategic matter: in terms of revenue, as an enabler for business, and in terms of costs and risks, in order to avoid fines for GDPR violations or a damaged reputation.

One of the biggest effects of GDPR is that many organisations have made serious changes to their processes. Many countries are improving their cyber readiness by implementing the NIS directive, or by changing their national security regulations.


So, what’s going to happen in the cyberworld during 2019?

The threats keep evolving, which means that more states are becoming competent actors with increased ability and capacity. There are no signs of any drastic change in this trend, rather a continuous growth. I believe 2019 will be the year when we start to take a serious look at how to approach the situation. Are we going to accept the risks, or are we going to do something about them, even if it means refraining from some of the benefits of digitalisation?

As a consequence of the increasing threats, we now have to assume that one or more attackers will succeed. It is therefore important to use solutions that protect even against threats that are not yet known. Building several layers of protection (defense in depth) with physical segmentation is one way of making it harder for attackers to succeed, regardless of their capability.

What would you say is the most challenging cybersecurity issue in 2019?

Making sure that critical infrastructure is robust, in the sense that it can’t easily be brought down from a distance. The process will take several years, but it will definitely get started in 2019.


The current state of the world means that the priority of civil and military defence will continue.

Cybercrime is constantly increasing. What needs to be done in order to break the trend?

The reason that crime is increasing is that it pays off, it's as simple as that. For criminals, the chances of getting caught, and the consequences if they are, are less palpable than the rewards of a successful attack. For a business operator, it can cost more to fix the security flaws than to absorb the losses. The most important player in changing this situation is therefore the political side: by changing the incentives for both criminals and businesses, the tide can turn. The risk of being a criminal needs to increase, and at the same time the incentives for business operators to actually enhance their security need to change.

As a business operator or subcontractor, you have a responsibility to choose a robust solution, even if an insecure solution could be more profitable in the short term. Choosing an insecure solution benefits criminals, which in the long term is costly for all of us.


Which methods can be used to increase your preparedness for IT-related cybercrime?

There is not much new in that area. Make sure you have well-established routines for backups and security updates. Through continuous security and consequence analysis, you can protect the right assets in the right way.

Cybersecurity is not a checklist or a state – it’s a process. By continuous and conscious work, it’s possible to achieve the right level of protection.

List your top three pieces of advice for an information security manager in 2019

  1. Don't risk it – make recurring security and consequence analyses in order to know what's worth protecting and why. That is the only way to avoid both unnecessary risks and unnecessary costs.

  2. Assume that all protection can fail and design your architecture accordingly. Use defense in depth and physical segmentation where needed.

  3. Do not allow yourself to become part of the cybercriminals' revenue chain. We are all part of the solution to create a more secure world.


Today, remote access is commonly used to connect to servers and networks at a distance. The functionality is there, but the control is missing. Inadequate knowledge and a lack of time to stay up to date on new security issues tend to make people rely on old routines.

A network administrator at an energy company is working from home and needs to access a server in the OT environment. What do they do? Use Microsoft's Terminal Services via Remote Desktop Protocol, or RDP as most people call it. RDP is installed by default on Windows computers and is a well-proven protocol that rarely causes compatibility issues. In the fall of 2018, the FBI, together with the Department of Homeland Security (DHS), published a warning exposing the risks of RDP and the increase in attacks carried out against the protocol.

This is not just a guide on how to minimise these risks. These are five steps for you to take control.

1. Shut the wide-open door

An unprotected RDP server is a cry for help. One of the greatest threats against RDP servers is the so-called brute-force attack, a technique in which attackers systematically and automatically test different passwords until they find the right one for the targeted user account. One scenario is that the attacker then extorts the organisation and demands a ransom, but often there are more harmful ulterior motives. The problem is not necessarily that the attacker has managed to get in, but how access to the server can be used in future springboard attacks, where backdoors are planted for later use. It is therefore important to have a password policy that temporarily blocks a user account after a number of failed login attempts.

The weakness is not necessarily a weak password, but the fact that a password is the only login method. For accounts with administrative privileges, you should instead use two-factor authentication. It is also important to differentiate between being a user and being an administrator: an ordinary user should not have access to administrative functions, and a list of which resources each user is allowed to connect to needs to be in place.
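As an illustration, on a standalone Windows host such a lockout policy can be set with the built-in `net accounts` command (domain environments would normally use Group Policy instead; the thresholds below are example values, not recommendations):

```shell
:: Lock the account after 5 failed login attempts
net accounts /lockoutthreshold:5

:: Keep the account locked for 30 minutes
net accounts /lockoutduration:30

:: Reset the failed-attempt counter after 30 minutes
net accounts /lockoutwindow:30

:: Show the resulting policy
net accounts
```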


2. Create overview and traceability

The first step is to control who you let inside your systems. Once someone has been let in, how do you make sure that they do what they said they would do? If activity in the systems isn't logged or monitored, tracing where an attack or threat came from is nearly impossible.

In an OT environment, there is often a combination of personal and functional accounts. By only allowing logins from personal accounts on the outside, i.e. remotely, you can create a complete audit trail and trace every mouse click on an individual level. Monitoring creates the opportunity to react to unusual activity in time. Logging provides the opportunity to see what led to the unwanted activity in the first place. Without these functions, you won't know whether someone has been inside the system or what they have done. No matter how good an overview you can create, it is still important to consider whether all functions should be available through remote access.
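As a sketch of what such logging can look like on a Windows RDP host, logon auditing can be enabled and recent failed logon events inspected with built-in tools (event ID 4625 is a failed logon; a successful RDP logon is event ID 4624 with logon type 10, RemoteInteractive):

```shell
:: Audit both successful and failed logons
auditpol /set /subcategory:"Logon" /success:enable /failure:enable

:: Show the 20 most recent failed logon attempts from the Security log,
:: newest first, as readable text
wevtutil qe Security /q:"*[System[(EventID=4625)]]" /c:20 /rd:true /f:text
```

Feeding these events into a central log system is what turns them into an audit trail that survives a compromised host.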

3. Keep an eye on the versions

Microsoft's RDP has been around since 1996 and the protocol has been updated a number of times since then, but many of the earlier versions have lived on for compatibility reasons. Older versions of the protocol, together with insecure authentication methods, create an opening for so-called man-in-the-middle attacks. This means that a third party places itself between two parties exchanging information and has the opportunity to intercept or change the information without either party noticing.

A first step is to use updated protocol versions and to evaluate which of them should be allowed during remote access. Perhaps even more important is to get rid of the old versions so that they can't be exploited in so-called downgrade attacks. In short: avoid implementation errors, make sure that the services are continuously updated, and configure them accordingly.
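As a concrete illustration on a Windows RDP host, Network Level Authentication and TLS can be enforced via the registry (these are the standard value names under the RDP-Tcp key; verify and test in your own environment before rolling out):

```shell
:: Require Network Level Authentication before a session is established
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v UserAuthentication /t REG_DWORD /d 1 /f

:: Require TLS (SecurityLayer = 2) instead of legacy RDP security,
:: which blocks clients that only speak older, weaker protocol variants
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v SecurityLayer /t REG_DWORD /d 2 /f
```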


4. Clearly define the border when sharing resources

Allowing unlimited sharing of resources between local and remote systems is usually the standard solution. These resources can be shared via drag and drop, USB, printers or file sharing. How can we guarantee that the information being transferred is secure and can't be used in a way that puts the security of the environment at risk?

Protecting sensitive information is perhaps not the OT environment's foremost task; maintaining the integrity of the plant, so that it functions without failure, is what is prioritised. A strict border between the environments means that you can control which information passes without depending on client-side implementations. Assume that the user's computer is not securely configured and take the necessary precautions.

The remote system should be kept separate from the local system, and only the actions strictly necessary to carry out the intended work should be allowed. You can, for example, ensure that the user can only send keystrokes and mouse clicks, or only view the screen, without being able to download the information itself. It's not only about seeing who transferred what and when, but also about controlling what the user is authorised to do.
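On a Windows RDP host, clipboard, drive and printer redirection can be switched off through the Terminal Services policy registry values shown below (these are the documented policy value names; check them against your Windows version before relying on them):

```shell
:: Disable clipboard redirection between the local and remote session
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v fDisableClip /t REG_DWORD /d 1 /f

:: Disable drive (file) redirection
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v fDisableCdm /t REG_DWORD /d 1 /f

:: Disable printer redirection
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v fDisableCpm /t REG_DWORD /d 1 /f
```

Settings like these only restrict the channel itself; a strict border between environments, as described above, does not depend on the client honouring them.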

5. Guarantee secure remote access

The advanced attack techniques of today mean that attack surfaces need to be reduced, thereby limiting remote access – but not excluding it. Only the users who need access should have it, and under limited conditions. In that way, you can prevent malicious code from entering the protected network and ensure that sensitive information doesn't leak out. Insufficient knowledge and lack of time to keep updated on new security flaws make it more likely that current processes simply carry on.
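A simple example of such limiting on a Windows host is a firewall rule that only accepts RDP connections from a dedicated management subnet (the subnet and rule name below are placeholders; adapt them to your own network):

```shell
:: Allow RDP (TCP 3389) only from the management subnet.
:: With the default inbound policy set to block, all other sources are refused.
netsh advfirewall firewall add rule name="RDP mgmt only" dir=in action=allow protocol=TCP localport=3389 remoteip=10.0.5.0/24
```

Placing RDP behind a VPN or a dedicated jump host narrows the exposure further than a firewall rule alone.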


At Advenica, we know that it takes more than that to ensure security: it requires a proactive solution where security flaws have been carefully taken into account. Guaranteed secure remote access does not have to mean no remote access at all. It is possible to create secure remote access, if it's done with control. But that control is not always possible with a standard solution.