August 20, 2021

About IT system security: it’s best to learn from others’ mistakes

The security of information systems is a topic regaining public attention much like a pair of jeans that spent the past decade in the closet before becoming fashionable again. Unfortunately, we are reminded of it not by a celebrity or influencer, but by hacking stories in the media.

Much has been analyzed and discussed about the security loopholes exposed in recent months, when the private information of a Lithuanian car-sharing platform’s users was leaked. Now that the first emotions have calmed down, and some have forgotten the situation entirely, it is worth remembering that such events hold lessons not only for the consumers who entrust their personal data to companies, but also for the companies that store and process that data.

Ensuring the security of IT systems is a continuous process

The sayings “build it and forget it” or “if it works, don’t touch it” are in no way compatible with IT security. Unfortunately, they can still sometimes be heard, mostly at companies whose core business is not IT. Consistent system maintenance and timely installation of updates should be standard practice to keep IT systems running smoothly.

You won’t find a world-renowned software vendor that has never shipped a security vulnerability in its products. The mark of a good vendor, however, is that when it learns of a vulnerability, it fixes the flaw immediately and releases an updated version of the product.

Continued neglect of security updates, especially when a company uses multiple IT systems, can make your business an easy target. WannaCry and NotPetya, both from 2017, are great examples of how operating system updates left uninstalled, for one reason or another, can bring even large companies to their knees.

However, it is not only operating systems and application software that need adjustments. In recent years, we have repeatedly encountered security vulnerabilities in CPUs (Meltdown, Spectre, and their later variants). These are the kinds of flaws that even IT professionals with years of experience would not have anticipated.

It is important to remember that timely system updates are just the first step. As time goes on and people immerse themselves deeper into computer science, it becomes evident that some practices can be made more effective and that some things used today won’t work in the future. The good practices used in developing IT infrastructure and IT products are therefore changing as well, and the change is often driven by the very researchers and programmers who study IT security. If we used to think that sitting in a company’s internal network behind multiple firewall layers made us safe, we have now come to realize that our “secure” internal network is not that secure. To strengthen the security of user identities, we should therefore practice a “zero trust” model and enforce strict access control.

Sometimes vulnerabilities are also found in protocols themselves, or the protocols simply become outdated. If a vulnerability is found in one implementation of a protocol, it is often limited to a single manufacturer, or even a single product of that manufacturer. If the loophole is in the protocol itself, however, it affects every vendor whose software implements it. This was the case with SSL and earlier versions of the TLS protocol. Given that these protocols are designed to secure the exchange of data every time a customer visits a website, the impact of such vulnerabilities can be very serious.
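As a minimal sketch of what this means in practice, the snippet below uses Python’s standard ssl module to refuse the protocol versions in which such design flaws were found; the host name example.com is only a placeholder.

```python
import socket
import ssl

# Client-side TLS context with secure defaults.
context = ssl.create_default_context()

# Refuse legacy protocol versions with known design flaws:
# requiring TLS 1.2 or newer rules out SSLv3 and TLS 1.0/1.1.
context.minimum_version = ssl.TLSVersion.TLSv1_2

# example.com is a placeholder host for illustration.
with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print("Negotiated:", tls.version())  # e.g. "TLSv1.3"
```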

Encryption algorithms and cryptographic hash functions, which are the basis for secure data exchange and storage in information systems, also deserve a special mention. Even if no inherent design problems or security vulnerabilities are found in an algorithm, processors naturally keep getting faster, so every encryption algorithm inevitably becomes easier to “hack.” This is one of the reasons the DES algorithm has long since stopped being used to encrypt data, and one day we may have to replace the widespread AES with something else as well.
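For illustration, here is a minimal sketch of authenticated encryption with AES-256-GCM, the kind of modern construction that replaced DES; it assumes the third-party cryptography package for Python, and the payload is a placeholder.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# AES-256-GCM encrypts and authenticates the data in one operation.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

nonce = os.urandom(12)            # a fresh nonce for every message
plaintext = b"example payload"    # placeholder data
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Decryption fails loudly if the ciphertext was tampered with.
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
```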

Around five years ago, a number of cipher suites previously considered safe for establishing an encrypted data channel between a client and a server stopped being truly secure. The first practical collisions of the SHA1 hash function and the discovery of more efficient attack methods contributed to this as well.

If the SHA1 acronym looks familiar, chances are you have seen it in the media, on LinkedIn, or in IT professionals’ blogs in the last month.

Like other one-way hash functions, SHA1 was often used to store a representation of a password rather than the password itself: at login, the system hashes the password the user enters and, if the result matches the stored value, confirms that the user may log in.

Because the hash is a one-way function, the real password cannot be extracted from the value stored in the database. It is possible, however, to pre-generate a huge list of passwords together with their hash values (better known as a “rainbow table”) and, knowing the hash of the password you are looking for, simply check it against the table. Although this is not a simple process, so many rainbow tables are publicly available that the actual password is supplemented with extra text, better known as “salt” or “pepper,” before the hash function is applied, making the real password much harder to recover.
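As a minimal sketch of this scheme, the snippet below uses Python’s standard hashlib to store a salted, deliberately slow hash instead of the password itself; the function names and iteration count are illustrative choices, not a prescription.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only these, never the password, are stored."""
    salt = os.urandom(16)  # a unique salt defeats precomputed rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-hash the entered password and compare it with the stored digest."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```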

New technologies are not always compatible with old working methods

The emergence of the public cloud has made it possible for organizations to get the infrastructure they need quickly and to change it smoothly as their needs evolve. It has also enabled them to release software to their customers much faster.

However, public cloud providers are making IT professionals learn things all over again, as they can now choose from a range of ready-to-use services that previously had to be configured from scratch. A similar re-learning applies to security professionals, who must watch not only for potential gaps in the products themselves but also after the cloud platform’s configuration, its interfaces, and access rights. Otherwise, doors can be left wide open that did not even exist when the data sat in your own data center.

There are many examples in the media of organizations that stored their data on public cloud infrastructure and had that data leaked. While the headlines may have given the impression that the cloud providers were to blame, the truth was that negligent configuration of the service allowed access to the files.
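A classic case is an object storage bucket accidentally left readable by everyone. As a hedged sketch, assuming AWS S3 and the boto3 SDK, the audit below flags any bucket whose ACL grants access to the whole internet:

```python
import boto3

# URI that S3 ACLs use to mean "anyone on the internet".
PUBLIC_GRANTEE = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        if grant["Grantee"].get("URI") == PUBLIC_GRANTEE:
            print(f"WARNING: bucket {bucket['Name']} grants "
                  f"{grant['Permission']} to everyone")
```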

Over time, cloud computing vendors have introduced a number of assistive tools that automatically warn administrators about configurations that do not match best practices. These certainly help customers with very little experience in public cloud services, although consultants with many years of experience in this type of project can raise a company’s safety further still. It is simply worth getting all the help you need, because the ultimate responsibility for configuration decisions falls on the organization entrusted with user data.

Expensive equipment and tools help only when they are used

A large and expensive firewall, antivirus software on every computer, centralized log storage – it seems you have bought everything the security consultants recommended, and the partners have helped set it all up. You must be safe now, right?

At the moment the consultants and integrators completed the work, this was probably true. However, a one-off investment will not be enough to maintain that level of security, and not just for the reasons listed above.

Even if all your security measures are configured to report suspicious activity automatically, someone has to respond to the generated alerts, and not just in the first month after the new systems are installed.

Over time, deviations from the initial configuration creep in: someone cuts a corner and grants temporary access, disables one IPS rule or another, or opens access to a server from the wider Internet. What could go wrong?

It is true that, at the time, such temporary compromises may bring more benefit than potential risk. The biggest problems arise when, through negligence or neglected processes, these temporary decisions are forgotten and become the new standard.

Rights management and accountability

Access to information systems should be granted only to those who need it, with rights no higher than required to perform the work, and with a record of which user carried out which actions. This is one of the basic rules for protecting your information systems and limiting the damage even in the event of a hack.

Even with unconditional trust in employees, no one is protected from a subordinate’s or colleague’s login details being stolen in a phishing attack, or from an employee reusing a password that leaked from a previously hacked third-party system. A minimal set of assigned rights helps reduce the potential impact on the systems, while proper authentication processes and the collection and storage of log records make it easier to establish who managed to hack the system and how.
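As a minimal sketch of least privilege combined with an audit trail, the snippet below checks every action against an assumed role-to-rights mapping and logs each decision; all the names here are hypothetical.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("audit")

# Assumed role-to-rights mapping: each role gets only what the job needs.
PERMISSIONS = {
    "support": {"tickets:read"},
    "billing": {"tickets:read", "invoices:write"},
}

def authorize(user: str, role: str, action: str) -> bool:
    """Allow the action only if the role grants it, and log the decision."""
    allowed = action in PERMISSIONS.get(role, set())
    audit.info("time=%s user=%s role=%s action=%s allowed=%s",
               datetime.now(timezone.utc).isoformat(),
               user, role, action, allowed)
    return allowed

authorize("j.smith", "support", "invoices:write")  # denied, and recorded
```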

It should also be taken into account that a user may leave the company, so there must be a clearly defined process for deactivating his or her access to information systems. Even if the employee leaves under the best circumstances, access that is never revoked becomes another hole that someone else could exploit over time.

The weakest link in a company’s security

It is no secret that if all information systems are well maintained, configured according to good modern practices, and updated in a timely manner, intrusions from the outside become very difficult and are often carried out only with very fresh (zero-day) vulnerabilities.

However, even the best-protected IT systems can fall to a well-prepared social engineering attack if employees are not equally well prepared for it. Knowingly or unknowingly, and sometimes driven by the best intentions, employees can become part of a company data leak or disruption.

It is important to mention that, because of the human factor, even IT professionals are not protected against indirect attacks – especially these days, when working from home has become common practice. There is a real chance of opening a gateway into the employer’s systems through vulnerabilities in personal equipment or the home network. That is exactly what happened at the beginning of 2012, when LinkedIn was hacked: the attack first compromised an engineer’s virtual server, which ran on the same home computer used to connect to the employer’s systems.

As a result, IT security efforts should not leave out staff training, vigilance checks, and proper control of the equipment that connects to the company’s IT systems.