Published on:
07 December 2024
In their own words: Insights from 3 experts in cybersecurity evaluations

When choosing a cybersecurity solution, it’s very difficult to know in advance whether it will perform as expected and with the level of quality you need.

While starting with well-known brands featured in Magic Quadrants, Waves and Compasses is advisable, it’s not enough on its own. A demo, a trial and a subsequent proof of concept or value will certainly help after an initial selection, but that’s still not the same as using the solution in real life.

Of course, you can always build a short list from what your professional network recommends, and even consider what your peers say on platforms like G2Crowd and Gartner Peer Insights, but that input might be biased, or the recommended solutions might not fit your security architecture.

So what can cybersecurity professionals do to better understand how their potential future suppliers will perform in real-life scenarios and help them improve their security posture? Enter the independent testing organizations, whose mission is to provide end users with an unbiased technical evaluation of cybersecurity solutions and support their decision-making process.

You might never have heard of them, but they are already influencing your choices, as many analyst firms and specialised media use their results as evaluation criteria in their research and reports.

Who are they, and what are they trying to accomplish?

To shed more light on their role in the cybersecurity industry, I had the opportunity to interview key people at three of the best-known testing organizations: Simon Edwards from SE Labs, Andreas Clementi from AV-Comparatives and Maik Morgenstern from AV-Test. The latter two have been active for more than 20 years!

Even though their goals differ to some extent, the three organizations share some common objectives: “Help users of IT security products by providing a reliable source for finding the best security solutions, whether for consumer and business use, across different platforms”, as Maik put it, and “foster continuous improvement amongst IT security vendors, supplying constructive feedback to security vendors, enabling them to refine and enhance their offerings continually”, as Andreas adds. The overall aim, as Simon concludes, is “helping to improve security for businesses and private individuals, tracking what the threat actors are doing and using that intelligence to test commercially available products and services”.

A survey recently published by AV-Comparatives shows that more than 60% of respondents consider the independence of the testing organisation, the transparent disclosure of testing methodologies, and the free availability of test results to be key factors in assessing the credibility of an evaluation lab.

Objectivity and independence are the most important traits of their work, according to Andreas, as they maintain the credibility of their services and position the labs as trustworthy, independent advisors in the fields covered by their evaluations.

The path towards their goals requires continuous improvement and commitment, as the market and threat landscapes are constantly evolving. When reflecting on how they are achieving their visions, all three are optimistic: they serve millions of individuals and businesses across the globe, they regularly work with the largest cybersecurity vendors, and, through direct engagement or indirect usage of their tests, their evaluations inform the decisions of industry leaders and analysts.

“It’s encouraging to witness our data driving decisions across various levels in the industry, from strategic decision making to product enhancements”, reflects Andreas. “We’ve collaborated with numerous security product vendors, assisting them in enhancing their offerings to better secure users and companies worldwide”, adds Maik.

Simon points out that, when engaging with cybersecurity vendors, they keep trying to demonstrate the value that testing gives to their customers, “although in some areas of security, vendors are either unused to testing or are resistant to being tested for purely (defensive) marketing purposes”.

In my opinion, this is a key aspect of their work. While independent benchmarks can be useful for end users to make decisions about their current or next cybersecurity solutions, vendors can benefit substantially from the work that AV-Test, AV-Comparatives and SE Labs conduct.

Collaboration and Communication between Testers and Tested

The direct collaboration between testing organizations and cybersecurity providers can generate invaluable insights that improve security products. “Vendors are acutely aware that if a vulnerability or an area of improvement is outlined in our assessments, it is underpinned by painstaking research and rigorous testing”, comments Andreas on this topic.

All of them are in regular contact with vendors to facilitate smoother interactions, share results, and address issues on both sides. This is key because, while testers strive to provide accurate benchmarks and evaluations, there are always limitations and constraints.

All of them are aware of this, however, and they implement different strategies to mitigate potential shortcomings. Maik mentions that AV-Test’s methodologies “simulate and automate user actions, mirroring threat actors’ behavior without shortcuts. This approach ensures a comprehensive evaluation while maintaining a balance between realism and repeatability.”

SE Labs, according to Simon, takes a different approach in its public tests: running red-team exercises structured around the MITRE ATT&CK framework, so people can see exactly what is tested, and using the same tools, techniques and tactics as the criminals. “We don’t automate, so when we test we’re basically behaving the same as the bad guys. Our clients expect us to do everything a criminal would, aside from actually stealing money or damaging their systems. So we damage our own test systems instead!”, he adds.

AV-Comparatives also works with automation and red-team techniques, depending on the test and its goals. Andreas mentions that they focus on three main areas:

  • Real-world relevance, making tests closely echo actual threats, scenarios and conditions;
  • Volume of test cases, as scale adds nuance and complexity and provides a wider landscape for analysis and for discovering potential weaknesses, so they employ a statistically relevant number of test cases in their evaluations;
  • Freshness and distribution of test cases, as staleness can dilute the effectiveness of a test; current threats are used, and the tests include multiple malware families to reflect the diversity of the current threat landscape.

How are these tests used, and by whom?

Besides end users consulting the publicly available results, and vendors and testers communicating and managing feedback through the testers’ private platforms, there are other entities that use the evaluations for different purposes.

According to the above-mentioned AV-Comparatives survey, users rely on tests as a primary source for decision making, followed by reputable media, and then by forums and other users’ recommendations.

Media, in particular specialised magazines, tend to publish rankings and evaluations of different technologies, some of them covered by the independent testing organizations. In many cases, test results are used partially or combined from different sources; in others, the media outlet commissions a particular evaluation from the testers.

Besides media, other regular users of these tests are analyst firms, which even consider them part of their knockout criteria for some reports and research. Firms like Gartner and Forrester regularly consult and/or refer to the results from AV-Test, AV-Comparatives and/or SE Labs.

When asked how accurately these third parties reflect the test results in their publications, views differ, particularly regarding how deep the understanding of magazines and media is compared to that of industry analysts. “Analysts tend to have a very good grasp of methodologies”, comments Simon on this topic.

Maik confirms that “While past misinterpretations have occurred, ongoing discussions with users of our results, such as computer magazines, help ensure accurate interpretations”, and highlights the importance of “continuous learning and engagement with stakeholders”.

On this topic, Andreas goes even further, reporting that they maintain regular communication with industry analysts and journalists through scheduled calls and discussions, as “ensuring they utilise our results correctly and make the most of the unique insights we provide is essential for achieving maximum impact in industry assessments”.

Virtually the whole industry uses the results from these tests, and inaccurate reporting can harm cybersecurity vendors and end users alike, so the focus of AV-Test, AV-Comparatives and SE Labs on ensuring the correct usage of their evaluations by third parties is paramount.

Other sources of cybersecurity solution tests

Beyond the above, many other entities conduct different types of evaluations and certifications. Popular YouTube cybersecurity channels regularly publish their own tests and reviews, often with dubious, unproven methodologies and questionable objectives, as in many cases they generate revenue from cybersecurity affiliate programs.

There are, however, other serious testing frameworks these days, like the MITRE Engenuity ATT&CK Evaluations, which take a different approach from those mentioned above. There are also certifications and buying guides provided by specialised entities, like Stiftung Warentest or Consumer Reports.

These have slightly different purposes and, sometimes, are also indirectly influenced by the results of the independent testing organizations.

The MITRE Engenuity ATT&CK Evaluations are very interesting because they focus on a particular threat actor, running comprehensive tests based on the MITRE ATT&CK framework to highlight how cybersecurity solutions identify the breach at every step and sub-step of the attack chain.

However, these evaluations are often misused by vendors. While every evaluation includes a note reminding readers that the results “don’t compare or rate providers or tools”, vendors do present their results as if there were a ranking based on the performance of the solutions.

In a conversation with Allie Mellen, Principal Analyst at Forrester, we agreed that there is broad consensus that the MITRE evaluations are misused by vendors, and that while they can be somewhat useful, their laser-focused approach on one particular type of attack might not be representative of the actual real-life performance of a given vendor.

As several things that other methodologies consider aren’t relevant to the MITRE Engenuity evaluations, there’s the possibility of vendors “gaming the system”, and users need to be aware of that.
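To make this concrete, here is a minimal Python sketch, using entirely hypothetical detection data, of the single “coverage” percentage vendors often derive from these evaluations: detected sub-steps divided by total sub-steps. The evaluations themselves publish no such ranking; vendors compute it.

```python
def coverage(detections: list[bool]) -> float:
    """Fraction of attack-chain sub-steps where any detection was produced."""
    return sum(detections) / len(detections)

# Hypothetical sub-step detection outcomes for two fictional vendors,
# over an attack chain of 109 sub-steps.
vendor_a = [True] * 90 + [False] * 19    # 90 of 109 sub-steps detected
vendor_b = [True] * 100 + [False] * 9    # 100 of 109 sub-steps detected

print(f"Vendor A: {coverage(vendor_a):.1%}")  # 82.6%
print(f"Vendor B: {coverage(vendor_b):.1%}")  # 91.7%
```

A higher percentage says nothing about detection quality, alert fidelity or noise, which is precisely why single-number rankings built on these results can mislead.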

What it means when vendors don’t participate in tests

If you visit the websites of AV-Comparatives, AV-Test and SE Labs (or even MITRE Engenuity) now, you’ll see that not every large vendor participates in their tests. Consulting the latest Business (or equivalent) evaluation on each site, 9 providers participated in the most recent test from SE Labs, 16 in the one from AV-Comparatives and 14 in AV-Test’s. Participation is different in the latest MITRE Engenuity ATT&CK Evaluation, in which 31 vendors took part.

The high participation in the MITRE Engenuity ATT&CK Evaluations is largely connected to a fact that’s not widely known: if a vendor isn’t there, it won’t be considered in other industry research and reports. That’s not the case with the other tests, where appearing in at least one of them is enough.

Simon recommends that users consider this in their conversations with vendors: “If you don’t see your favourite vendor in our tests, it’s worth asking them why they aren’t included”. This is a good point because, in general, lack of participation in a respected test might be connected to the real-life performance of the cybersecurity solution. In other cases, it can stem from previous disagreements between tester and tested over the evaluation methodology.

The latest score isn’t everything; consistency is

In my opinion and experience (I have been on the vendor side of things), there’s something very important to consider when using test results in the decision-making process for selecting a cybersecurity solution: consistency.

The performance that a vendor shows across different tests, and over time, is far more important than the results in one particular test.

It’s common for cybersecurity providers to make a lot of noise about the results of one evaluation where they came out on top, while ignoring those where they scored poorly in the past.

That behaviour makes sense from a business point of view, but it limits the user’s ability to form a clear picture of a vendor’s potential real-life performance. When you compare one vendor across different tests, you will see that its performance isn’t the same. For instance:

  • Sophos scored 100% in SE Labs’ Essential Endpoint Security for Enterprise evaluation, receiving an AAA award; earned 6 Protection points from AV-Test in the Windows 10: December 2023 test; but had a 98% protection rate in AV-Comparatives’ latest Business Security test.
  • CrowdStrike, on the other hand, wasn’t evaluated in the latest AV-Test, while it was tested by SE Labs and AV-Comparatives with diverse results.
  • SentinelOne didn’t participate in any of the latest business editions of their evaluations (but did in some previous editions).

Participation in most of the tests, consistency across them (considering the different methodologies used) and performance over time are the factors that will help an end user understand what results they can expect from the cybersecurity vendors they are considering.
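The consistency argument can be sketched numerically. The snippet below uses hypothetical scores on each lab’s typical scale (percentages for SE Labs and AV-Comparatives, up to 6 Protection points for AV-Test), normalises everything to a common 0–1 range, and computes the mean and spread; a low spread across labs and over time is what signals consistent real-life performance.

```python
from statistics import mean, pstdev

# Hypothetical results for one vendor across several test editions.
results = {
    "SE Labs":         [100, 98, 100],   # percentages
    "AV-Comparatives": [98, 99, 97],     # percentages
    "AV-Test":         [6.0, 5.5, 6.0],  # Protection points (max 6)
}
# Each lab's maximum possible score, used to normalise to 0..1.
scales = {"SE Labs": 100, "AV-Comparatives": 100, "AV-Test": 6}

normalised = [score / scales[lab]
              for lab, scores in results.items()
              for score in scores]

print(f"mean score: {mean(normalised):.3f}")
print(f"spread (std dev): {pstdev(normalised):.3f}")  # lower = more consistent
```

This is only an illustration of the reasoning, not an official methodology: real scores would also need weighting for methodology differences and test recency before being compared like this.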

Conclusion

The role of independent testing organizations is extremely relevant for the cybersecurity industry and for end users, as they are the only ones that can give decision makers a technical opinion on how vendors could perform in real life.

Over the years, entities like SE Labs, AV-Comparatives and AV-Test have steadily improved their frameworks to provide the best results they can, helping users make decisions and collaborating with vendors to strengthen their solutions.

The results of these organizations are freely and publicly available, as Simon Edwards highlights, so there’s no obstacle to using them. Maik notes that prioritising, or at least considering, solutions certified by their organizations will help end users identify vendors committed to rigorous testing. Andreas adds that these results are a guide, not definitive rules, as the ideal solution will always depend on the context of each user or organisation.

I definitely agree with them on all those points, and more end users should understand how much value there is in these independent tests, how to interpret them, and how to use them regularly. Moreover, it would be great to see more cybersecurity areas evaluated as rigorously as those covered by these entities.

That will provide more data points for end users which, combined with industry analyst reports and peer review platforms, will help them make informed choices and, subsequently, improve their overall security posture. This is the endgame for everyone involved: making sure everyone is as secure as possible.

About the author

Tom Bastiaans

team@thecyberhive.eu

