Corporations have invested billions of dollars in “next gen” security devices built by some of the best minds in software today. Still, a handful of cyber threat actors, some of whom have no college education, continue to hack their way into organizations of every type. The math doesn’t seem right.
Is the threat landscape changing so fast that every new technology immediately becomes ineffective? We think the answer is “no”. The threat landscape has remained largely static since 2006; the last major shift was the rise of client-side vulnerabilities and network-controlled malware. But if the threat landscape is not changing that fast, why do current technologies fail to keep out the bad guys?
The most interesting question is not why current solutions initially miss these attacks, but how they eventually catch up. We all know that current security solutions miss zero-day and polymorphic attacks, yet then, magically, within some period of time (maybe days, maybe weeks) that same technology covers its mistakes and starts detecting the threat. Why is this? What exactly happens in those days or weeks?
The answer is simple: human effort. Security companies employ armies of researchers who carefully analyze huge data feeds, binaries, web traffic, domains, and more, then isolate the bad stuff by carefully filtering out the legitimate 99.99% of what they analyze. Signatures are created to identify the malicious content and are then distributed via a global database. This temporarily solves the problem, but today’s malware is polymorphic. It can change shape quickly, which means that when these systems compute a signature to match against their updated database, they are fooled again by the very same malware they missed in the first place.
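The weakness described above can be sketched in a few lines. This is a purely illustrative toy, not any vendor’s actual engine: the payloads, the `signature_db` set, and the use of a SHA-256 hash as the “signature” are all hypothetical, chosen to show why an exact-match signature fails the moment a polymorphic sample changes even one byte.

```python
import hashlib

# Hypothetical signature database: signatures of known-bad payloads.
signature_db = set()

def sign(payload: bytes) -> str:
    """Compute a signature for a payload (here, simply a SHA-256 hash)."""
    return hashlib.sha256(payload).hexdigest()

# A researcher analyzes a captured sample and publishes its signature.
original = b"MZ...malicious payload, build 1..."
signature_db.add(sign(original))

def is_known_bad(payload: bytes) -> bool:
    """Exact-match lookup, as in classic signature-based detection."""
    return sign(payload) in signature_db

# The original sample is now caught...
print(is_known_bad(original))   # True

# ...but a polymorphic variant (same behavior, one byte mutated) slips by,
# because its signature no longer matches anything in the database.
variant = b"MZ...malicious payload, build 2..."
print(is_known_bad(variant))    # False
```

Real signature schemes are more sophisticated than a whole-file hash, but the underlying race is the same: any exact pattern can be invalidated by a trivial mutation.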
Today’s threats are capable of evading current detection technologies, but they are not able to evade the researcher’s mind. A researcher doesn’t depend on signatures to find new malware. He can’t: no signature has yet been created, and creating one is his job. So how does he find the malware? How does he find the needle in the proverbial haystack? What cognitive process does he use to separate the good from the bad, the malicious from the non-malicious? And why can’t we just codify that cognitive process and build better systems?
Well, people are trying. They are turning to the latest buzzwords in technology: emulation, simulation, machine learning, artificial intelligence, big data; the list goes on. But the solution is much more complicated than choosing a particular technology. It’s about the implementation. No malware detection technology is inherently good or bad, but there are plenty of bad implementations.
A great implementation translates the security researcher’s deep human knowledge into a codified solution. This is exactly what we are trying to do here at SlashNext. We use a Knowledge-Based System (KBS) to simulate human cognitive thinking. Our system will only be as good as the minds of our researchers, but we have great researchers. They are the true unsung heroes of the war against cybercrime. So the next time our system stops an attack in your network, think a few good thoughts for the people whose brains are being simulated inside this system.
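To make the idea of codifying researcher knowledge concrete, here is a minimal rule-based sketch. Everything in it is hypothetical: the rule descriptions, the feature names (`domain_age_days`, `is_packed`, `modifies_autorun`), the weights, and the threshold are invented examples of the kinds of heuristics an analyst might articulate. It shows the shape of a knowledge-based classifier, not SlashNext’s actual detection logic.

```python
# Illustrative sketch: expert heuristics expressed as weighted rules.
# (All rules, features, weights, and the threshold are hypothetical.)

RULES = [
    # (description, predicate over a feature dict, weight)
    ("domain registered very recently",
     lambda f: f.get("domain_age_days", 9999) < 7, 3),
    ("binary appears packed or obfuscated",
     lambda f: f.get("is_packed", False), 2),
    ("process writes to autorun registry keys",
     lambda f: f.get("modifies_autorun", False), 4),
]

THRESHOLD = 5  # hypothetical cut-off chosen by the "researchers"

def score_sample(features: dict) -> int:
    """Sum the weights of every expert rule that fires on the sample."""
    return sum(weight for _, rule, weight in RULES if rule(features))

def classify(features: dict) -> str:
    """Verdict based on accumulated rule evidence."""
    return "malicious" if score_sample(features) >= THRESHOLD else "benign"

sample = {"domain_age_days": 2, "is_packed": True, "modifies_autorun": False}
print(score_sample(sample), classify(sample))  # 5 malicious
```

The design point is that each rule is a small, reviewable statement of analyst knowledge, so the system’s coverage grows as researchers add rules rather than waiting for a signature of each new variant.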