The Army is recruiting smart young soldiers to wage cyber war.
But human talent is not enough.
Ultimately, say experts, cyberspace is so vast, so complex, so constantly changing that only artificial intelligence can keep up.
America can’t prevail in cyberspace through superior numbers.
We could never match China hacker for hacker.
So our best shot might be an elite corps of genius hackers whose impact is multiplied by automation.
Talent definitely matters – and it is not distributed equally.
There’s no other military profession, from snipers to pilots to submariners, that has such a divide between the best and the rest, he told last week’s International Conference on Cyber Conflict (CyCon), co-sponsored by the US Army and NATO.
One of the major lessons learned from the last 18 months standing up elite Cyber Protection Teams, he said, is the importance of this kind of “super-empowered individual.”
Such super-hackers, of course, exist in the civilian world as well.
One young man who goes by the handle Loki “over the course of a weekend…found zero-day vulnerabilities, vulnerabilities no one else had found in Google Chrome, Internet Explorer and Apple Safari,” Carnegie Mellon CyLab director David Brumley said.
“This guy could own 80 percent of all browsers running today.”
Fortunately, Loki’s one of the good guys, so he reported the vulnerabilities – and got paid for it – instead of exploiting them.
The strategic problem with relying on human beings, however, is simple. We don’t have enough of them.
“We don’t want to be in a person-on-person battle because, you know what, it just doesn’t scale,” Brumley told CyCon. “The US has six percent of the world’s population (actually closer to 4.4 percent).
Other countries, other coalitions of countries are going to have more people, (including) more people like Loki.”
That creates a strategic imperative for automation: software programs that can detect vulnerabilities and ideally even patch them without human intervention.
Brumley’s startup, ForAllSecure, created just such a program, called Mayhem, that won DARPA’s 2016 Cyber Grand Challenge against other automated cyber-attack and defense software.
However, that contest was held under artificial conditions, Brumley said, and Mayhem lost against skilled human hackers – although it found some kinds of bugs better and faster. So automation may not be entirely ready for the real world yet.
Even when cybersecurity automation does come of age, Brumley said, we’ll still need those elite humans.
“What these top hackers are able to do… is come up with new ways of attacking problems that the computer wasn’t programmed to do,” he said.
“I don’t think computers or autonomous systems are going to replace humans; I think they’re going to augment them.
They’re going to allow the human to be free to explore these creative pursuits.”
The emergence of AI in cyber security
Machine learning and artificial intelligence (AI) are being applied more broadly across industries and applications than ever before as computing power, data collection and storage capabilities increase.
This vast trove of data is valuable fodder for AI, which can process and analyse everything captured to understand new trends and details.
For cyber security, this means new exploits and weaknesses can quickly be identified and analysed to help mitigate further attacks.
AI can take some of the pressure off its human security “colleagues.” Analysts are alerted when action is needed, but can otherwise spend their time on more creative, fruitful endeavours.
A useful analogy is to think about the best security professional in your organisation.
If you use this star employee to train your machine learning and artificial intelligence programs, the AI will be as smart as your star employee.
Now, if you take the time to train your machine learning and artificial intelligence programs with your 10 best employees, the outcome will be a solution that is as smart as your 10 best employees put together.
And AI never takes a sick day.
It becomes a game of scale and leveraging these new tools can give enterprises the upper hand.
AI under attack
AI is by no means a cyber security panacea.
When pitted directly against a human opponent, with clear circumvention goals, AI can be defeated.
This doesn’t mean we shouldn’t use AI, it means we should understand its limitations.
AI cannot be left to its own devices.
It needs human interaction (“training,” in AI-speak) to continue to learn and improve, correcting for false positives and for cyber criminals’ innovations.
This hybrid approach already has proven itself to be a valuable asset in IT departments because it works efficiently alongside threat researchers.
Instead of highly talented personnel spending time on repetitive and mundane tasks, the machine takes away this burden and allows them to get on with the more challenging task of finding new and complex threats.
Predictive analytics will build on this by giving security teams the insight needed to stop threats before they become an issue, rather than reacting to problems after the fact.
This approach is not only more cost effective in terms of resources, but also is favourable for the business due to the huge reputational and financial damage a breach can cause in the long term.
Benefits of machine learning
Alongside AI, machine learning is becoming a vital tool in a threat hunter’s toolbox.
There is no doubt machine learning has become more sophisticated in the past couple of years and will continue to do so as its learnings are compounded and computing power increases.
Organisations face millions of threats each day, so it would be impossible for threat researchers to analyse and categorise them all.
As each threat is analysed by the machine, it learns and improves.
This not only helps protect organisations now, but compiles this valuable data for use in predictive analytics.
However, merely staying ahead of hackers and the threats they pose is not enough to protect organisations: the stream of new vulnerabilities and new devices coming online will make that harder and harder.
The continued and enhanced standardisation on data formats and communication standards is crucial to this effort.
Once data flows and formats are clearly defined, not just technically but also semantically, machine learning systems will be far better placed to effectively police the operations of such systems.
The industry needs to work towards finding the sweet-spot between unsupervised and supervised machine learning so that we can fully benefit from our knowledge of current threat types and vectors and combine that with the ability to detect new attacks and uncover new vulnerabilities.
Much like AI, machine learning in threat hunting must be guided by humans.
Human researchers are able to look beyond the anomalies that the machine may pick up and put context around the security situation to decide if a suspected attack is truly taking place.
Young humans, young military
“For those of you who are in the military who are 25 years old or younger, captains and below…you’re going to have to lead the way.
People my age do not have the answers,” the Army’s Chief of Staff said at CyCon.
After his speech, Gen. Mark Milley called lieutenants and West Point cadets up to the stage – but not captains, he joked, “you’re getting too old.” (He let the captains come too.)
The Army has rapidly grown its cyber force.
It now has 8,920 uniformed cyber soldiers, almost a ninefold increase since a year ago (and cyber only became an official branch three years ago, when it had just six officers).
There are also 5,231 Army civilians, 3,814 US contractors, and 788 local nationals around the world. All told, “there’s 19,000 of them,” Milley said. “I suspect it’s gonna get a lot bigger.”
At the most elite level, US Cyber Command officially certified the Army’s 41 active-component Cyber Protection Teams and the Navy’s 40 teams as reaching Full Operational Capability this fall, a year ahead of schedule. (We’re awaiting word on the Air Force’s 39).
At full strength, the teams will total about 6,200 people, a mix of troops, government civilians, and contractors.
To speed up recruiting, Gen. Milley wants to bring in cyber experts at a higher rank than fresh-out-of-ROTC second lieutenants – say, as captains.
Such “direct commissioning” is used today for doctors, lawyers, and chaplains, but Milley notes it was used much more extensively in World War II, notably to staff the famous Office of Strategic Services (OSS).
Why not revive that model?
“There’s some bonafide brilliant dudes out there. We ought to try to get them, even if it’s only 24 months, 36 months,” he said.
“They’re so rich we won’t even have to pay ’em.”
(That last line got a big laugh, as intended, but “dollar-a-year men” have served their country before, including during the World Wars.)
No matter how much the military improves recruiting, however, it will probably never have enough talent in-house. (Neither will business, which is short an estimated two million cyber professionals worldwide.)
So how does the military tap into outside talent?
One method widely used in the commercial world is bug bounties:
paying freelance hackers like Loki for every unique vulnerability they report.
(Note that the Chinese military runs much of its hacking this way.)
The Defense Department has run three bounty programs in the last year – Hack the Pentagon, Hack the Army, and Hack the Air Force – that found roughly 500 bugs and paid out $300,000.
That’s “millions” less than traditional security approaches, says HackerOne, which ran the programs.
What’s really striking, though, is the almost 3,000 bugs that people have reported for free.
Historically, the Pentagon made it almost impossible for white-hat hackers to report bugs they find, but a Vulnerability Disclosure Policy created alongside the bug bounties “has been wildly successful beyond anyone’s best expectation,” said HackerOne co-founder Alex Rice, “without any actual monetary component.”
So what’s motivating people to report?
For some it’s patriotism, Rice told me, but participating hackers come from more than 50 countries. In many cases, he said, hackers are motivated by the thrill of the challenge, the delight of solving a puzzle, the prestige of saying they “hacked the Pentagon,” or just a genuine desire to do good.
The other big advantage of outsourcing security this way, said Rice, is that volunteer hackers test your system in far more ways than any one security contractor could afford to.
“Every single model, every single tool, every single scanner has slightly different strengths, but also slightly different blind spots,” Rice said.
“One of the things that is so incredibly powerful about this model is that every researcher brings a slightly different methodology and a slightly different toolset to the problem.”
Those toolsets increasingly include automation and artificial intelligence.
Automation & AI
“I’m the bad news guy,” Vinton Cerf, co-inventor of the Internet, told the audience at CyCon. “We’re losing this battle (for) safety, privacy, and security in cyberspace.”
“The fundamental reason we have this problem is we have really bad programming tools,” Cerf said.
“We don’t have software that helps us identify mistakes that we make…
What I want is a piece of software that’s watching what I’m doing while I’m programming.
Imagine it’s sitting on my shoulder, and I’m typing away, and it says ‘you just created a buffer overflow.’”
(That’s a common mistake that lets hackers see data beyond the buffer zones they’re authorized for, as in the Heartbleed hack.)
Such an automated code-checker doesn’t require some far-future artificial intelligence: tools that already exist use what are called “formal methods” or “formal analysis” to define and test software rigorously and mathematically.
There are also semi-automated ways to check a system’s cybersecurity, such as “fuzzing” – essentially, automatically generating random inputs to see if they can make a program crash.
Artificial intelligence doesn’t have to be cutting-edge to be useful.
The Mayhem program that won DARPA’s Cyber Grand Challenge, for instance, “did require some amount of AI, but we did not use a huge machine learning (system),” Brumley said.
“In fact, NVIDIA called us up and offered their latest GPUs, but we had no use for them.” Mayhem’s main weapon, he said, was “hardcore formal analysis.”
“There is a lot of potential in this area, but we are in the very, very early stages of true artificial intelligence and machine learning,” HackerOne’s Rice told me.
“Our tools for detection have gotten very, very good at flagging things that might be a problem.
All of the existing automation today lags pretty significantly on assessing if it’s actually a problem.
Almost all of them are plagued with false positives that still require a human to go through and assess (if) it’s actually a vulnerability.”
So automation can increasingly take on the grunt work, replacing legions of human workers – but we still need highly skilled humans to see problems and solutions that computers can’t.