Recent NSA leaks show challenge of a software ‘solution’ for insider threats
Two recent thefts of NSA documents were possible simply because workers who handled sensitive material decided to walk out the door with some of it. The incidents highlight the challenges facing the U.S. intelligence community as it seeks to implement, and in some cases create, next-generation insider threat programs.
Former U.S. intelligence officials tell CyberScoop the rudimentary nature of these incidents makes it extremely difficult to create programs that keep material secure without negatively impacting workforce morale.
“It’s impossible to totally stop from what I can see,” said a former U.S. intelligence official who spoke on condition of anonymity to discuss their experience. “There’s just way too many people walking in and out for nothing to get stolen.”
Newly released court documents provide details about recent leaks of classified documents to The Intercept, a national-security-focused news publication known for its work with Edward Snowden. A transcript of an interview with law enforcement suggests that Reality Winner, an NSA contractor, printed out several sensitive NSA documents, hid them in her underwear and then walked out of a Georgia-based NSA facility before sharing them with journalists. She also allegedly sought to contact the publication while at work.
The accusations against Winner echo those facing another contractor, Harold T. Martin, who allegedly brought troves of classified material from Fort Meade to his home in Glen Burnie, Maryland. Martin was charged by the Department of Justice with “willful retention of national defense information” earlier this year. He pleaded not guilty in February. It’s not clear why Martin took classified documents home. It’s believed he too walked the material out of an NSA building and did so undetected for a much longer period of time.
Concerns about these types of incidents are not new for the U.S. intelligence community.
While today’s insider threat detection technology is far from a silver bullet, advancements in this software have the potential to inform administrators of risks, industry executives told CyberScoop.
In the past, some U.S. intelligence agencies instituted weekly, random personnel searches at facility exits to see whether employees carried sensitive material. But the effort drew almost immediate backlash: employees voiced their frustration, and before long physical searches became less frequent. These experiences in part encouraged the introduction, in some cases, of software designed to discover so-called “early indicators” of leakers, thereby helping to avoid needless physical inspections.
One of the inherent issues in relying on software as the first line of defense, however, is that it’s often not foolproof. The technology’s existing pitfalls, experts tell CyberScoop, relate to a lack of visibility, analysis integration, data collection and contextualization of the gathered information.
“With more lines of code, more expansion of social networks and the more we openly share across those platforms, the need to ‘trust but verify’ to prevent insider threats increases exponentially,” said Joshua Douglas, chief strategy officer of Raytheon. “[In the future] physical and digital access control technologies will look like a scene from ‘Minority Report.'”
Douglas continued, “[effective insider threat] security depends on user behavior analysis, data introspection of highly sensitive assets and converging cyber and physical security … the first step is to establish high visibility – something most enterprises do not have – or if they do, it’s not very good.”
Visibility, in this context, refers to an employer’s ability to either indirectly or directly spy on an employee’s activities. Today, the monitoring of in-house workforce stations, business printers and other employer-owned devices is standard. In Winner’s case, watermarks left by a printer reportedly provided some clues about where the leaked documents came from. Other elements of the day-to-day work environment are also captured.
Depending on whom you ask in industry, contemporary insider threat detection software has existed for roughly 10 to 20 years — with the latest iterations more capable of synthesizing separate events, from lateness to work to odd web-browsing behavior, into assessment reports. Industry experts say the future requires far greater integration, though.
“From a technology point of view, minimizing the risk of classified material walking out the door due to insider threats and compromised accounts require a number of different technologies working in concert with each other,” explained Steven Grossman, vice president of strategy for cybersecurity firm Bay Dynamics. “Some of those technologies include data loss prevention, asset management, data classification, privileged access management, encryption, CASB and proxy. What’s common about all of them is that they are challenging and resource intensive to implement.”
Computer spies vs. human spies
While big contractors like Raytheon and Northrop Grumman have developed some of these software solutions, the market is full of smaller brands with innovative products that align with the government’s interests.
On Wednesday, Dtex Systems, a San Jose, California, tech firm with fewer than 100 employees, announced it had been awarded a two-year contract to provide insider threat detection products to the Defense Information Systems Agency (DISA). The firm’s products provide endpoint monitoring and behavioral analytics capabilities. A recent press release notes that Dtex Systems software is equipped to detect and flag odd lateral movement, unusual use of legitimate credentials and simultaneous login attempts.
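The kind of rule behind an indicator like “simultaneous login attempts” can be sketched in a few lines. This is purely illustrative — not Dtex’s implementation — and the event format, window size and host names are all invented for the example:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Toy rule: flag a credential that logs in on two different hosts within
# a short window -- a rough proxy for the "simultaneous login" indicator.
# The 5-minute window is an arbitrary choice for illustration.
WINDOW = timedelta(minutes=5)

def flag_simultaneous_logins(events):
    """events: iterable of (timestamp, username, host) login records."""
    by_user = defaultdict(list)
    for ts, user, host in sorted(events):
        by_user[user].append((ts, host))

    alerts = []
    for user, logins in by_user.items():
        # Compare each login with the next one by the same user.
        for (t1, h1), (t2, h2) in zip(logins, logins[1:]):
            if h1 != h2 and (t2 - t1) <= WINDOW:
                alerts.append((user, h1, h2))
    return alerts

events = [
    (datetime(2017, 6, 7, 9, 0), "alice", "workstation-12"),
    (datetime(2017, 6, 7, 9, 2), "alice", "server-03"),
    (datetime(2017, 6, 7, 9, 0), "bob", "workstation-40"),
]
print(flag_simultaneous_logins(events))
# → [('alice', 'workstation-12', 'server-03')]
```

Real products layer many such rules and, as the vendors quoted here note, the hard part is not writing any single rule but keeping the combined output from drowning analysts in false positives.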
“Securing both the public and private sectors has never been a greater challenge than it is today,” said Dtex Systems CEO Christy Wyatt. “When you are dealing with real people, you need to remember that they themselves can be unpredictable. So the great challenge is being able to cast a broad, scalable network that enables you to find that user intelligence that can see through the real risks and cut through the noise. In many cases, today’s tools have a hard time distinguishing real risk vs false positives and so they fall back on baseline rules.”
While insider threat detection technology is quickly evolving, it remains nascent and unreliable. This reality is evident to both developers and users.
“The intelligence by which systems operate must be improved, without an understanding of context then poor decisions are made which makes automation difficult,” explained Brandon Swafford, chief technology officer for Forcepoint, which offers insider threat software.
In most situations, detection programs must be fed large amounts of data to give administrators an accurate assessment of whether user behavior is abnormal on a given day. The question of how, from where and when these programs collect data from employees — and which of that data is relevant — is also difficult to answer.
“Most detection systems rely on low context data and lack context which creates high false positives and low value,” Swafford said. “Combining multiple sources of information to provide context to a user’s actions on a system can help reduce the friction a person encounters, as poor understanding leads to simple controls that are less effective and create delays in workflow.”
Another significant hurdle is integration, both with physical assets like security cameras and other software programs that may be running on an employee’s computer.
“There are few tools that provide visibility in the public cloud (AWS, Azure) or with SaaS providers like Salesforce, Workday, Office 365,” Swafford said. “This creates a huge gap once the data leaves the company and moves to the provider.”
As capabilities improve and integrations become available, developers insist that their detection software will allow for rapid action. But until then, gaps will continue to exist and other mechanisms will be necessary.
“More than 70 percent of all insider breaches are never prosecuted because the organization cannot prove who did it or how it was done. And even if they can, it is very hard to establish intent,” Wyatt told CyberScoop. “We are strong believers that what matters most is getting answers, not alerts, in real time, and in a form that is actionable.”