Vulnerability Management: False Positives, False Negatives, Technical and Logical Vulnerabilities, and Human Error


At edgescan, we have delivered thousands of assessments over the past few years, and one topic that is both a commonly known weakness and a recurring source of concern is the accuracy of assessment.

- The challenge is both human and technical:

  • Can the technology detect security weaknesses and report accurate findings?
  • Can the technology avoid reporting issues that are not real ("False Positives")?
  • Can the technology miss critical issues and simply not report the weakness ("False Negatives")?
  • In addition, once an issue is reported, will the human dismiss a real issue as a "False Positive" because they misunderstand it or cannot reproduce it, effectively turning it into a "False Negative"?


The majority of commercial and open source vulnerability scanning tools cannot provide reliable results on their own and require significant human validation, which can also fail (as above).


Simple Vectors:

Most tools can accurately discover simple vulnerabilities by sending a tainted request and analysing the response. If the response matches one of a number of typical expected patterns signifying a vulnerability, the scanner marks the parameter as vulnerable. This assumes the scanner actually gets to scan the vulnerable parameter, by virtue of knowing it exists in the first place.
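
As a rough illustration of this request/response approach, here is a minimal sketch of a reflected-XSS style probe. The endpoint, parameter name and payload are illustrative assumptions, not any particular scanner's actual detection logic.

    # Minimal sketch of a tainted-request check (illustrative only).
    import requests

    def probe_reflected_xss(url, param):
        # Use a unique marker so we can distinguish reflection from coincidence.
        marker = "probe-12345"
        payload = "<script>" + marker + "</script>"
        resp = requests.get(url, params={param: payload}, timeout=10)
        # If the payload comes back unencoded, the response matches an
        # "expected pattern signifying a vulnerability" and the scanner flags
        # the parameter. Encoded output (&lt;script&gt;...) would not match.
        return payload in resp.text

    # Hypothetical usage:
    # if probe_reflected_xss("https://example.com/search", "q"):
    #     print("Parameter 'q' reflects unencoded input - possible XSS")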

Crawling/Coverage Challenge:

A scanner discovers an application's layout by crawling/spidering the site, looking for hrefs and links to other pages and invocations of HTTP methods. Many scanners don't crawl applications very well and don't map the entire site. This is increasingly the case now that we have heavily front-loaded, JavaScript-driven web applications and single-page apps. Poor crawling results in less than optimal coverage, which means parts of a web application are not tested properly, if at all, leading us into the territory of "False Negatives".
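
To make the coverage gap concrete, below is a simplified, hypothetical href-based crawler. Everything it will ever test comes from anchor tags in the served HTML; any request constructed by JavaScript at runtime never enters its queue.

    # Minimal href-based crawler sketch (single-domain, page-limited, illustrative).
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    import requests

    class LinkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            # Only anchor hrefs are collected - JS-generated requests are invisible here.
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(start_url, max_pages=50):
        seen, queue = set(), [start_url]
        origin = urlparse(start_url).netloc
        while queue and len(seen) < max_pages:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            try:
                resp = requests.get(url, timeout=10)
            except requests.RequestException:
                continue
            parser = LinkExtractor()
            parser.feed(resp.text)
            for href in parser.links:
                absolute = urljoin(url, href)
                if urlparse(absolute).netloc == origin:
                    queue.append(absolute)
        return seen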

Example issues:
CSRF Tokens Preventing Crawling: Cross-Site Request Forgery tokens need to be resent with every request. If the token is not valid, the application may invalidate the session. Tokens can be embedded in the HTML and not automatically reused by the scanner. This results in the scanner not crawling or testing the site adequately.
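
A hedged sketch of how a token-aware scanner can keep its session valid, assuming a hidden form field named "csrf_token" (the field name and form layout are illustrative assumptions):

    # Sketch of token-aware requests. A scanner that skips this step submits a
    # stale or missing token, gets its session invalidated, and stops crawling.
    import re
    from typing import Optional
    import requests

    def fetch_fresh_token(session: requests.Session, form_url: str) -> Optional[str]:
        # Re-request the form so the token matches the current session state.
        html = session.get(form_url, timeout=10).text
        match = re.search(r'name="csrf_token"\s+value="([^"]+)"', html)
        return match.group(1) if match else None

    def submit_with_token(session, form_url, action_url, data):
        token = fetch_fresh_token(session, form_url)
        if token is None:
            raise RuntimeError("No CSRF token found - form layout differs from assumption")
        data = dict(data, csrf_token=token)
        return session.post(action_url, data=data, timeout=10)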

DOM Security Vulnerabilities: Client-side security issues which do not generate HTTP requests may go undiscovered by tools that only test the application by sending and receiving HTTP requests. DOM (Document Object Model) vulnerabilities may go undiscovered because the tool does not process client-side scripts.
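
Without executing client-side script, about the best an HTTP-only tool can do is a static heuristic, for example flagging responses where a DOM source and a dangerous sink appear together. The sketch below is illustrative only; the source/sink lists are not exhaustive and the check cannot confirm an actual vulnerability.

    # Crude static heuristic (a sketch, not how any particular scanner works).
    # Without executing the JavaScript this is noisy in both directions.
    DOM_SOURCES = ["location.hash", "location.search", "document.referrer", "window.name"]
    DOM_SINKS = ["innerHTML", "document.write", "eval(", "setTimeout("]

    def flag_possible_dom_xss(response_body: str) -> bool:
        has_source = any(s in response_body for s in DOM_SOURCES)
        has_sink = any(s in response_body for s in DOM_SINKS)
        return has_source and has_sink

    # Example of the kind of client-side code an HTTP-only scanner never sees executing:
    # <script>document.getElementById("msg").innerHTML = location.hash.slice(1);</script>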

Dynamically Generated Requests: Contemporary applications may dynamically generate HTTP requests via JavaScript functions, and tools which crawl applications to establish site maps may not detect such dynamic links and requests.
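
One partial workaround is to mine served JavaScript for URL string literals and feed them back into the crawl queue. The sketch below is a rough illustration with assumed regex patterns; any URL assembled from variables at runtime will still be missed, which is the core problem.

    # Sketch: pull URL literals out of fetch(...) / XMLHttpRequest open(...) calls
    # in served script bodies and add them to the crawl queue.
    import re

    FETCH_RE = re.compile(r'fetch\(\s*["\']([^"\']+)["\']')
    XHR_RE = re.compile(r'\.open\(\s*["\'](?:GET|POST|PUT|DELETE)["\']\s*,\s*["\']([^"\']+)["\']', re.I)

    def extract_script_urls(script_text):
        urls = set(FETCH_RE.findall(script_text))
        urls.update(XHR_RE.findall(script_text))
        return urls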

Recursive Links - Limiting Repetitive Functionality: Applications with recursive links may result in thousands of unnecessary requests. An example could be a calendar control or a search-results function, where thousands of extra requests are sent to the application with little value yielded.
Example:
/Item/5/view, /Item/6/view, /Item/7/view, ...
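
One common way to limit this is to normalise URLs into a pattern and cap the number of requests per pattern. A small sketch follows; the limit of 3 per pattern is an arbitrary example.

    # Collapse numeric path segments into a placeholder and count requests per pattern.
    import re
    from collections import Counter

    seen_patterns = Counter()

    def should_request(url_path, per_pattern_limit=3):
        pattern = re.sub(r"/\d+(/|$)", r"/{id}\1", url_path)
        seen_patterns[pattern] += 1
        return seen_patterns[pattern] <= per_pattern_limit

    # /Item/5/view, /Item/6/view and /Item/7/view all normalise to /Item/{id}/view,
    # so only the first few are actually requested.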

Interpretation of results:
This challenge can result from either human error or automation. Tools can misinterpret results by claiming there is a security issue when there is not (a "False Positive"), or by not applying an appropriate request to detect a vulnerability (a "False Negative"). Humans can get it wrong too (as above).





