Tuesday, April 3, 2012

Http-Only is not secure [testing]

It's been a while since I posted. I've been bogged down with code reviews and training, but even when you deliver training you learn something new. This is particularly true when training developers keen to learn secure development. The conversations during the course tend to be more about building than breaking....

HTTP: one side of a many-sided coin
So on with today's rant......many penetration testers still feel that testing an application revolves around testing the HTTP requests and responses between the browser and server: crawl the application, flag interesting parameters, and fuzz using a scanner like OWASP ZAP or whatever......
.......We hope the scanner renders the page as a browser sees it. If it doesn't, how do we know the application's reaction is being detected?

Many scanners parse HTML pretty well, but when it comes to JavaScript/jQuery/client-side code execution, that's where they fall over.
One of the hardest things to do when automating scanning is to understand, parse and interpret responses. Sending in data/payloads/attack vectors is the easy part; understanding the response is more difficult. If the response can come from more than one source...now that's a challenge, and a feature of many modern applications.
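The gap is easy to demonstrate. Here is a minimal sketch (function and marker names are hypothetical, for illustration only) of the naive check many scanners rely on: inject a marker and look for it verbatim in the raw HTTP response. A flaw that only materialises when client-side code runs never reflects the marker in the body, so this check sees nothing.

```typescript
// Naive scanner heuristic: flag a parameter only if the injected marker
// appears verbatim in the raw HTTP response body.
// (Hypothetical sketch -- real scanners are more elaborate, but the
// blind spot is the same.)
function naiveReflectionCheck(responseBody: string, marker: string): boolean {
  return responseBody.includes(marker);
}

const marker = `zxq1'"<xss>`;

// Server-side reflection: the marker comes back in the HTML -> detected.
const reflectedResponse = `<p>You searched for: zxq1'"<xss></p>`;

// DOM-based flow: the sink is client-side JS reading location.hash, so the
// raw response never contains the marker -> the naive check misses it.
const domBasedResponse =
  `<script>div.innerHTML = location.hash.slice(1);</script>`;
```

Unless the scanner actually executes the page's JavaScript, the second case is invisible to it.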

Our HTTP request can hit the client or the server, or both, and be manipulated in many ways at either end. So to say one vector/parameter/payload can split into many paths is not far from the truth. Many paths mean many possible responses and contexts.

When I deliver training, a significant part of it relates to client-side encoding to prevent DOM XSS.
This type of attack can't be detected with HTTP analysis in the traditional sense. JavaScript parsing is required, and tools like DOMinator do this pretty well, but there is very little in the commercial field to tackle this type of assessment at scale.
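To make the client-side encoding point concrete, here is a minimal sketch of an HTML-entity encoder of the kind covered in training. It is illustrative only: real code should use a vetted encoding library and encode per output context (HTML body, attribute, JavaScript, URL), not just this one.

```typescript
// Minimal HTML-entity encoder for untrusted data destined for an HTML
// element context. A sketch, not a complete or context-aware encoder.
function encodeForHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#x27;");
}

// Vulnerable DOM sink (browser-only, shown as comments):
//   div.innerHTML = location.hash.slice(1);                    // DOM XSS
// Safer:
//   div.innerHTML = encodeForHtml(location.hash.slice(1));
// Or better still, avoid the HTML sink entirely:
//   div.textContent = location.hash.slice(1);
```

Note that none of this is visible in the HTTP traffic: the flaw and the fix both live in the JavaScript the browser executes.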

So to cap this point off...testing HTTP only is not enough; there is more to an app than HTTP requests and responses.

Another issue: testing for client-side issues such as XSS (cross-site scripting), XFS (cross-frame scripting) and clickjacking is very reliant on the browser of choice, the version used, and so on.
Firefox, IE and Chrome handle HTML attributes differently, and behaviour varies between versions, so payloads trigger on some browsers and not on others.

The browser protects us from lots of security issues, such as cross-domain framing attacks and inline JavaScript attacks; mechanisms like Content Security Policy and X-Frame-Options tell the browser not to accept or react to certain contexts.

The web browser is not only a window to the internet but is also fast becoming a shield, protecting users by fulfilling contracts with web application developers.

Suggestion to make life easier for developers:

By default, server HTTP responses should implement:

  • X-Frame-Options: SAMEORIGIN
  • Content-Security-Policy
  • HttpOnly (cookie flag)
  • Secure (cookie flag)
  • Strict-Transport-Security
  • Cache-Control: no-store, no-cache
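As a sketch of what "by default" could look like, here is a hypothetical middleware-style helper that applies the headers above to an outgoing response's header map. The helper name, cookie value and policy values are illustrative starting points, not one-size-fits-all settings.

```typescript
// Hypothetical helper: merge a sensible default set of security headers
// into a response's headers. Values are common starting points only --
// each application needs its own CSP, HSTS and caching policy.
function applyDefaultSecurityHeaders(
  headers: Record<string, string>
): Record<string, string> {
  return {
    ...headers,
    "X-Frame-Options": "SAMEORIGIN",
    "Content-Security-Policy": "default-src 'self'", // also blocks inline JS
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "Cache-Control": "no-store, no-cache",
    // HttpOnly and Secure are cookie attributes, so they ride on Set-Cookie
    // rather than being standalone headers (cookie value is a placeholder):
    "Set-Cookie": "session=abc123; HttpOnly; Secure; Path=/",
  };
}
```

If frameworks shipped something like this switched on by default, developers would have to opt out of protection rather than remember to opt in.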

In the future, if we all use secure browsers, should we let the browser take care of client-side security issues and not bother to code with such threats in mind?  :)

To cap this off......let's remove dynamic SQL, DES and <128-bit SSL from Java,
and inline JavaScript from all browsers (data and code getting mixed).....what would we fix if we did this???

.......Just a thought.