Monday, June 4, 2012

A stitch in time....

Our traditional approach to penetration testing, even large-scale global penetration testing, is to perform an annual or bi-annual pen test on our web applications.

The question is: who said once a year is enough?
Most applications undergo at least quarterly updates and changes, if not to provide value for customers then to keep the web applications fresh and to address any (hopefully) minor bugs.

Cyber attackers can continuously scan your site to detect changes (code drops) and probe those changes to assess whether any vulnerability has been introduced.

Why do we think it is acceptable to perform a time-limited test of an application to help ensure security when a determined attacker may spend 10-100 times longer attempting to find a suitable vulnerability?

The main reasons for a one-off test per year are simply economics:

  • Testing takes resources
  • Resources cost money
  • Resources are scarce
  • Push to deploy is stronger than push to secure
  • Organisations feel they may be left alone
  • A one-off annual test ticks the compliance box
Losses due to cyber-crime in 2011:
Russia – $4.5 billion
Euro zone – €750 billion
UK – £43.5 billion

There should be a more cost-effective and reasonable solution which could at least raise the bar for web application security.

I see a trend towards continuous monitoring. What does this mean?

Using systems that continually spider/crawl and assess a web application, compare it to the last sweep and detect changes in the target.
Changes detected are assessed for any potential introduction of new vulnerabilities and risk.
Changes detected can be correlated with changes to the site: Code-drops, malware injection, unauthorised deployments, scheduled maintenance.
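The "compare it to the last sweep" step can be sketched quite simply. Assuming the crawler hands us a map of URL to page body for each sweep (the class and method names below are illustrative, not from any real product), we fingerprint each page and diff the sweeps:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Sketch of sweep-to-sweep change detection for continuous monitoring.
public class SweepDiff {

    // Hash one page body so whole sweeps can be stored and compared cheaply.
    static String sha256(String body) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(body.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always present
        }
    }

    // Reduce a sweep (URL -> body) to a fingerprint (URL -> hash).
    static Map<String, String> hashSweep(Map<String, String> pages) {
        Map<String, String> hashes = new LinkedHashMap<>();
        pages.forEach((url, body) -> hashes.put(url, sha256(body)));
        return hashes;
    }

    // URLs that are new or whose content changed since the previous sweep;
    // these are the pages worth re-assessing for newly introduced flaws.
    static Set<String> changedPages(Map<String, String> previous,
                                    Map<String, String> current) {
        Set<String> changed = new TreeSet<>();
        current.forEach((url, hash) -> {
            if (!hash.equals(previous.get(url))) changed.add(url);
        });
        return changed;
    }
}
```

The changed set is what gets fed to the assessment engine and correlated against code drops and scheduled maintenance; everything unchanged can be skipped, which is what keeps the continuous scan cheap.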

Scanning alone won't work:
Continuous monitoring also requires manual validation of all discovered issues or "grey issues" which require further assessment. The benefit is the engine "learns" over the years how the site is structured and tailors its testing methodology and result parsing to that particular site.
A continuous monitoring engine may also be an "expert" system; it uses its feedback and manual tuning to learn how to assess more accurately as it assesses. As the engine accuracy increases the manual intervention in some cases will decrease.

There is one area which is a hard nut to crack: business logic security testing. Testing applications from a business logic and role-based authorisation standpoint can be difficult for a machine to figure out. The majority of such testing requires knowledge of the business flow and also the context of the data. Some systems, for example, may appear to have a CSRF (Cross-Site Request Forgery) vulnerability, but in reality the application does not perform straight-through processing; a person checks all submitted transaction requests.

Develop in a secure manner:
Another activity which can take some of the heat off is secure application development.
Many of the world's larger organisations find that developer awareness and training are among the most cost-effective methods of developing secure software.
Teach developers about secure APIs and about client- and server-side threats.

Get Serious...
Also, tell executives about the potential sanctions if they fail to meet data protection requirements!

EU directive:

Articles 23, 24 & 79 – Administrative sanctions
“The supervisory authority shall impose a fine up to 250 000 EUR, or in case of an enterprise up to 0.5 % of its annual worldwide turnover, to anyone who, intentionally or negligently does not protect personal data”

Threat modelling is also very powerful: decomposing a software design into components and looking at an application from an abuse, misuse and negative use-case standpoint; understanding trust boundaries and the areas which either hold or invoke/process significant data within the application.
This can be as high level or as low level as you want, but when use cases which could result in a security issue are agreed within the team, the technical team understands why a particular security control needs to be implemented.

Tuesday, April 3, 2012

Http-Only is not secure [testing]

It's been a while since I posted. I've been bogged down with code reviews and training, but even when you deliver training you learn something new. This is particularly true when training developers keen to learn secure development. The conversations during the course tend to be more about building than breaking....

HTTP - one side of a many sided coin
So on with today's rant......many penetration testers still feel testing an application revolves around testing the HTTP requests and responses between the browser and the server: crawl the application, flag interesting parameters and fuzz using a scanner like OWASP ZAP proxy or whatever......
.......We hope the scanner renders the page as a browser sees it. If it doesn't, how do we know the reaction of the application is being detected?

Many scanners parse HTML pretty well, but when it comes to JavaScript/jQuery/client-side code execution, that's where they fall over.
One of the hardest things to do when automating scanning is to understand, parse and interpret responses. Sending in data/payloads/attack vectors is the easy part; understanding the response is more difficult. If the response can come from more than one place, that's a challenge, and a feature of many modern applications.

Our HTTP request can hit the client or the server, or either one, and be manipulated in many ways on both. So to say one vector/parameter/payload can split into many paths is not far from the truth. Many paths mean many possible responses and contexts.

When I deliver training, a significant part of it relates to client-side encoding to prevent DOM XSS.
This type of attack can't be detected with HTTP analysis in the traditional sense. JavaScript parsing is required, and tools like DOMinator do this pretty well, but there is very little in the commercial field to tackle this type of assessment at scale.

So to cap this point off...testing HTTP only is bad; there is more to an app than HTTP requests and responses.

Another issue is that testing for client-side issues such as XSS (cross-site scripting), XFS (cross-frame scripting) and clickjacking is very reliant on the browser of choice, the version used, etc.
HTML attributes differ between Firefox, IE and Chrome, and between versions of each, and because of this payloads trigger on some browsers and not on others.

The browser protects us from lots of security issues, like cross-domain framing attacks and inline JavaScript attacks; things like Content-Security-Policy and X-FRAME-OPTIONS tell the browser not to accept or react to certain contexts.

The web browser is not only a window to the internet but is fast becoming a shield as well, protecting users by fulfilling contracts with web application developers.

Suggestion to make life easier for developers:

By default server HTTP headers should implement:

  • X-Frame-Options: SAMEORIGIN
  • Content-Security-Policy
  • HttpOnly (cookie flag)
  • Secure (cookie flag)
  • Strict-Transport-Security
  • Cache-Control: no-store, no-cache
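
The "secure by default" idea above can be sketched as a server-side filter that adds any of these headers the application has not already set. This is only an illustration (a real deployment would do this in a servlet filter or at the reverse proxy, and the Content-Security-Policy value in particular must be written per application, not copied from here):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: apply default security headers to a response unless the
// application has already chosen its own values. Header values are
// illustrative defaults, not a one-size-fits-all policy.
public class DefaultSecurityHeaders {

    private static final Map<String, String> DEFAULTS = new LinkedHashMap<>();
    static {
        DEFAULTS.put("X-Frame-Options", "SAMEORIGIN");
        DEFAULTS.put("Content-Security-Policy", "default-src 'self'");
        DEFAULTS.put("Strict-Transport-Security", "max-age=31536000");
        DEFAULTS.put("Cache-Control", "no-store, no-cache");
        // HttpOnly and Secure are cookie attributes: they belong on the
        // Set-Cookie header, not as standalone response headers.
    }

    // Returns a copy of the response headers with defaults filled in.
    public static Map<String, String> apply(Map<String, String> responseHeaders) {
        Map<String, String> out = new LinkedHashMap<>(responseHeaders);
        for (Map.Entry<String, String> e : DEFAULTS.entrySet()) {
            out.putIfAbsent(e.getKey(), e.getValue());
        }
        return out;
    }
}
```

The design choice worth noting is `putIfAbsent`: the platform supplies a safe default, but an application that consciously sets its own policy wins.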

In the future, if we all use secure browsers, should we let the browser take care of client-side security issues and not bother to code with such threats in mind?  :)

To cap this off......let's remove dynamic SQL, DES and <128-bit SSL from Java,
and inline JavaScript from all browsers (data and code getting mixed).....what would we fix if we did this???

.......Just a thought.


Monday, February 13, 2012

How Simple can it be.....XSS Prevention....

Cross-site scripting is still a very common web vulnerability.
Generally it is used to attack clients/users.

It can be used for malware upload, botnet hooking, keylogging, a payload delivery system for clickjacking and CSRF attacks and much much more, all for 6 easy payments of $9.99...sorry got carried away there :)

But it is easily preventable. You don't even have to know what XSS (type 0, type 1, type 2; DOM, stored, reflected) is to prevent it.

One pretty simple way to prevent XSS is to use the OWASP ESAPI (Enterprise Security API). A very easy tool to use/invoke.
It's also managed and attended to by Chris Schmidt....A great guy...

Regardless of what it does....if there was a mandate to use it on all redisplayed external input a site could become virtually XSS free!! (all for 6 easy payments of......).

It's easy to deploy....

1. Include in JSP (Java version)
2. Invoke in JSP
3. Job done!!!

We include it by

<%@ page import="org.owasp.esapi.ESAPI" %>

We invoke it by

<div><%=ESAPI.encoder().encodeForHTML(contentQuery)%></div>
Here we are escaping HTML element content.
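To see what that call buys you without having the ESAPI jar on the classpath, here is a minimal stand-in for the HTML element-context escaping that encodeForHTML performs. ESAPI's real encoder does more (canonicalization, a wider character policy), so treat this purely as an illustration and use the real library in production:

```java
// Minimal illustration of HTML element-context escaping: neutralise the
// five characters that are significant in HTML element content. This is a
// teaching stand-in, NOT a replacement for ESAPI's encoder.
public class HtmlEscape {
    public static String encodeForHTML(String in) {
        StringBuilder out = new StringBuilder(in.length());
        for (char c : in.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}
```

With this in place, a payload such as `<script>alert(1)</script>` is rendered as inert text by the browser instead of being executed.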

But here be Dragons........
Yes, there is an exception to server-side escaping of input....DOM XSS.
Input rendered on the client which does not touch the server is fair game for XSS'ers!!!
That is, data sent into the browser which is rendered via client-side code. No server interaction needed.

An example of this is URI fragments, or anchors. These are not required for DOM XSS, but they demonstrate that server-side reflection is not required (and that escaping on the server, for that matter, is ineffective).

Remember "Learn to code HTML for Food" (otherwise known as the W3C)...well, anchors or URI fragments do not get sent to the server.

Anchors (#) as opposed to query parameters (?) hang about the client (in small groups, causing trouble).
So:
#<script>alert()</script>  ← This fragment is dangerous. The server does not see it; we need some client-side validation!!
?q=<script>alert()</script>  ← This is escaped on the server. We need server-side (and client-side) validation, depending where the input emanates from.
So we need some additional client-side controls. This is particularly relevant for RIA applications.
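The point that the fragment never reaches the server can be demonstrated by looking at what actually goes into the HTTP request line. A sketch (the URLs and class name are made up for illustration; the fragment `%3Cscript%3E` is a URL-encoded `<script>`):

```java
import java.net.URI;

// Demonstrates why server-side escaping cannot see a fragment: a browser
// builds its request target from the path and query only, and the fragment
// stays on the client.
public class FragmentDemo {

    // Approximates the request target a browser would send for this URL:
    // path plus query, never the fragment.
    public static String requestTarget(String url) {
        URI u = URI.create(url);
        String target = u.getRawPath();
        if (u.getRawQuery() != null) target += "?" + u.getRawQuery();
        return target;
    }
}
```

Anything after the `#` exists only in the browser, so only client-side controls (encoding before writing into the DOM, client-side validation) can defend against it.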

Bring forth ESAPI4JS!!

Simply put, ESAPI4JS escapes client-side input from external sources.

Again very easy to use:

Import into your page:

<!-- esapi4js dependencies -->
<script type="text/javascript" language="JavaScript" src="/esapi4js/lib/log4js.js"></script>
<!-- esapi4js core -->
<script type="text/javascript" language="JavaScript" src="/esapi4js/esapi.js"></script>
<!-- esapi4js i18n resources -->
<script type="text/javascript" language="JavaScript" src="/esapi4js/resources/i18n/"></script>
<!-- esapi4js configuration -->
<script type="text/javascript" language="JavaScript" src="/esapi4js/resources/"></script>



Jim Manico and I shall be covering this and lots of other security jiggery-pokery @

Sec App Dev - Belgium March 7th
OWASP AppSec DC 2012

Thursday, February 9, 2012

ESI - Enterprise Security Intelligence

"Are we secure?...."
A major issue with enterprises is "are we secure?" (what does that even mean...). If you are asked by the CEO whilst sharing a lift to the 10th floor, what do you answer??? Eh..em, no...well, sort-of.....

A few important aspects in attempting to figure out "are we secure?" from a web security standpoint are:

(1) How do we make sure the security of our current public facing Internet web landscape is *pretty* robust (not 100% secure)? - Test, maintain, patch, measure, observe.....

(2) How do we make sure systems in design/development are not going to introduce new risk to your business? - Security: Design, Dev, Test, Review, Deploy, Maintain, Patch.

..........So how do we track ongoing assurance efforts, prioritization of technical issues, appoint appropriate risk, track remediation, identify root cause, technology adoption weakness, mixed with securing new deployments (1) & (2) above? - Excel Spreadsheets, Memory, Belief, Faith, Luck.....

"Risk comes from not knowing what you're doing." - Warren Buffett

Inputs into (1) & (2) above are generally in the form of technical reports and an appointed risk context defined by the consultant. Enterprises with many business units (BUs) may use different consultants, varying report styles and formats, and variance in risk, and in effect the overall organisation faces a challenge in pulling all this information together.

10 Business Units
30 Security Staff
200 Web Applications
1000 Web Servers
2000 Data bases
100,000 Client records
1,000,000 Potential hackers, worms, trojans (and infected users)

.......So all you've got to do is make sure we have no security vulnerabilities which may give rise to a data breach or damage the reputation of the organisation?

Convergence of information: 
We are getting towards consolidation of risk using GRC solutions, but not quickly enough in the web application space.
Many solutions are available in both the commercial and open source arenas. We have, for example, Archer on the commercial side; we have some small orgs with great potential, such as Onformonics; we have open source contributions such as the Denim Group's ThreadFix; and we have integrated solutions such as WhiteHat Sentinel (DAST), which provides a portal solution and integration via an open XML API.
Basically, "If you can not measure it, you can not improve it."

The idea behind ESI is the ability to track, be informed, measure, prioritize, visualize and appoint contextual risk to your enterprise technology stack and deployments as a whole. Regardless of all the problems we have with application security, we certainly can't get off this moving train of vulnerability; it continues to move on. All we can do is identify meaningful issues with our environment and attempt to fix and prevent. With ESI at least we can see the state of our landscape for what it is and try to improve it.

Friday, February 3, 2012

Website Insecurity: This grinds my gears....

This document reflects my personal opinion on the state of application security. It calls out what I see as the weaknesses of our approach as a community to addressing the issue of web [in]security. Web [in]security is a healthy and growing industry, and rather than verification of issues, we constantly find and are exposed to new threats without ever addressing the current ones en masse…….

A long, long time ago we used to “test security out” when it came to web applications. This meant performing a time-limited penetration test on a web application in the hope you could find all the existing vulnerabilities using the skills, resources and tools at your disposal….Oh, actually, we still do this..

There are weaknesses to this approach, and they are reflected in the current state of internet, application and cyber security. To be honest, the issue of web [in]security is getting worse. Despite growth in the security industry (solutions, vendors, consultants), and the fact that application security is more mainstream than it ever was, the way in which we address this problem has not changed very much.

"Insanity is doing the same thing over and over and expecting different results."
- Albert Einstein

Below are some of the issues the way I see it….

Time & Tools of the trade:
Time is limited to perform the test, the tools available are limited (who has all the tools available?):
We have got to remember "a fool with a tool is still a fool" but tools are important when conducting technical security assessments….Tools are flawed and can be out of date with current issues; the tester very rarely “tunes” their tools to the target application.

Tools do not test application logic, business logic or authorisation very well (if at all) as context is required which needs to be understood; this is far beyond the reach of any tool to date.

In the era of Rich Internet Applications (RIA), the approach traditionally used to perform web application testing is flawed. More and more functionality is moving back to the user’s browser (AJAX, HTML5, JS frameworks). Traditional tools focus on HTTP requests and manipulation, bypassing the client code completely and leaving whole portions of the web application untested. Very few tools perform JavaScript/binary/Flash parsing, and this layer of the stack is getting ignored. I’m sure this will change, but it needs to do more than “keep up” in order to improve matters.
The robustness, coverage and ultimately accuracy of the testing relies on the skill of the consultant that performs the assessment, the tests conducted and the tools used; this leads onto consistency problems…..

We can’t guarantee any two testers will find the same issues (human element) on the same application, or apply the same risk to the discovered issues. I’ve managed teams of over 100 testers globally on large engagements, and the variance in quality is huge. Scalability does not lend itself well to quality. Coupling the “human element” with variance in skill and experience leads to massive issues regarding a consistent approach to testing applications.

Comedy of Errors
If these weaknesses (an inconsistent approach and weak tools) are amplified by high volumes of testing, the issue gets much worse (a comedy of errors). I think we need to remember that application security is a sub-domain of an engineering discipline called computer science!! Looking at the current approach and the weaknesses associated with it, I don’t believe what is currently done can really be called science; it is more like “best endeavours”.
…So our approach is a little flawed to be polite.

Industry Growth != More Secure (well, actually less secure)
Another point to demonstrate the above flaw is the Penetration testing “industry”. It has grown, estimated to have more than doubled globally in the last 10 years. But the problems with internet and web security have only gotten worse.
….Throwing money at the issue is not making much of an impact.

Invisible (and expensive) Deliverable
Our deliverable is invisible. A secure application is not noticed; it works and is taken for granted. It's only when security does not work that it gets noticed. It is difficult to sell security when there does not seem to be any tangible output. It’s sort of like an insurance policy; you only notice it when you pay for it and when you actually need it. (You might be forced to have it via compliance also).
*Good* skilled penetration testers are expensive due to the fact they are limited in supply and it is hard to “learn” penetration testing, it really comes with experience. There is also a distinct lack of security folks who can write code.
…So the deliverers of the “invisible deliverable” can be hard to justify and can be expensive. 

It’s not working let’s approach this in a different manner:
So like any sane person we need to change strategy to do something that works….or we could keep doing what we are currently doing.
So if we were to replace the “test security out” activity with something else, what would that be? ”I know, let’s build security in”: secure application development, training, code review, static analysis, SDLC security etc….sounds like a great idea (nothing new there...).

Who would have thought that reviewing code (where the vulnerabilities exist) is a good idea and a logical place to prevent security issues!!

So the “build security in” industry has now grown in a rapid manner. We have non-profit organisations like OWASP and commercial enterprises alike, all trying to solve the same problems. When I started reviewing source code for security issues in 2002, security code review was akin to waving a dead chicken over the keyboard; it was not so mainstream.

Source code review and static analysis are not particularly new, but their adoption rate is still significantly lower than that of penetration testing. Penetration testing will be/is a commodity; everyone is doing it (not all to the same standard). Anyone can be a penetration tester!! Particularly if you hide behind a good brand!
Source code review is different. It requires a better understanding of the technology, the language and the associated frameworks, coupled with penetration testing knowledge and an understanding of risk. But who said this was easy??

A new Approach to solving an old problem
We understand that despite a growing industry, the problems in the wild are only getting worse. Time-limited approaches, driven by financial and market pressure, coupled with lack of awareness and the vastly varying skill and tool sets of the security consultant community, certainly do not help.
So let’s propose a novel solution (not really very new, but you would not think it):
• SDLC Security:
A repeatable, structured approach which reflects the organisation's method of development, the frameworks used and the technologies utilised. Structured and repeatable means less error prone, relying less on the skill of the individual (to some extent). It would cover off the following:

  • Secure Design: Designed to help ensure the software architecture is appropriate. The appropriate controls are in the correct places within both the client and the server sides of the application.
  • Developer Training: Get the development community involved in secure application development. Raising the bar of code delivered. Remove low hanging fruit.
  • Common Module/Framework design and implementation: Using core common components for various security functions such as canonicalization, input validation, encoding and error handling.
  • Code review: manual and static analysis tools. Manual review of the code using a risk-based approach, focusing on the application perimeter and tracing the dataflow inwards.
  • Integrated Functional / Security testing /Anti-Functional testing: Negative use cases. Testing aimed at running exception paths in the code.
Fixing discovered issues by virtue of a penetration test is also known as “bolting on security”. Point fixes of discovered issues which have not been thought through may break other parts of the application.
In the case of addressing a security issue whose root cause is a design flaw, this is even worse, as it is generally more expensive to fix and retrofit.

In general, fixing issues, as a result of a penetration test is more expensive and error prone. We should try to build and detect security issues as part of the development and test phases of the life cycle. With finite resources and finance it is best to prevent issues from occurring rather than detecting issues after they occur.

• High-volume, consistent, cost-effective, semi-automated penetration testing:
Using a tuned vulnerability scanner, we can understand the coverage, the areas of weakness of the scanner, and the vulnerabilities covered and not covered. It is automated, with a consistent approach and a proven (over time), maintained testing engine and rule-set. After all, we need to identify and fix vulnerabilities. The scanner is tuned over time in order to improve efficiency and accuracy.
  • Manual verification of discovered issues; all issues require verification for exploitability and risk rating.
  • Manual business logic and authorisation testing (to some extent); Business logic testing will require manual testing but this can also be integrated into System testing
  • Consistent risk analysis: Assessing the risk of an issue with sufficient business context of the application.
I suppose you may notice that manual penetration testing is not in the list above. That is because it has not been proven to work. The manual effort is used in the SDLC and in the verification of the runtime scanning.
A Point in time
So once a web application undergoes testing that is in effect a point-in-time test.
Once the application undergoes maintenance, functional change, or even cosmetic change in the case of some RIA applications, vulnerabilities may be re-introduced.

"App Radar":
What is required is a frequent and, if possible, low-bandwidth continuous scan. Such a solution would provide an “App Radar” effect by detecting changes in the application as they happen. Such changes can be used to compare deltas between various points in time, so as to monitor the organic growth and change of the application over time.

 Root Cause Analysis:
Changes in application behaviour from a security perspective can be traced internally to the source code change control process and hence assisting with root cause discovery and definition.

Near-Immediate testing:
Another benefit of continuous scanning is that as new rules and tests are discovered, they can be deployed and used to test the application as part of the continuous exercise.

All of this feeds into Enterprise Security Intelligence (ESI)
More about this next time................