Monday, February 13, 2012

How Simple can it be.....XSS Prevention....

Cross Site Scripting is still a very common web vulnerability.
Generally it is used to attack clients/users.

It can be used for malware delivery, botnet hooking, keylogging, as a payload delivery system for clickjacking and CSRF attacks, and much, much more, all for 6 easy payments of $9.99...sorry, got carried away there :)

But it is easily preventable. You don't even have to know what XSS is (Type 0, Type 1, Type 2, DOM, Stored, Reflected) to prevent it.

One pretty simple way to prevent XSS is to use the OWASP ESAPI (Enterprise Security API). A very easy tool to use/invoke.
It's also managed and attended to by Chris Schmidt....A great guy...

Regardless of what it does....if there were a mandate to use it on all redisplayed external input, a site could become virtually XSS-free!! (all for 6 easy payments of......).

It's easy to deploy....

1. Include in JSP (Java version)
2. Invoke in JSP
3. Job done!!!

We include it by

<%@ page import="org.owasp.esapi.ESAPI" %>

We invoke it by

<div><%=ESAPI.encoder().encodeForHTML(contentQuery)%></div>
.....here we are escaping HTML element content: if contentQuery contains <script>alert(1)</script>, the encoder emits harmless entities (&lt;script&gt;...) rather than live markup, so the browser renders it as inert text.

But here be Dragons........
Yes, there is an exception to server-side escaping of input....DOM XSS.
Input rendered on the client which never touches the server is fair game for XSS'ers!!!
That is, data sent into the browser and rendered via client-side code; no server interaction needed.

An example of this is URI fragments, or anchors. Fragments are not required for DOM XSS, but they demonstrate that server-side reflection is not required (and that server-side escaping, for that matter, is ineffective).

Remember "Learn to code HTML for Food" (otherwise know as the W3C)...well anchors or URI fragmants do not get sent to the server.

Anchors (#) as opposed to query parameters (?) hang about the client (in small groups, causing trouble).
So, as far as server-side validation is concerned:

http://127.0.0.1:8080/vuln/XSS.jsp#q=<script>alert()</script>  ← This fragment is dangerous, and the server never sees it. We need some client-side validation!!
http://127.0.0.1:8080/vuln/XSS.jsp?q=<script>alert()</script>  ← This is escaped on the server. We need server-side (and client-side) validation, depending on where the input emanates from.
So we need some additional client-side controls. This is particularly relevant for RIA applications.
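
To make the fragment case concrete, here is a minimal vulnerable sketch (a hypothetical page of my own; the "#q=" fragment format and the variable names are purely illustrative) in which client-side code writes the fragment straight into the document:

<script type="text/javascript">
    // location.hash never leaves the browser, e.g. the "#q=<script>..." fragment above
    var q = decodeURIComponent(location.hash.substring("#q=".length));
    // DANGEROUS: attacker-controlled markup is written straight into the page
    document.write("You searched for: " + q);
</script>

The fragment never appears in the request, so the server logs nothing useful and server-side escaping never gets a chance; the payload lives and dies in the browser.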

Bring forth ESAPI4JS!!

Simply put, ESAPI4JS escapes client-side input from external sources.

Again very easy to use:

Import into your page:


<!-- esapi4js dependencies -->
<script type="text/javascript" language="JavaScript" src="/esapi4js/lib/log4js.js"></script>
<!-- esapi4js core -->
<script type="text/javascript" language="JavaScript" src="/esapi4js/esapi.js"></script>
<!-- esapi4js i18n resources -->
<script type="text/javascript" language="JavaScript" src="/esapi4js/resources/i18n/ESAPI_Standard_en_US.properties.js"></script>
<!-- esapi4js configuration -->
<script type="text/javascript" language="JavaScript" src="/esapi4js/resources/Base.esapi.properties.js"></script>

 Use:
org.owasp.esapi.ESAPI.initialize();

document.write($ESAPI.encoder().encodeForHTML(queryparam));
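
Putting it all together, here is a hedged sketch of a safe version of the vulnerable fragment example from earlier (it assumes the esapi4js includes and initialisation shown above; again, "#q=" and the variable names are my own):

<script type="text/javascript">
    // Initialise esapi4js once its scripts (above) have loaded
    org.owasp.esapi.ESAPI.initialize();
    // The same attacker-controlled fragment as before...
    var q = decodeURIComponent(location.hash.substring("#q=".length));
    // ...but this time HTML-encoded on the client before it touches the DOM
    document.write("You searched for: " + $ESAPI.encoder().encodeForHTML(q));
</script>

Now the injected markup is rendered as inert text, just as the server-side encoder handled the query-string variant.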

Jim Manico and I will be covering this and lots of other security jiggery-pokery @

SecAppDev - Belgium, March 7th
OWASP AppSec DC 2012

Thursday, February 9, 2012

ESI - Enterprise Security Intelligence

"Are we secure?...."
A major issue with enterprises is "are we secure?" (what does that even mean...). If you are asked by the CEO whilst sharing a lift to the 10th floor, what do you answer??? eh..em, yes..er, no...well, sort-of.....

A few important aspects in attempting to figure out "Are we secure?" from a web security standpoint are:

(1) How do we make sure the security of our current public-facing Internet web landscape is *pretty* robust (not 100% secure)? - Test, maintain, patch, measure, observe.....

(2) How do we make sure systems in design/development are not going to introduce new risk to the business? - Security: Design, Dev, Test, Review, Deploy, Maintain, Patch.

..........So how do we track ongoing assurance efforts, prioritise technical issues, assign appropriate risk, track remediation, identify root causes and technology adoption weaknesses, all mixed with securing the new deployments of (1) & (2) above? - Excel spreadsheets, memory, belief, faith, luck.....


"Risk comes from not knowing what you're doing." - Warren Buffet


Inputs into (1) & (2) above are generally in the form of technical reports and an appointed risk context defined by the consultant. Enterprises with many business units (BU's) may use different consultants, varying reports styles and format, variance in risk and in effect the overall organisation faces a challenge in pulling all this information together

1 CISO
10 Business Units
30 Security Staff
200 Web Applications
1000 Web Servers
2000 Databases
100,000 Client records
1,000,000 Potential hackers, worms, trojans (and infected users)


......."So all you've got to do is make sure we have no security vulnerabilities which may give rise to a data breach or damage the reputation of the organisation...got it?"



Convergence of information: 
We are moving towards consolidation of risk using GRC solutions, but not quickly enough in the web application space.
Many solutions are available in both the commercial and open-source arenas. On the commercial side we have, for example, Archer; we have small orgs with great potential, such as Onformonics; we have open-source contributions such as the Denim Group's ThreadFix; and we have integrated solutions such as WhiteHat Sentinel (DAST), which provides a portal and integration via an open XML API.
Basically, "If you can not measure it, you can not improve it."


The idea behind ESI is the ability to track, be informed about, measure, prioritise, visualise and assign contextual risk to your enterprise technology stack and deployments as a whole. Regardless of all the problems we have with application security, we certainly can't get off this moving train of vulnerability; it continues to move on. All we can do is identify meaningful issues in our environment and attempt to fix and prevent them. With ESI, at least we can see the state of our landscape for what it is and try to improve it.

Friday, February 3, 2012

Website Insecurity: This grinds my gears....



This document reflects my personal opinion on the state of application security. It calls out what I see as the weaknesses of our approach, as a community, to addressing the issue of web [in]security. Web [in]security is a healthy and growing industry; rather than verifying and fixing known issues, we constantly find and are exposed to new threats without ever addressing the current ones en masse…….

 
A long, long time ago we used to “test security out” when it came to web applications. This meant performing a time-limited penetration test on a web application in the hope you could find all the existing vulnerabilities using the skills, resources and tools at your disposal….Oh, actually, we still do this..

There are weaknesses to this approach, and they are reflected in the current state of internet, application and cyber security. To be honest, the issue of web [in]security is getting worse. Despite growth in the security industry (solutions, vendors, consultants), and the fact that application security is more mainstream than it ever was, the way in which we address this problem has not changed very much.

"Insanity is doing the same thing over and over and expecting different
results."
- Albert Einstein

Below are some of the issues as I see them….

Limitations
Time & Tools of the trade:
The time to perform the test is limited, and the tools available are limited (who has all the tools available?):
We have got to remember that "a fool with a tool is still a fool", but tools are important when conducting technical security assessments….Tools are flawed and can be out of date with current issues; the tester very rarely “tunes” their tools to the target application.

Tools do not test application logic, business logic or authorisation very well (if at all) as context is required which needs to be understood; this is far beyond the reach of any tool to date.

In the era of Rich Internet Applications (RIA), the approach traditionally used to perform web application testing is flawed. More and more functionality is moving back to the user’s browser (AJAX, HTML5, JS frameworks). Traditional tools focus on HTTP requests and manipulation, bypassing the client code completely and leaving whole portions of the web application untested. Very few tools perform JavaScript/binary/Flash parsing, and this layer of the stack is getting ignored. I’m sure this will change, but it needs to do more than “keep up” in order to improve matters.
The robustness, coverage and ultimately accuracy of the testing rely on the skill of the consultant who performs the assessment, the tests conducted and the tools used; this leads on to consistency problems…..

Consistency:
We can’t guarantee any two testers will find the same issues on the same application (the human element), or apply the same risk to the discovered issues. I’ve managed teams of over 100 testers globally on large engagements, and the variance in quality is huge. Scalability does not lend itself well to quality. Coupling the “human element” with variance in skill and experience leads to massive issues regarding a consistent approach to testing applications.

Comedy of Errors
If these weaknesses above, an inconsistent approach and weak tools, are amplified by high volumes of testing, the issue gets much worse (a comedy of errors). I think we need to remember that application security is a subdomain of an engineering discipline called computer science!! Looking at the current approach and its associated weaknesses, I don’t believe what is currently done can really be called science; it is more like “best endeavours”.
…So our approach is a little flawed to be polite.
 

Industry Growth != More Secure (well, actually less secure)
Another point demonstrating the above flaw is the penetration testing “industry”. It has grown, estimated to have more than doubled globally in the last 10 years, but the problems with internet and web security have only gotten worse.
….Throwing money at the issue is not making much of an impact.

Invisible (and expensive) Deliverable
Our deliverable is invisible. A secure application is not noticed; it works and is taken for granted. It’s only when security does not work that it gets noticed. It is difficult to sell security when there does not seem to be any tangible output. It’s sort of like an insurance policy; you only notice it when you pay for it and when you actually need it. (You might also be forced to have it via compliance.)
*Good*, skilled penetration testers are expensive due to the fact that they are limited in supply, and it is hard to “learn” penetration testing; it really comes with experience. There is also a distinct lack of security folks who can write code.
…So the deliverers of the “invisible deliverable” can be hard to justify and can be expensive. 

It’s not working let’s approach this in a different manner:
So, like any sane person, we need to change strategy and do something that works….or we could keep doing what we are currently doing.
So if we were to replace the “test security out” activity with something else, what would that be? ”I know, let’s build security in”: secure application development, training, code review, static analysis, SDLC security etc…sounds like a great idea….wow, a great idea (nothing new there...).

Who would have thought that reviewing code (where the vulnerabilities exist) is a good idea and a logical place to prevent security issues!!

So the “build security in” industry has now grown in a rapid manner. We have non-profit organisations like OWASP and commercial enterprises alike, all trying to solve the same problems. When I started reviewing source code for security issues in 2002, security code review was akin to waving a dead chicken over the keyboard; it was not so mainstream.

Source code review and static analysis are not particularly new, but their adoption rate is still significantly lower than that of penetration testing. Penetration testing is/will be a commodity; everyone is doing it (not all to the same standard). Anyone can be a penetration tester!! Particularly if you hide behind a good brand!
Source code review is different. It requires a better understanding of the technology, the language and associated frameworks, coupled with penetration testing knowledge and an understanding of risk. But who said this was easy??

A new Approach to solving an old problem
We understand that, despite a growing industry, the problems in the wild are only getting worse. Time-limited approaches driven by financial and market pressure, coupled with a lack of awareness and a vastly varying skill and tool set across the security consultant community, certainly do not help.
So let’s propose a novel solution (not really very new, but you would not think it):
• SDLC Security:
A repeatable, structured approach which reflects the organisation's method of development, the frameworks used and the technologies utilised. Structured and repeatable means less error-prone, relying less on the skill of the individual (to some extent). It would cover the following:

  • Secure Design: Designed to help ensure the software architecture is appropriate. The appropriate controls are in the correct places within both the client and the server sides of the application.
  • Developer Training: Get the development community involved in secure application development. Raising the bar of code delivered. Remove low hanging fruit.
  • Common Module/Framework design and implementation: Using core common components for various security functions such as canonicalization, input validation, encoding and error handling.
  • Code review: Manual and static analysis tools: Manual review of the code using a risk-based approach, focusing on the application perimeter and tracing the dataflow inwards.
  • Integrated functional / security / anti-functional testing: Negative use cases. Testing aimed at exercising exception paths in the code.
Fixing discovered issues by virtue of a penetration test is also known as “bolting on security”: point fixes of discovered issues which may not have been thought through and may break other things.
In the case of addressing a security issue whose root cause is a design flaw, this is even worse, as it is generally more expensive to fix and retrofit.

In general, fixing issues as a result of a penetration test is more expensive and error-prone. We should try to build security in, and detect security issues, as part of the development and test phases of the life cycle. With finite resources and finance, it is best to prevent issues from occurring rather than detecting them after they occur.

• High-volume, consistent, cost-effective, semi-automated penetration testing:
Using a tuned vulnerability scanner, we can understand its coverage, its areas of weakness, and which vulnerabilities are and are not covered. It is automated: a consistent approach with a proven (over time) and maintained testing engine and rule-set. After all, we need to identify and fix vulnerabilities. The scanner is tuned over time in order to improve efficiency and accuracy.
  • Manual verification of discovered issues; all issues require verification for exploitability and risk rating.
  • Manual business logic and authorisation testing (to some extent); business logic testing will require manual effort, but this can also be integrated into system testing.
  • Consistent risk analysis: assessing the risk of an issue with sufficient business context for the application.
I suppose you may notice that manual penetration testing is not in the list above. That is because it has not been proven to work. The manual effort is used in the SDLC and in the verification of the runtime scanning.
A Point in time
So once a web application undergoes testing, that is in effect a point-in-time test.
Once the application undergoes maintenance, functional change, or (in the case of some RIA applications) even cosmetic change, vulnerabilities may be re-introduced.

"App Radar":
What is required is a frequent and, if possible, low-bandwidth continuous scan. Such a solution would provide an “App Radar” effect by detecting changes in the application as they happen. Such changes can be used to compare deltas between various points in time, so as to monitor the organic growth and change of the application over time.

 Root Cause Analysis:
Changes in application behaviour from a security perspective can be traced internally to the source code change control process, hence assisting with root cause discovery and definition.

Near-Immediate testing:
Another benefit of continuous scanning is that as new rules and tests are discovered, they can be deployed and used to test the application as part of the continuous exercise.

All of this feeds into Enterprise Security Intelligence (ESI)
More about this next time................