Saturday, December 29, 2007

Logout via Javascript with OnBeforeUnload

One surefire way to protect users from CSRF attacks is to minimize the window of time that a user is logged in. Current CSRF mitigation strategies focus on adding a token to each form and link, in addition to timing out the session after the user has been inactive for a relatively short window of time.

However, any third-party site that can exploit a weakness in the Same-Origin Policy can break through these defenses (such as the iframe SOP hack we saw in the past). In addition, the web world is moving to technologies that allow cross-site requests on purpose - through Flash, JavaScript and other technologies - for mash-up capability.

Not all users are kind enough to explicitly press the logout link or button when they are done using your site. There are three situations that we can trap via JavaScript to force the user to log out without requiring additional action on the part of the user.

1) The user simply types in or browses to a new url in a single-tabbed environment without explicitly logging out.
2) The user closes the tab or window without choosing to press the logout link or button.
3) The user switches to a new tab while staying logged in on the previous tab.

The following code sample will allow a programmer to trap events 1) and 2) reliably in IE 6/7 and Firefox 2. It's trivial to fire off the logout event, especially if your logout server code will allow a GET request.
<body onbeforeunload="dothis();">

function dothis() {
    alert('logmeout ajax event');
}

The third situation, when a user changes browser tabs, is much more difficult to trap since it does not fire onbeforeunload or a similar event. It may also harm the user experience: changing tabs may not be a situation where the user actually wants to log out. Nonetheless, to accomplish this task, you will need to work with the window's onblur event. However, this event is very chatty; just changing a tab will fire the onblur event five times in Firefox 2.0. You can play with code such as:

var logout = false;
function dothis() {
    if (logout == false) {
        alert('logmeout ajax event');
        logout = true;
    }
}

But Firefox 2 will still fire the alert twice. You will need to test and expand upon this code for each unique browser.
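One way to tame these chatty events is to wrap the logout call in a guard so it fires at most once. This is a minimal sketch, assuming a hypothetical sendLogout callback that performs the actual request (for example, a simple GET beacon to your logout URL):

```javascript
// Guarded logout trigger: ensures the logout request fires at most once,
// no matter how many times the browser raises onbeforeunload or onblur.
// `sendLogout` is a hypothetical callback that performs the actual
// GET/AJAX call to your logout endpoint.
function makeLogoutHandler(sendLogout) {
  let loggedOut = false;
  return function dothis() {
    if (!loggedOut) {
      loggedOut = true; // set the flag first so re-entrant events are ignored
      sendLogout();
    }
  };
}

// In the page, you would wire it up roughly like this:
// window.onbeforeunload = makeLogoutHandler(function () {
//   new Image().src = '/logout'; // simple GET beacon, assuming GET logout is allowed
// });
```

The flag lives in the closure rather than a global, so the guard cannot be reset by other code on the page.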

Logging out via JavaScript is by no means a complete CSRF mitigation, but is an excellent defense-in-depth measure to add to your current mitigation strategy.

Monday, December 24, 2007

12 Steps To Application Security

There are several holdouts in the industry who wish to trump the term "Application Security" with the term "Software Security." My Christmas wish is that we standardize on the term "Application Security" - because I think it's a more realistic term to describe the state of the industry that helps organizations design, develop, deploy, assess, maintain, retire and build procedures around Applications in a way that protects them from external and internal threats.

1) We must admit that we have a problem and that the security posture of our enterprise Applications is becoming unmanageable.

Let me start by saying that most code is insecure. This is not necessarily getting better, but seems to be getting worse.

Those of us who are professional programmers were never taught to write secure code in school. Even Michael Howard, who is hiring the best and brightest out of the world's top universities, clearly says that these star graduates have no idea how to write a secure application.

2) We must believe that a power greater than ourselves (Application Security Service Vendors) can help us restore sanity.

It would be ideal to re-write all of the world's applications in an environment that embraces best-of-breed Application Security methodologies. But the truth is that most CIOs have several thousand applications under their responsibility that are largely insecure. It's already built. It's already in production. We low-level programmers are tasked with writing more code, faster and faster, to keep the business moving, since it depends on our work more every day. We simply do not have the luxury of time or budget to rewrite all of those applications. So we are stuck with having to secure applications after the fact. This is a reality check for those in the industry who conjecture that "Software Security" is a better term because "Application Security" implies protection of software after it is built.

3) We must make a decision to turn our will and our lives over to Application Security excellence.

I also feel that the term "Software Security" is a dangerous position that both polarizes the industry and blames the coder. Software Security implies Lines Of Code. Although, at the end of the day, individual lines of code need to be written using best practices (input validation, output encoding, proper access control, etc.), that is only a small part of the entire picture. Individual coders cannot solve the problem alone.

4) We must make a fearless inventory of the security posture of our current applications.

We cannot just run Fortify, SPI, Cenzic and Watchfire and be secure. We cannot prove that an application is secure by any predicate mathematical proof. So what do we do? We (at times) slap up a WAF to stop the bleeding. We bring in pen testers, conduct code reviews and run tools for the most critical apps.

5) We must admit to a higher power (our CIO), to ourselves and to other coders the exact nature of our wrongs.

We wage political warfare in our organizations to ensure that the "C" level, the project managers, the infrastructure teams, the architects and the low-level programmers are all on the same page about Application Security. Not to mention incident response. Legal issues. Risk analysis - which really has nothing to do with software - but measures a financial impact on a business.

6) We must be entirely ready to be re-trained to remove all these defects in how we develop applications.

We re-train software engineers as quickly as possible. We start growing a dedicated internal AppSec team to conduct these reviews in-house in a more cost effective way.

7) We must humbly ask our Vendor to help us remove our shortcomings.

There are so many activities around securing an application that do not involve lines of code - and do NOT involve software - that it seems myopic to me to use the term "Software Security".

8) We must make a list of all applications that are insecure, and become willing to make amends to them all.

No tool will answer the question of the state of our Application Security posture. It takes a village - and often several villages - to even achieve a measurement of our current posture! Most CIOs have "no clue" where they are today in terms of Application Security excellence.

9) We make direct amends to our insecure Applications wherever possible by fixing the underlying code, except when it would harm the organization by spending too much to do so.

It is not cost effective to spend $100 to re-code an application that protects $10 worth of data. We need outside help to do proper risk analysis - and that measurement needs to be a combination of not just engineering but also non-technical business expertise, which has little to do with Software.

10) We continue to take inventory of the security posture of our applications and when we are wrong we promptly admit and fix it.

Depending on a vendor alone will not set you free. The best-of-breed vendors encourage building AppSec teams internally - the best Vendors help accelerate your organization toward Application Security Independence. Continuing education is a great deal cheaper than re-education. Internal pen-test expertise is a great deal cheaper than bringing in a service vendor. Using the right tools effectively is a great deal more cost effective than the shotgun approach of using whatever tool was sold to your CIO. The right Vendor will help you get there fast without disrupting the organization.

11) Through continued education and studying of industry best practices, we try to embrace that philosophy in all of our day to day engineering activities.

Once we have the knowledge, we must start building all applications with security in mind and in practice from the first few days of each application's conceptual birth.

12) Having had an awakening as a result of these steps, we carry this message to other engineers, and practice these principles in all our affairs as we build new applications.

Software implies the programs that run a computer.

Application implies a solution to a problem - in the enterprise we are talking about delivering data securely.

And I think those of us who use the term "Application Security" do so because it is not the software that we are trying to fix - it's the solution to a business need that we are trying to make more robust.

Thursday, December 20, 2007

Hash Migration Strategies

I've had several engineers ask me recently about how to migrate a very large number of users from an old non-salted MD5 hash to SHA-512.

I can think of 2 main strategies:

1) Rolling migration: Weaker security, stronger user experience.
a) Add a new database column to your USER table that will hold the 512 bits (128 hex characters) necessary for a SHA-512 hash.
b) Every time a user logs in, first check to see if the SHA-512 column is empty.
c) If empty, verify the password against the old MD5 hash. If that login is successful, rehash the password to SHA-512 and delete the MD5 column.
d) If the SHA-512 column is not empty, verify the password via SHA-512 (preferably with per-user salts and multiple iterations of the hash).

2) Mass migration: Stronger Security, weaker user experience.
a) Email users (in blocks of 10,000) that their password will be expiring soon.
b) At login time, do the same as a rolling migration except also force the user to change their password upon successful login.
c) If a user does not change their password within a limited amount of time, lock their account and force a customer service interaction in order to re-open the account - giving that user 1 hour to change their password or be locked out again.

Saturday, December 8, 2007

Input Validation Rant

When should we do input validation in J2EE applications?

I can think of 3 scenarios all with their own trade-offs.

1) "Let's just skip validation inside the application, and apply a few J2EE filters before we deploy. "

This is the path I've been forced down in the past. I'm not a fan. It's not fair to be in a situation where the coder has the responsibility, but not so much the power. J2EE filters, while still being Java code, are external to the core app. I think of J2EE filters as part of the configuration layer, not integrated deep into the app itself.

Now, there are occasions where adding a filter (such as Eric Sheridan's CSRFGuard) is completely external to the app. The programmer never even needs to think about this kind of vulnerability if CSRFGuard is deployed. However, validation of a form element to ensure that it's a proper email address really seems like programmer responsibility to me. But adding a configuration filter like CSRFGuard to modify all forms by adding form keys really does not seem like programmer responsibility to me, but the platform's responsibility. When are we going to see work like CSRFGuard and the OWASP ESAPI project integrated deeper into J2EE, Sun?!

2) "Let's just start using Struts XML ActionForm configuration, have programmers completely skip doing any kind of validation, and have a AppSec regex professional work with our architect to set up configuration."

This has significant benefits, and I'm a fan of this methodology for big teams. But do not be lulled into a false sense of security just because you might have your input validation dialed in: strong input validation does not protect you from security design flaws and a host of other attack vectors. Still, Struts input validation configuration at the XML level can be very powerful if done completely across the entire app (each and every form element). But you had better have some serious regex experience in-house, and have a regex expert who is very much willing to take the time to learn the application as deeply as the folks who wrote it.

3) Let's do white list validation inside our controllers' dispatchers the moment we get data from the request.

This is my favorite, because I'm a manicoder.

With the exception of Dinis Cruz, everyone in the industry is blaming the coders. (Thank you, Dinis.) Yes, we are often the scapegoat (baaaaaaaaah!), being asked to write code faster, cram more functionality in, and get it done before some arbitrary date passes. And we have wonderful people like Alan Paller "expressing frustration with the fact that everything on the [SANS Institute Top 20 Internet Security] vulnerability list is a result of poor coding, testing and sloppy software engineering."

Thanks, Alan; but when are executives like you going to really invest the time, energy, money, training, QA resources and longer development cycles to truly allow us manicoders to engineer secure applications? Blaming the coder is an easy way out; Application Security policy, money and time need to come from the top down. And this is a very tough sell when all you get out of it is insurance and assurance that is still very difficult to mathematically prove correct. If you have programmers in your org who are writing insecure code, I conjecture that we need to look at the "C-level" and see how much they truly care about this topic, and take note of whether they are willing to commit to the cost and time necessary to win the battle of secure code.

We can't just blame the likes of Alan; even Gartner is telling the "C-level" that "developers need to take more responsibility", thereby taking responsibility off the hands of the C's. Again, so unfair, when even Michael Howard at Microsoft, with an almost unlimited hiring budget, says that even the best and brightest minds coming out of college have "no idea" how to write secure applications.

Let's kick it up another notch.

Right now, coders with security awareness are the "high priests" of software engineering groups. It does not have to be this way, but that is the truth in most organizations. AppSec knowledge is not integrated well into most organizations yet. And sadly, those coders who do have solid AppSec awareness and ability need to apply best-practice security guidelines **IN OPPOSITION TO UPPER MANAGEMENT'S DESIRE TO DEPLOY CODE FAST**.

If you really want to put the responsibility for AppSec into the hands of me, the coder, then we cannot depend on external configuration to lock down our apps. If you really want me to add IDS-type logging deep within the bowels of my code, then you need to empower me with the training, tools and time to do so. This AppSec squeeze-play from the C-level needs to end.

Ok, back to input validation. I want control over my application at the absolute earliest possible moment that user input enters my code. I want to make sure strong whitelist validation is applied at the earliest point of entry into my code. I want to empower an auditor to easily dig through my code, look for every situation where we do request.getParameter and the like, and see whitelist validation applied right there and then, without having to dig through 10 other files or some elaborate platform technology to ensure proper validation is being done.
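As a sketch of what whitelist-at-the-point-of-entry looks like, independent of any framework: the field names and regex patterns below are hypothetical examples, not a complete rule set, and a real app would maintain one rule per form element.

```javascript
// Whitelist ("positive") validation applied the moment user input enters
// the code. Field names and patterns here are illustrative assumptions.
const WHITELIST = {
  email:    /^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$/,
  username: /^[A-Za-z0-9_]{3,20}$/,
  zip:      /^\d{5}(-\d{4})?$/
};

// Returns true only if the value matches the known-good pattern;
// refuses to guess when no rule exists for the parameter.
function validateParam(name, value) {
  const pattern = WHITELIST[name];
  if (!pattern) throw new Error('No whitelist rule for parameter: ' + name);
  return pattern.test(value);
}
```

An auditor scanning the dispatcher can then look for validateParam right next to each parameter read, exactly the "right there and then" visibility argued for above.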

Thanks kindly for reading this far. For more information, contact Aspect Security for all of your appSec training, assurance and acceleration needs! :)

Monday, October 29, 2007

Deeper CSRF Protection

It's almost impossible to truly protect against stored CSRF found on a secondary/malicious website, not to mention the browser Trojans that we see in the banking industry on a regular basis. It's nontrivial to protect against these problems, but here is one potential solution to deeply secure this frightful attack vector:

a) Implement form keys defense on all forms where both the key name and value is a strong random session id. (Current standard defense)
b) At time of login, inject an additional per-link session ID into all URLs of that page. No client can even request a copy of a form without the correct URL-level session ID.
c) If some code tries to request a page/form with the wrong session ID, explain the attack to the user and log them out of their session immediately.
d) Any time a new page is returned to the user, create new per-link session IDs for all additional links on the page.

This defense strategy would still work in a multi-tabbed environment. The key differentiator is that a potential CSRF attack would be detected and the user's session dropped immediately, since there has been an obvious compromise (or poor surfing habits).

Thursday, October 18, 2007

I want Cake, but please make it Light (as in Lighttpd)

What a great post from Brendon Crawford (working on the woefully insecure PHP language) showing how to get CakePHP 1.1x running on Lighttpd. The woosies at CakePHP rejected his excellent patches for "IP reasons", so sad.

Anyhow, here we go!

Saturday, October 13, 2007

Java Snob Laughter

Yes, I'm a serious Java snob who has spent way too much time working with PHP. I've tried hard to artfully describe my disdain for PHP, and I would like to thank the people at for helping describe my feelings in my favorite art medium: inspirational posters! :)

Thursday, October 4, 2007

Reflective XSS protection, output encoding

UPDATE: The best XSS defense strategy is described here:


Thanks to Eric Sheridan over at OWASP for fielding our "battle of the output encoding method for reflective XSS Protection" competition today! All commentary below is from Eric via email on 10/4/07.

>>1) Output encoding try 1 (Jim)


Although it is not frequently mentioned, URL encoding will prevent reflected XSS attacks. The browser will not interpret URL encoded values. It looks as though this approach is sufficient for this particular instance. However, I'd recommend you use HTML entity encoding instead. Aside from addressing XSS, entity encoding will fix that 'ugliness' problem that you mentioned.

>>2) Output encoding try 2 (Brendon)

badChars = [ "<", ">", "#", "&", "'", "\"", "%", "\\" ];
entities = [ "&lt;", "&gt;", "&#35;", "&amp;", "&#39;", "&quot;", "&#37;", "&#92;" ];

word = "some bad xss phrase goes here";
out = "";
i = 0;
while (i < length(word)) {
    ordinal = toAscii(word{i});
    killBadChar = false;
    j = 0;
    while (j < length(badChars)) {
        if (word{i} == badChars[j]) {
            out .= entities[j];
            killBadChar = true;
        }
        j++;
    }
    if (killBadChar == false) {
        if (ordinal < 32 || ordinal > 126) {
            out .= " ";
        } else {
            out .= word{i};
        }
    }
    i++;
}

print( out );

Eck, rough looking pseudo-code :)

If I were doing a security review and I saw code like this used to prevent XSS, I would mark it as a finding (albeit low, for the moment). This is a 'negative' or 'blacklist' approach - the developer is rejecting known 'bad' characters rather than accepting known 'good' characters. Guys like RSnake have spent their entire careers bypassing such blacklist filters. Don't get me wrong, this method will prove effective in a lot of scenarios. Unfortunately, there are going to be special cases where this particular method fails. Consider the case when user-supplied data lands within a JavaScript tag. Example:

<script language="JavaScript">
var a = [user supplied data];
</script>

In this particular example, the proof-of-concept would look like "a; alert(document.cookie); var b=" (without the quotes). A real attack vector would have to do quite a bit of obfuscation, but a determined individual will find a way (see 'Myspace Worm').

If you are looking for a good output encoding example, check out

This method follows a 'positive security model'. It only accepts the known good values and entity-encodes all of the rest. I think the method is so simple that it can be easily ported to any language. I'd recommend you use this method in place of the two output encoding attempts listed above. Also, if your validation routines detect someone trying to enter malicious JavaScript, I'd highly consider logging the event as a "security event". Hope this helps!
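A minimal sketch of the positive-model encoder Eric describes, under the assumption that "known good" means letters and digits and everything else gets a numeric HTML entity:

```javascript
// Positive-model output encoding: pass through only known-good characters
// (letters and digits) and HTML entity-encode everything else.
function htmlEntityEncode(input) {
  let out = '';
  for (const ch of String(input)) {
    if (/[A-Za-z0-9]/.test(ch)) {
      out += ch;                              // known good: pass through
    } else {
      out += '&#' + ch.charCodeAt(0) + ';';   // everything else: numeric entity
    }
  }
  return out;
}
```

Because the whitelist contains no markup or script metacharacters at all, there is nothing for a filter-evasion trick to slip through, which is exactly the advantage over the blacklist approach critiqued above.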


Tuesday, October 2, 2007

Secure Coding Smartie

"Product teams don't get better by reading secure coding standards. They get better by working with security testers, seeing how their code gets broken by attackers, and learning from the experience. Before we expect software companies to ship better products, we need to see a top-down commitment to security, just like we saw at Microsoft. Everyone from the board room down to the QA team needs to agree that security trumps feature sets and release schedules."

Thomas Ptacek, principal with Matasano Security.

Who would have predicted that Microsoft would become the poster-child for secure application development practices?

JavaScript debugging in IE 6/7

Thank you Brendon Crawford for this excellent summary:

After thoroughly testing and trying about 9 or 10 different products, I have come up with the definitive must have list for debugging and developing in IE. These are all free BTW:

1) CSS/HTML inspecting - Microsoft Internet Explorer Developer Toolbar (Free)
2) AJAX/HTTP Inspecting - Fiddler (Free)
3) Javascript debugging - Microsoft Office XP Script Editor (Free if you have Office)

And here are the tools to avoid (too costly, difficult to use, lacking features, lacking stability, or unnecessarily complicated):

1) Microsoft Ajax View
2) Firebug Lite
3) CSSVista
4) DebugBar
5) DebugBar/CompanionJS
6) Microsoft Script Debugger
7) IE Watch
8) DocMon
9) IE WebDeveloper V2

Monday, October 1, 2007

IED and WebAppSecurity

When reading this article about how the US Military is struggling to defeat IEDs, I could not help but think of how this topic parallels the difficult time we are having with Web App Security.

Thursday, September 27, 2007

Java Applet Security?

With so many Java applet vulnerabilities, it's tough not to poke at Java applet security. But what about the real world? Are we seeing any real attacks against enterprise applets? And how good is applet security when compared to the Ajax/JavaScript web sites that we see today?

"I'm always surprised how far people will go to ding Sun/Java security, when there are so many other targets that are so much worse it's not even really the same thing." - Jeff Williams

Well here's one for the applet side...

Tuesday, September 4, 2007

Web Development Time Breakdown

What a brilliant piece of web-development wisdom! This one made me laugh out loud...

Saturday, August 18, 2007

Web Application Security Scanners

Jeff Williams over at OWASP (Chairman) / Aspect Security (CEO) posted a very insightful monologue about the State of Web Application Security Scanners to several of the OWASP eLists, and I thought it was so crucial to those of us who care about Web App Security that I placed a copy at

The takeaway is that you just cannot buy a web app scanner from one of the big three (SPI, Cenzic, Watchfire) and use that as the foundation of your application security process. Web app security scanners do not pick up a large class of errors, including business logic, access control and deeper application security problems that are not easily exposed from the endpoints. For those you need manual review by an expert, and architectural review by an expert.

Security Awareness

It's my belief that you cannot write a secure application without security awareness deeply rooted within the minds, souls and software development life-cycle practices of your software developers.

If you are trying to go from a developer team that contains no awareness to total developer security awareness and practices, the cost is prohibitive. But if security awareness training for developers becomes a regular part of your software development life cycle, the cost to train goes down dramatically over time. Continuing education is cheaper than full blown re-training.

- Jim