Who owns and regulates MY Facebook data?

February 14, 2012 16 comments

I am also the editor of the Neohapsis Labs blog. The following is reprinted with permission from

http://labs.neohapsis.com/

My previous post briefly described what makes up a user's Facebook data; this post will try to shed light on who owns and regulates that data.

I am probably not going out on a limb to say that the majority of Facebook's registered users have not read the privacy statement. I was like most users myself, in that I did not fully read Facebook's privacy statement when signing up for the service. Facebook created an online social media network at a time when few rules had been defined for that type of business in America or anywhere else. That lack of rules, combined with users constantly uploading more data, has allowed Facebook to maximize the use of your data and build a behemoth of a social media networking business.

Over time, Facebook has added features that let users self-regulate their data by limiting who (whether Facebook users or the general Internet public) can view data that one might want to share only with family or specific friends. This gave users a sense of ownership and privacy, as the creator of the data could block or restrict friends and search providers from viewing it. Zuckerberg is even quoted by the WSJ as saying, "The power here is that people have information they don't want to share with everyone. If you give people very tight control over what information they are sharing or who they are sharing with they will actually share more. One example is that one third of our users share their cell phone number on the site".

In addition to privacy controls, Facebook gave users more insight into their data through a feature that lets a user download 'all' their data via a button in the account settings. I placed 'all' in quotes because, while you could download your Facebook profile data, the export did not include wall comments, links, information tagged by other Facebook users, or other data you created during your Facebook experience. Together, privacy controls and data export have been the main forms of control Facebook has given its users over profile, picture, note, link, tag, and comment data since Facebook went live in 2004.

So now you might be thinking the problem is solved: restricting your privacy settings and downloading 'all' your information fixes everything. I wish that were the case with Facebook's business operations. An open letter by ten security professionals to the US Congress highlighted that this is simply not how Facebook and third-party Facebook developers operate. Facebook has reserved the right to change its privacy statement at any time with no notice to the user, and has done so several times, to an uproar from its user base. As Facebook has grown in popularity and company footprint, security professionals and media outlets have published security studies painting Facebook in a darker light.

As highlighted in late 2011, Facebook was not respecting users' privacy when sharing information with advertisers, and was automatically enabling new services with settings that contradicted users' existing privacy choices. Facebook settled with the US Federal Trade Commission on charges of deceiving users by telling them they could keep their data private. From my perspective, Facebook appears willing to contradict its users' privacy when it suits the interests of shareholders and business revenue.

In another privacy mishap, an Austrian student discovered that Facebook was storing user details even after a user deactivated the service. This sparked a Europe versus Facebook initiative across the Internet that put heat on Facebook to give more details on how long data is retained for current and deactivated users. Holding on to user data is lucrative for Facebook, as it allows the company to claim more users when selling to advertising subscribers and to promote a larger total user base for private investors' bottom lines.

The next question one might ask is "who regulates my data held by social media companies?" Summed up quickly: today, no one outside Facebook is regulating your data, and users are given little insight into the process. The US government, along with the European Union, is looking at ways to regulate Facebook's operations through mechanisms such as data privacy regulations and the US-EU Safe Harbor framework. With Facebook announcing a five billion USD initial public offering, more regulation, at least financial, is sure to hit Facebook in the future.

As an outcome of the 2011 investigation and settlement, Facebook has agreed to independent audits by third parties, presumably of its own choosing. I have not been able to identify details regarding the scope of these audits or the ramifications of any findings. Facebook has also updated its public statements and communication to developers, and now states that deactivated users will have their accounts deleted after 30 days. I have yet to see a change in Facebook's operations regarding respect for users' privacy settings where third parties and other outside entities are concerned. In fairness, Facebook insists data is not directly shared for advertising, although some British folks may disagree with those claims of advertising privacy.

From an information security perspective, my 'free' advice to businesses, developers, and end users is this: do not access or give more data than is necessary for the user experience, as this only brings trouble in the long run. While I would like to give Facebook the benefit of the doubt in its operations, I personally only share data that I am comfortable sharing with the world, even when it is limited to friends. In global business, data privacy regulations vary significantly between countries; with regulations come requirements, and failing to meet requirements results in fines, so businesses need to think about accessing only appropriate information and restricting access accordingly. For the end user, who is Facebook's product, remember that Facebook can change its privacy statement at its leisure, and Facebook is ultimately a business with stakeholders eager to see quarter-after-quarter growth.

I hope this post has been insightful; please check back soon for my future post on how your Facebook data is being used and the different entities that want access to it.

Anonymous Releases FBI and UK Conference Call Recording

February 5, 2012 1 comment

I am also the editor of the Neohapsis Labs blog. The following is reprinted with permission from
http://labs.neohapsis.com/

This past Friday (February 3, 2012), Anonymous released a recording of an assumed-confidential conference call between two FBI field offices and an official UK investigative office regarding the status of Anonymous, AntiSec, LulzSec, and other splinter cyber groups. It was released on ThePirateBay, YouTube, and Pastebin. From the Pastebin posts, the conference call appears to have been related to a meeting invite, sent on January 13, 2012 to nearly 50 people from France, the UK, the Netherlands, Ireland, Germany, and Sweden, for a call on January 17, 2012 hosted by the US to coordinate internationally. The posts were made by Anonymous as part of their #FFF (F@$K FBI Friday) releases, which have been going on almost regularly for over a year now.

It is unclear from the YouTube audio whether the call took place on January 20 or was a more recent conference call between the governments. I think it was probably only released because the hacking groups found no more use for that bridge number. Listening to the call, one can gain insight into the global workings of the fight against cyber crime: two current cases were lightly discussed, and insight was sought regarding other persons of interest in breaches reported to government authorities. I found the lack of care around people joining the call interesting, as I could hear the extra beep that the call parties missed, which I assume was 'Anonymous' joining to record the call.

Few facts can be gathered about how Anonymous obtained the electronic invitation for the meeting. Given the Pastebin post containing the conference bridge number and password, it does not appear that the conference system or software was hacked to gain access to the call. One might assume that an email account or system on the distribution list was compromised to obtain the conference details, or that some form of social engineering was used. Either way, Anonymous has again given government and private industry a reason to rethink their processes for distributing details of sensitive calls. On future calls, I would expect that every time the system beeps for a new attendee there will be a pause to ask who has just joined the conference, especially when active investigations or sensitive information are being discussed.

While there is a need to share passwords for conference calls, it is important to mitigate the risks that come with a shared password. Typically this is done by paying very close attention to people joining the call and stopping conversation when the system beeps for a new participant. If the conference call covers sensitive or classified information, the call should be halted if all parties on the conference system cannot be identified.

Also, all parties should read The New York Times article about boardrooms being opened up to hackers through weak implementation security, as it has some relevance here.

Categories: Information Security

Groundhog Day in the Application Security World

February 1, 2012 2 comments

I am also the editor of the Neohapsis Labs blog. The following is reprinted with permission from
http://labs.neohapsis.com/

By Michael Pearce, a Security Consultant and Researcher at Neohapsis

Throughout the US on Groundhog Day, an inordinate amount of media attention will be given to small furry creatures and whether or not they emerge into bright sunlight or cloudy skies. In a tradition that may seem rather topsy-turvy to those not familiar with it, the story says that if the groundhog sees his shadow (indicating the sun is shining), he returns to his hole to sleep for six more weeks and avoid the winter weather that is to come.

Similarly, when a company comes into the world of security and begins to endure the glare of security testing, the shadow of what they find can be enough to send them back into hiding. However, with the right preparation and mindset, businesses can not only withstand the sight of insecurity, they can begin to make meaningful and incremental improvements to ensure that the next time they face the sun the shadow is far less intimidating.

Hundreds or thousands of issues – Why?

It is not uncommon for a Neohapsis consultant to find hundreds of potential issues to sort through when assessing a legacy application or website for the first time. This can be due to a number of reasons, but the most prominent are:

  1. Security tools that are paranoid/badly tuned/misunderstood
  2. Lack of developer security awareness
  3. Threats and technologies have evolved since the application was designed/deployed/developed

Security Tools that are Paranoid/Badly Tuned/Misunderstood

Security testing and auditing tools, by their nature, have to be flexible, able to work in most environments, and able to operate at various levels of paranoia. Because of this, if they are not configured and interpreted with the specifics of your application in mind, they will often find a large number of issues, the majority of which are noise that should be ignored until the more important issues are fixed. If you have a serious, unauthenticated SQL injection that exposes plain-text credit card and payment details, you probably shouldn't spend a moment stressing about whether your website allows 4 or 5 failed logins before locking an account.

Lack of Developer Security Awareness

Developers are human (at least in my experience!), and have all the usual foibles of humanity. They are affected by business pressures to release first and fix bugs later, with the result that security bugs may be de-prioritized as "no-one will find that," and so "later" never comes. Developers are also often taught security as an add-on rather than a core concept. For instance, when I was learning programming, I was first taught to construct SQL strings and verbatim webpage output, and only much later to use parameterized queries and HTML encoding. As a result, even though I know better, I sometimes find myself falling into bad practices that could introduce SQL injection or cross-site scripting, as the practices that introduce these threats come more naturally to me than the secure equivalents.
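To make that concrete, here is a minimal sketch in Python (using the standard sqlite3 module; the table and the attacker-controlled value are invented for illustration) of the insecure habit next to its secure equivalent:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "nobody' OR '1'='1"  # attacker-controlled value

# Bad habit: concatenating input straight into the query string.
# The injected OR clause matches every row in the table.
rows = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("concatenated:", rows)   # [('admin',)] -- data leaks

# Secure equivalent: a parameterized query treats the input as data only.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized:", rows)  # [] -- no user is literally named that

The concatenated version lets the attacker's quote character rewrite the query; the parameterized version hands the same input to the database as plain data.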

Threats and Technologies have Evolved Since the Application was Designed/Deployed/Developed

To make it even harder to manage security, many legacy applications are developed in old technologies which are either unaware of security issues, have no way of dealing with them, or both. For instance, while SQL injection has been known about for around 15 years, and cross-site scripting a little less than that, other threats are far more recent, such as clickjacking and CSS history stealing.

When an application was developed without awareness of a threat, it is often more vulnerable to it, and when it was built on a technology that was less mature in approaching the threat, remediating the issues can be far more difficult. For instance, try remediating SQL injection in a legacy ASP application by changing queries from string concatenation to parameterized queries (ADODB objects aren't exactly elegant to use!).

Dealing with issues

Once you have found issues, then comes the daunting task of prioritizing, managing, and preventing their recurrence. This is the part that can bring the shock, and the part that can require the most care, as it is a task of managing complexity.

The response to issues requires not only looking at what you have found previously, but also what you have to do, and where you want to go. Breaking this down:

  1. Understand the Past – Deal with existing issues
  2. Manage the Present – Remedy old issues, prevent introduction of new issues where possible
  3. Prepare for the Future – Expect new threats to arise

Understand the Past – Deal with Existing Issues

When dealing with security reports, it is important to always be psychologically and organizationally prepared for what you find. As already discussed, this is often unpleasant and the first reactions can lead to dangerous behaviors such as overreaction (“fire the person responsible”) or disillusionment (“we couldn’t possibly fix all that!”). The initial results may be frightening, but flight is not an option, so you need to fight.

To understand what you have in front of you, and to react appropriately, it is imperative that the person interpreting the results understands the tools used to develop the application; the threats surrounding the application; and the security tool and its results. If your organization is not confident in this ability, consider getting outside help or consultants (such as Neohapsis) in to explain the background and context of your findings.

Manage the Present – Remedy Old Issues, Prevent Introduction of New Issues Where Possible

Much like any software bug or defect, once you have an idea of what your overall results mean you should start making sense of them. This can be greatly aided through the use of a system (such as Neohapsis Security Manager) which can take vulnerability data from a large number of sources and track issues across time in a similar way to a bug tracker.
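To illustrate the idea, here is a minimal sketch in Python of tracking findings across scans the way a bug tracker would; the Finding fields and the merge_scan helper are invented for illustration and are not Neohapsis Security Manager's actual data model:

from dataclasses import dataclass

@dataclass
class Finding:
    key: str            # stable identity, e.g. "XSS:/search:q"
    severity: str
    first_seen: str     # scan date the issue first appeared
    last_seen: str      # most recent scan date it was still present
    open: bool = True

def merge_scan(tracker: dict, scan_date: str, scan_findings: set) -> dict:
    """Fold one scan's results into the tracker: open new issues,
    refresh ones still present, close ones that disappeared."""
    for key in scan_findings:
        if key in tracker:
            tracker[key].last_seen = scan_date
            tracker[key].open = True
        else:
            tracker[key] = Finding(key, "unknown", scan_date, scan_date)
    for key, finding in tracker.items():
        if key not in scan_findings:
            finding.open = False
    return tracker

Run after every scan, this keeps a history of when each issue first appeared and whether it is still open, which is the view that makes prioritization discussions concrete.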

Issues found should then be dealt with in order of the threat they present to your application and organization. We have often observed a tendency to go straight for the vulnerabilities a tool labels "critical," irrespective of what they mean in the context of your business and application. A SQL injection bug in an administration interface accessible only to trusted users is probably far less serious than a logic flaw that lets users order items while modifying the price communicated and charged down to zero.

Also, if required, your organization should rapidly institute training and awareness programs so that no more avoidable issues are introduced. This can be aided by integrating security testing into your QA and pre-production testing.

Prepare for the Future – Expect New Threats to Arise

Nevertheless, even if you do everything right, and even if your developers do not introduce any avoidable vulnerabilities, new issues will probably be found as the threats evolve. To detect these, you need to regularly have security tests performed (both human and automated), keep up with the security state of the technologies in use, and have plans in place to deal with any new issues that are found.

Closing

It is not unusual to find a frightening degree of insecurity when you first bring your applications into the world of security testing, but diving back to hide is not prudent. Utilizing the right experience and tools can turn being afraid of your own shadow into being prepared for the changes to come. After all, if the cloud isn’t on the horizon for your company then you are probably already immersed in it.

What Makes Up Facebook Data?

February 1, 2012 1 comment

I am also the editor of the Neohapsis Labs blog. The following is reprinted with permission from
http://labs.neohapsis.com/

This is the first post in our Social Networking series.

My guess is that you would not give a person who knocked on your front door or approached you in the street most of the data Facebook collects in your profile. Facebook profile data consists of many things, including your birth date, email, physical address, current location, work history, education history, and whatever you input for activities, interests, and music (interestingly, much of this can be used for identity theft…). In addition to your profile data, any installed or authenticated Facebook application has access to your wall posts and list of friends, as well as any other data shared with "Everyone".

As Facebook adds new features, the data included in your Facebook profile has probably crept to include uploaded pictures, application usage and history, and tags in posts or pictures. Facebook will always be looking for ways to collect more of your data, because YOU are their product. Your data, your friends' data, and the data of everyone else on Facebook is where Facebook collects its profit and, as with most businesses, profits need to increase through expanding markets and giving access to their product.

The data Facebook collects on you can also include cookie tracking, even when you are not explicitly on their website. Facebook heard much uproar from the user community when a security researcher in September 2011 [link] discovered Facebook was tracking even users who had gone as far as deactivating their accounts! Facebook could then track their entire web history, even through web sites not related to Facebook in any way.

You do have the ability to limit data on Facebook and to make sound decisions about what personal data you submit to Facebook (friends are another matter). By using Facebook's 'free' services, you inherently lose some control over the information you share with friends. There are a few important factors to think about in dealing with social media, and my next post will shine some light on who actually owns and regulates your data within Facebook; stay tuned, and feedback is always welcome.

Keychain Dumper Updated for iOS 5

January 25, 2012 7 comments

I am also the editor of the Neohapsis Labs blog. The following is reprinted with permission from
http://labs.neohapsis.com/

By Patrick Toomey

Direct link to keychaindumper (for those that want to skip the article and get straight to the code)

Last year I pushed a copy of Keychain Dumper to GitHub to help audit what sensitive information an application stores on an iOS device using Apple's Keychain service. I've received a few issue submissions on GitHub regarding problems people have had getting Keychain Dumper to work on iOS 5. I meant to look into it earlier, but was not able to dedicate any time until this week. Besides a small update to the Makefile to make it compatible with the latest SDK, the core issue seemed to have something to do with code signing.

Using the directions I published last year, the high-level steps to dump keychain passwords were:

  1. Compile
  2. Push keychain_dumper to iOS device
  3. Use keychain_dumper to export all the required entitlements
  4. Use ldid to sign these entitlements into keychain_dumper
  5. Rerun keychain_dumper to dump all accessible keychain items

Steps 1-3 continue to work fine on iOS 5. It is step 4 that breaks, and it just so happens that this step was the one step I completely took for granted, as I had never looked into how ldid works. Originally, signing the entitlements into keychain_dumper was as simple as:

./keychain_dumper -e > /var/tmp/entitlements.xml
ldid -S/var/tmp/entitlements.xml keychain_dumper

However, on iOS 5 you get the following error when running ldid:

codesign_allocate: object: keychain_dumper1 malformed object
(unknown load command 8)
util/ldid.cpp(582): _assert(78:WEXITSTATUS(status) == 0)

Looking around for the code to ldid, I found that ldid actually doesn’t do much of the heavy lifting with regard to code signing, as ldid simply execs out to another command, codesign_allocate (the allocate variable below):

execlp(allocate, allocate, "-i", path, "-a", arch, ssize, "-o", temp, NULL);

The Cydia code for codesign_allocate looks to be based on the odcctools project that was once open-source from Apple. I am unclear, but it appears as though this codebase was eventually made closed-source by Apple, as the code does not appear to have been updated anytime recently. Digging into the code for codesign_allocate, the error above:

unknown load command 8

makes much more sense, as codesign_allocate parses each of the Mach-O headers, including each of the load commands that are part of the Mach-O file structure. It appears that load command 8 must have been added sometime between when I first released Keychain Dumper and now, as the version of codesign_allocate that comes with Cydia does not support parsing this header. This header is responsible for determining the minimum required version of iOS for the application. If someone knows a compile-time flag to prevent this (and possibly other) unsupported header(s) from being used, let me know and I'll update the Makefile. The other options to get the tool working again were to either update odcctools to work with the new Mach-O structure or figure out an alternative way of signing applications.

Historically there have been three ways to create pseudo-signatures for Cydia-based applications (you can see them here). The third uses sysctl and is no longer valid, as Apple made some changes that render the relevant configuration entries read-only. The second option uses ldid, and was the approach I used originally. The first uses the tools that come with OS X to create a self-signed certificate and then uses the command line developer tools to sign your jailbroken iOS application with that certificate. It appears as though the tools provided by Apple are basically updated versions of the odcctools project referenced earlier. The same codesign_allocate tool exists, and looks to be more up to date, with support for parsing all the relevant headers. I decided to leverage the integrated tools, as that seems the best way to ensure compatibility going forward. Using Apple's tools I was able to sign the necessary entitlements into keychain_dumper and dump all the relevant Keychain items as before. The steps for getting this to work are as follows:

  1. Open up the Keychain Access app located in /Applications/Utilities/Keychain Access
  2. From the application menu open Keychain Access -> Certificate Assistant -> Create a Certificate
  3. Enter a name for the certificate, and make note of this name, as you will need it later when you sign Keychain Dumper. Make sure the Identity Type is "Self Signed Root" and the Certificate Type is "Code Signing". You don't need to check "Let me override defaults" unless you want to change other properties of the certificate (name, email, etc.).
  4. Compile Keychain Dumper as instructed in the original blog post (condensed below):

ln -s /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS5.0.sdk/ sdk
ln -s /Developer/Platforms/iPhoneOS.platform/Developer toolchain
make

  5. scp the resulting keychain_dumper to your phone (any file system reference made here will be in /tmp)
  6. Dump the entitlements from the phone:

./keychain_dumper -e > /var/tmp/entitlements.xml

  7. Copy the entitlements.xml file back to your development machine (or just cat the contents and copy/paste)
  8. Sign the application with the entitlements and the certificate generated earlier (you must select "Allow" when prompted to allow the code signing tool to access the private key of the certificate we generated):

codesign -fs "Test Cert 1" --entitlements entitlements.xml keychain_dumper

  9. scp the resulting keychain_dumper to your phone (you can remove the original copy we uploaded in step 5)
  10. You should now be able to dump Keychain items as before:

./keychain_dumper

  11. If all of the above worked, you will see numerous entries that look similar to the following:

Service: Vendor1_Here
Account: remote
Entitlement Group: R96HGCUQ8V.*
Label: Generic
Field: data
Keychain Data: SenSiTive_PassWorD_Here

Now, the above directions work fine, but after dumping the passwords something caught my eye: notice the asterisk in the entitlement group above. The Keychain system restricts access to individual entries according to the "Entitlement Group", which is why we first dumped all of the entitlement groups used by applications on the phone and then signed those entitlements into the keychain_dumper binary. I thought that the asterisk in the above entitlement group might have some sort of wildcarding properties. So, as a proof of concept, I created a new entitlements.xml file that contains a single entitlement:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>keychain-access-groups</key>
    <array>
        <string>*</string>
    </array>
</dict>
</plist>

The above file lists a single entitlement defined by "*", which, if the assumption holds, might wildcard all Entitlement Groups. Using this file, I recompiled Keychain Dumper and proceeded to re-sign the application with the wildcard entitlements.xml file. Surprise… it works, and it simplifies the workflow. I have added this entitlements.xml file to the Git repository. Instead of first needing to upload keychain_dumper to the phone, dump entitlements, and then sign the app, you can simply sign the application using this entitlements.xml file. Moreover, the binary checked in to Git is already signed using the wildcard entitlement and should work with any jailbroken iOS device running iOS 5, as there is no need to customize the binary for each user's device/entitlements.

The consolidated directions can be found on the GitHub repository page, found here. A direct link to the binary, which should hopefully work on any jailbroken iOS 5 device, can be found here.

Categories: Information Security

Facebook Applications Have Nagging Vulnerabilities

January 3, 2012 3 comments

I am also the editor of the Neohapsis Labs blog. The following is reprinted with permission from
http://labs.neohapsis.com/

By Neohapsis Researchers Andy Hoernecke and Scott Behrens

This is the second post in our Social Networking series. (Read the first one here.)

As Facebook's application platform has become more popular, the composition of applications has evolved. While early applications seemed to focus on either social gaming or extending the capabilities of Facebook, Facebook is now being utilized as a platform by major companies to foster interaction with their customers in a variety of forms, such as sweepstakes, promotions, shopping, and more.

And why not? We've all heard the numbers: Facebook has 800 million active users, 50% of whom log on every day. On average, more than 20 million Facebook applications are installed by users every day, while more than 7 million applications and websites remain integrated with Facebook. (1) Additionally, Facebook is seen as a treasure trove of valuable data, accessible to anyone who can get enough "Likes" on their page or application.

As corporate investments in social applications have grown, Neohapsis Labs researchers have been asked to help clients assess these applications and determine what risk exposure their release may pose. We took a sample of the applications we have assessed and pulled together some interesting trends. For context, most of these applications are very small (2-4 dynamic pages). The functionality ranged from simple sweepstakes entry forms and contests with content submission (photos, essays, videos, etc.) to gaming and shopping applications.

From our sample, we found that the applications assessed had, on average, vulnerabilities in 2.5 vulnerability classes (e.g., Cross-Site Scripting or SQL Injection), and none of the applications were completely free of vulnerabilities. Given that the attack surface of these applications is so small, this is a somewhat surprising statistic.

The most commonly identified findings in our sample group were Cross-Site Scripting, Insufficient Transport Layer Protection, and Insecure File Upload vulnerabilities. Each of these vulnerability classes is discussed below, along with how the social networking aspect of the applications affects its potential impact.

Facebook applications suffer most from Cross-Site Scripting, which was identified in 46% of the applications sampled. This is not surprising, since this age-old problem still creeps into many corporate and personal applications today. An application vulnerable to XSS could be used to attempt browser-based exploits or to steal session cookies (but only in the context of the application's domain).

These types of applications are generally framed inline [inline framing, or iframing, is a common HTML technique for framing media content] on a Facebook page from the developer's own servers/domain. This alleviates some of the risk to the user's Facebook account, since the JavaScript can't access Facebook's session cookies. And even if it could, Facebook uses HttpOnly flags to prevent JavaScript from accessing session cookie values. But we have found that companies tend to reuse the same domain name for these applications, since the real URL is generally never visible to the end user. This means that if one application has an XSS vulnerability, it can present a risk to any other application hosted at the same domain.
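As an aside, the HttpOnly protection mentioned above is just an attribute on the Set-Cookie response header. Here is a minimal sketch in Python (standard library only; the cookie name and value are invented for illustration) of a server marking its session cookie so that in-page JavaScript, including anything injected via XSS, cannot read it:

from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # HttpOnly keeps document.cookie from exposing the session value;
        # Secure ensures the cookie is only ever sent over HTTPS.
        self.send_header("Set-Cookie", "session_id=abc123; HttpOnly; Secure")
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>HttpOnly demo</body></html>")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()

With that flag set, document.cookie in the browser will not expose session_id, though the cookie is still sent with every request to the issuing domain.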

When third-party developers enter the picture all this becomes even more of a concern, since two clients’ applications may be sharing the same domain and thus be in some ways reliant on the security of the other client’s application.

The second most commonly identified vulnerability, affecting 37% of the sample, was Insufficient Transport Layer Protection. While it is a common myth that conducting a man-in-the-middle attack against cleartext protocols is impossibly difficult, the truth is that it's relatively simple. Tools such as Firesheep aid in this process, allowing an attacker to create custom JavaScript handlers to capture and replay the right session cookies. About an hour after downloading Firesheep and looking at examples, we wrote a custom handler for an application under assessment that used SSL only when submitting login information. On an unprotected WIFI network, as soon as the application sent any information over HTTP we had valid session cookies, which were easily replayed to compromise the victim's session.
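The replay half of that attack needs nothing exotic. Here is a minimal sketch in Python (standard library only; the host, path, and cookie value are hypothetical placeholders) showing that presenting a sniffed session cookie over plain HTTP is a one-request job:

import http.client

# Value captured off the unprotected WIFI network; purely illustrative.
captured_cookie = "session_id=abc123"

conn = http.client.HTTPConnection("vulnerable-app.example.com")
# The server cannot tell this request from the victim's own browser.
conn.request("GET", "/account", headers={"Cookie": captured_cookie})
response = conn.getresponse()
print(response.status, response.read()[:200])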

Once again, the impact of this finding depends on the functionality of the application, but the wide variety of applications on Facebook provides an interesting and varied landscape for the attacker to choose from. We only flagged this vulnerability under specific circumstances: where the application cookies were somehow important (for example, being used to identify a logged-in session), or where the application included functionality in which sensitive data (such as PII or credit card data) was transmitted.

The third most commonly identified finding was Insecure File Upload. To us, this was surprising, since it is generally not considered one of the most common vulnerabilities across all web applications. Nevertheless, 27% of our sample included this type of vulnerability. We attribute its identification rate to the prevalence of social applications that include some type of file upload functionality (to share an avatar, photo, document, movie, etc.).

We found that many of the applications we assessed implemented their file upload functionality in an insecure way. Most of the applications did not check content-type headers or even file extensions. Although none of the vulnerabilities discovered led to command injection, almost every one allowed the attacker to upload JavaScript, HTML, or other potentially malicious files such as PDFs and executables. Depending on the domain name affected, this flaw aids an attacker's social engineering efforts, as the attacker now has malicious files hosted on a trusted domain.
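For contrast, here is a minimal sketch in Python of the kind of server-side check that was missing; the allow lists and helper name are invented for illustration, and a real implementation would also inspect the file contents rather than trust the client-supplied content type:

import os

ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif"}
ALLOWED_CONTENT_TYPES = {"image/jpeg", "image/png", "image/gif"}

def is_upload_allowed(filename: str, content_type: str) -> bool:
    ext = os.path.splitext(filename)[1].lower()
    # Accept only entries on the allow list; never rely on a block list
    # of "bad" extensions such as .html or .exe.
    return ext in ALLOWED_EXTENSIONS and content_type in ALLOWED_CONTENT_TYPES

# An uploaded "avatar.html" with a text/html content type is refused.
assert not is_upload_allowed("avatar.html", "text/html")
assert is_upload_allowed("avatar.png", "image/png")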

Our assessments also identified a wide range of other vulnerability types. For example, we found several of these applications utilizing publicly available admin interfaces with guessable credentials. Furthermore, at least one of the admin interfaces was riddled with stored XSS vulnerabilities. Server configuration was also a frequent problem, with unnecessarily exposed services and insecure configurations repeatedly identified.

Finally, we also found that many of these web applications had interesting issues that are generally unlikely to affect a standard web application. For example, social applications with a contest component may need to worry about the integrity of the contest. If a malicious user can game the contest (for example, by cheating at a social game and posting a fake high score), this can reflect badly on the application, the contest, and the sponsoring brand.

Even though development of applications integrated with Facebook and other social networking sites is increasing, we've found that companies still tend to handle these outside of their normal security processes. It is important to realize that these applications can present real risk and should be examined just as thoroughly as traditional standalone web applications.

Becoming a Thought Leader in 2012 – Now you can do it too

December 21, 2011 Leave a comment

Being a thought leader is a really hard job. I’ve been doing it for so long it’s second nature.  But for those of you who wish to know the secrets of thought leadership, check out this video by Chris Eng, and maybe you can become as cool as me.

 

Categories: Uncategorized