
OAuth and JavaScript

I am surprised to see how carelessly client-side OAuth has been implemented by almost all providers, including Google and Facebook. I am fairly sure I am not the only person who has noticed it. By now there could be at least thousands of botnets impersonating regular sites, spamming users' walls, or building a social graph as good as Facebook's. There is probably even a separate real-time bidding auction run by the impersonators. In short, OAuth + JavaScript is like locking your door and leaving the key under the doormat.

Let's look at the differences between the client-based OAuth flow and the server-based OAuth flow. First, the server-based flow:
  • As per Google's documentation, the app's server loads the page on the client (browser) with the app id (a public identifier).
  • On initiating OAuth with Google's servers, the app id and redirection_uri are passed. The Google server then calls the redirection_uri with a one-time code.
  • The app's server exchanges that code, along with its client_secret, for the access_token that carries the required permissions (a sketch of this exchange follows the list).
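As a rough sketch of that last exchange (assuming Node.js 18+ with the built-in fetch; the endpoint and field names are the standard Google OAuth 2.0 ones, and the id, secret, and redirect values below are placeholders, not taken from any real app):

    // Server-side exchange of the one-time code for an access_token.
    // This runs on the app's server, never in the browser, because it needs client_secret.
    async function exchangeCodeForToken(code) {
      const response = await fetch('https://oauth2.googleapis.com/token', {
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body: new URLSearchParams({
          code,                                   // the code Google sent to redirection_uri
          client_id: 'YOUR_APP_ID',               // public app id
          client_secret: 'YOUR_CLIENT_SECRET',    // known only to the app's server
          redirect_uri: 'https://yourapp.example/oauth/callback',
          grant_type: 'authorization_code',
        }),
      });
      const tokens = await response.json();      // { access_token, expires_in, ... }
      return tokens.access_token;
    }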
This flow seems foolproof, since only the trusted party holding the secret ends up with the token. In a client-only app, the client_secret cannot be used, so that step is simply dropped. The flow becomes:
  • The app's server loads the page on the client (browser) with the app id (a public identifier).
  • On initiating OAuth with Google's servers, the app id and redirection_uri are passed. The Google server calls the redirection_uri directly with the access_token if the referrer and app id match (sketched below).
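A hedged sketch of what this client-only (implicit) flow looks like in browser JavaScript; the authorization endpoint and parameter names follow Google's standard OAuth 2.0 implicit grant, and the client_id / redirect_uri values are placeholders:

    // Client-only flow: everything happens in the browser, no client_secret anywhere.
    // Step 1: send the user to the provider's authorization endpoint.
    const params = new URLSearchParams({
      client_id: 'YOUR_APP_ID',                        // public app id, visible to anyone
      redirect_uri: 'https://yourapp.example/callback',
      response_type: 'token',                          // ask for the access_token directly
      scope: 'profile email',
    });
    window.location = 'https://accounts.google.com/o/oauth2/v2/auth?' + params;

    // Step 2: after the user consents, the provider redirects back with the token
    // in the URL fragment, e.g. https://yourapp.example/callback#access_token=...
    const fragment = new URLSearchParams(window.location.hash.slice(1));
    const accessToken = fragment.get('access_token');

Anyone who can serve a page at the registered redirect_uri, or trick the browser into resolving that host to their own server, walks away with the token.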
That's it. This access_token can now be used to post on walls or read profile information. The same is true for Facebook.

Let's look at how this JavaScript-only flow affects end users and third-party app developers.

End Users:
This opens a huge door for phishing. A virus could modify /etc/hosts, or a compromised network could mount a man-in-the-middle attack, and direct the user to a fake website. The fake site simply reuses the client_id of the original site and is then granted every permission the original (trusted) site would have received. The attackers are free to spam the user's wall, if the original app requested that permission, and they can read the user's entire profile.
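For illustration only, a single poisoned line in the victim's hosts file is enough; the IP below is a documentation address standing in for the attacker's web server:

    # /etc/hosts on the compromised machine
    203.0.113.10    digg.com www.digg.com

The browser now happily loads the impostor while the address bar still shows digg.com, and without HTTPS nothing warns the user.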
Some examples

The first screenshot shows a site impersonating digg.com and pulling all my profile details. It actually runs on my local web server.


The second shows a wall post made by the fake Quora app.

As an end user, it is better to trust OAuth only on sites served over HTTPS, because HTTPS verifies the authenticity of the server you are talking to. There are other ways an app can protect itself from such compromises, but they are hard for an ordinary user to verify. Don't use OAuth if the app is not on HTTPS. So using OAuth on Quora (in the example above I deliberately ignored the HTTPS warning from my browser) is much safer than using OAuth on digg.

App Developer:
App developers should make sure their app is served over HTTPS so that users can trust they are talking to the real site. Facebook also provides a few options under the app's advanced settings:
  1. Client OAuth Login
  2. Embedded browser OAuth Login
  3. App Secret Proof for Server API calls
Set appropriately, these options make the app secure: turning off the first two disables client-based OAuth entirely, and the third forces every server API call to prove knowledge of the app secret (a sketch of that mechanism follows). But as far as I can see, even apps from Quora, digg, and Flipkart have not configured these properly.
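For the third option, here is a minimal Node.js sketch of how an appsecret_proof is computed and attached to a Graph API call (the parameter names follow Facebook's documented scheme; the token and secret values are placeholders):

    const crypto = require('crypto');

    // appsecret_proof = HMAC-SHA256 of the access token, keyed with the app secret.
    // With "App Secret Proof for Server API calls" required, Graph API requests
    // lacking a valid proof are rejected, so a stolen access_token alone is useless.
    function appSecretProof(accessToken, appSecret) {
      return crypto.createHmac('sha256', appSecret)
                   .update(accessToken)
                   .digest('hex');
    }

    const accessToken = 'USER_ACCESS_TOKEN';   // placeholder
    const appSecret   = 'APP_SECRET';          // kept only on the app's server
    const url = 'https://graph.facebook.com/me'
              + '?access_token=' + accessToken
              + '&appsecret_proof=' + appSecretProof(accessToken, appSecret);

Because the proof requires the app secret, only the app's own servers can make these calls, which is exactly the property the client-only flow gives up.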
So the fix is either to serve the app only over HTTPS if you use client-based OAuth, or to set the options above so that JavaScript-based OAuth is disabled altogether.
OAuth Provider:
OAuth providers should understand the sensitivity of the permissions they are handing to an app and make sure the app is not a spoofed one. Requiring the client_secret is the only way to guarantee this. But when contacted, Facebook and Google both brushed off the responsibility, stating that it is up to the app developer to make things secure.
Quoting Google
"I forwarded your report to the engineers working on OAuth, but as you noticed, this is not really a vulnerability but rather a consequence of how OAuth is designed and the fact that the Web has not fully moved to HTTPS."
Quoting Facebook
"Thanks for writing in. We provide application developers with the opportunity to specify a whitelist of valid URIs for redirection on OAuth. If an application does not specify such a whitelist, we allow the request to be sent to any domain that the application has authorized, which also means the request can be made to the HTTP version of the site. We allow redirects to HTTP in general because not all applications on our platform have full support for SSL. The behavior you're describing is caused by the configuration of individual applications"
But OAuth as a protocol is secure when the client_secret is required. That guarantee has been traded away to make a pure-JavaScript implementation possible. Security should not be compromised for convenience.



