Open source helps Facebook achieve massive app scalability

People all over the world spend a total of eight billion minutes a day on Facebook. Some 400 billion Web pages are viewed every month, 3.5 billion pieces of content are shared every week and the site logs a staggering 25TB of data every day. David Recordon, senior open programs manager at Facebook, and Facebook engineer Scott MacVicar talk about how the social networking giant uses open source tools to achieve its massive app scalability.

Much of Facebook's code base is written in PHP, an interpreted language that is fast to develop in but costly to run at Facebook's scale. To solve these performance issues, Facebook developed a tool called HipHop that transforms PHP into optimised C++ code. HipHop is designed to give the best of both worlds -- the development speed of an interpreted language like PHP and the runtime performance of compiled C++.

“What makes HipHop work really well is we can take the majority of our code base and greatly speed it up,” Recordon says.

Facebook also develops software in C++, Python and Erlang, and has found that writing extensions for PHP can be difficult because the macros involved are not well documented.

When asked why the company did not simply rewrite its applications in C++, MacVicar says the reason is that Facebook's code base is about 4 million lines of PHP, so "the first problem would be how to translate all of that to C++ without holding up development on the site".

With HipHop, Facebook's PHP code goes through a seven-step transformation process in which the code is optimised and then compiled.

“We want to minimise the differences between PHP and HipHop so you can use either [and] support for Apache is on the roadmap,” MacVicar says.

Interestingly, Facebook’s adoption of HipHop, with its own embedded Web server, is now pushing Apache out of the stack.

“We’ve generally been using Apache with PHP, but HipHop has its own embedded Web server, which is a really simple Web server built on top of libevent. So now we have been moving to using that,” Recordon says.
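The architectural idea, an application process that serves HTTP itself rather than sitting behind Apache, can be sketched in a few lines of Python. This is purely illustrative; HipHop's actual embedded server is C++ code built on top of libevent.

```python
# Minimal sketch of an application that embeds its own HTTP server
# instead of being deployed behind a separate Apache layer.
# Purely illustrative; HipHop's embedded server is C++ on libevent.
from http.server import BaseHTTPRequestHandler, HTTPServer

class AppHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # In HipHop's case the "application logic" is the compiled PHP code base.
        body = b"Hello from the embedded server\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # One process serves HTTP directly; no external web server in front.
    HTTPServer(("0.0.0.0", 8080), AppHandler).serve_forever()
```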

HipHop only compiles on Linux, but MacVicar says someone from Microsoft has contacted Facebook and “hopefully they will contribute the Windows port”.

“Hopefully someone in the community will contribute the Mac port, or I will do it myself,” he says.

While Facebook makes the most of the components it has developed in a variety of languages -- C++, Erlang (used for the chat service), Java and Python -- the company's philosophy is not to settle on a single language when building infrastructure.

“The entire Web server infrastructure is PHP, but we use many different languages, from a backend infrastructure perspective, depending on the service,” Recordon says.

“Philosophically, we think about ‘when do we need a service’. Is it something that needs to be really quick? Is it something that currently has a big overhead in terms of our application layer deployment and maintenance? Do we want another failure point in our network? We need to balance those.”

Standard tools for common tasks

Facebook's extensive use of open source software has also fostered a culture of giving back to the community. HipHop is open source, as are many of the system administration and data integration tools that Facebook uses.

“We use Thrift to communicate between all [the] services we have, something we open sourced,” MacVicar says. “It is essentially an RPC server and can generate code for you in a number of languages.”
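As a rough illustration of that workflow, the sketch below shows a Python client calling a Thrift service whose bindings have been generated by the Thrift compiler. The service name (SearchService) and its ping method are invented for the example; only the standard Thrift transport and protocol classes are real.

```python
# Hypothetical Thrift client sketch. SearchService is an invented example;
# its Python bindings would normally be generated from a .thrift IDL file
# with the Thrift compiler. Transport/protocol classes are standard Thrift.
from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from search_service import SearchService  # hypothetical generated module

def call_search_service(host="localhost", port=9090):
    transport = TTransport.TBufferedTransport(TSocket.TSocket(host, port))
    protocol = TBinaryProtocol.TBinaryProtocol(transport)
    client = SearchService.Client(protocol)
    transport.open()
    try:
        return client.ping()  # hypothetical RPC defined in the IDL
    finally:
        transport.close()
```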

According to Recordon, Facebook looks at a variety of open source and commercial tools to manage its information, not just the free stuff.

With about 400 billion page views a month, Facebook logs a staggering 25TB of data every day. “We used Syslog for logging, and the logging server sort of exploded because we were logging so much information from all these page views,” Recordon says. “So we went and created Scribe and we are able to break up this funnel from a logging perspective. The logging information from servers will get routed into Scribe servers."

Once routed to the Scribe servers, Facebook log data is then condensed and stored in the Hadoop and Hive cluster to be used for future data analysis.
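Because Scribe exposes a Thrift interface, sending it a log line from an application looks roughly like the sketch below. This assumes Python bindings generated from Scribe's Thrift definition and importable as scribe; the details of the real API may differ.

```python
# Rough sketch of sending one log line to a Scribe server over Thrift.
# Assumes Python bindings generated from Scribe's Thrift definition are
# importable as `scribe`; details of the real API may differ.
from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from scribe import scribe  # assumed generated module name

def log_to_scribe(category, message, host="localhost", port=1463):
    transport = TTransport.TFramedTransport(TSocket.TSocket(host, port))
    protocol = TBinaryProtocol.TBinaryProtocol(transport)
    client = scribe.Client(protocol)
    transport.open()
    try:
        entry = scribe.LogEntry(category=category, message=message)
        return client.Log([entry])  # returns a result code such as OK or TRY_LATER
    finally:
        transport.close()
```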

“Logging is a common problem people have so we also open sourced Scribe and it’s now being used by Twitter and they are contributing back to the project,” Recordon says.

Facebook’s other scaling challenge revolves around photo sharing which, like almost all aspects of the site, has ballooned to a massive scale. There are an estimated 40 billion photos hosted by Facebook, each stored in four resolutions for a total of 160 billion image files. In all, about 1.2 million photos are served every second.

“The first thing we did was NFS using a commercial solution, because you have to choose the battles you are going to fight," MacVicar says. “But unfortunately it just didn’t scale. It wasn’t that the commercial solution was bad, but the I/O was so high it simply didn’t work.”

To overcome this challenge, Facebook first optimised its file serving capability and then took a deeper look at how files are kept on a file system.

“We developed a system called Haystack, which allows us to serve photos with one physical read on the disk,” Recordon says. “So it doesn’t matter from a random data access perspective, it’s always one physical read to serve a file. It takes about 300MB of RAM to run this for every terabyte of photos we have. We went from 10 I/O operations per photo to one.”
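The core idea Recordon describes, a compact in-memory index over one large append-only store file so that every photo needs exactly one disk read, can be sketched roughly as follows. The names and layout here are illustrative and are not Haystack's actual on-disk format.

```python
# Illustrative sketch of the "one physical read per photo" idea behind
# Haystack: photos are appended to one large store file and a compact
# in-memory index maps photo id -> (offset, size). Not Facebook's format.
class TinyHaystack:
    def __init__(self, path):
        self.path = path
        self.index = {}  # photo_id -> (offset, size), kept entirely in RAM
        open(path, "ab").close()  # create the store file if needed

    def put(self, photo_id, data):
        with open(self.path, "ab") as f:
            offset = f.tell()   # current end of the store file
            f.write(data)
        self.index[photo_id] = (offset, len(data))

    def get(self, photo_id):
        offset, size = self.index[photo_id]  # metadata lookup costs no I/O
        with open(self.path, "rb") as f:
            f.seek(offset)
            return f.read(size)  # a single physical read serves the photo
```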

Haystack is not an open source project yet, but Facebook is working on making it open source because “we really think it’s useful for a lot of sites both large and small,” Recordon says.

Data storage and analysis goes big time

Facebook’s infrastructure relies on memcache for faster database access; the cache acts as a “middle tier” between the Web servers and the databases.

Memcache was originally developed for the LiveJournal blogging service to improve the performance of its database-driven Web sites and is now used by many of the largest sites on the Internet, from Facebook to Wikipedia.

“It’s great, but our engineers need to use it in a smart manner,” Recordon says. “We currently serve about 120 million memcached requests per second, so we’re incredibly reliant on memcache to make Facebook work.”
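In practice, using memcache "in a smart manner" usually means a cache-aside pattern: check the cache first and fall back to the database only on a miss. A generic Python sketch of that pattern looks roughly like this; query_database is a placeholder and the client library shown is python-memcached, not Facebook's own memcache client.

```python
# Generic cache-aside sketch: read through memcache before the database.
# `query_database` is a placeholder; the client library shown is
# python-memcached, not Facebook's own memcache client.
import memcache

mc = memcache.Client(["127.0.0.1:11211"])

def query_database(user_id):
    raise NotImplementedError  # placeholder for a real database lookup

def get_user(user_id, ttl=60):
    key = "user:%d" % user_id
    user = mc.get(key)
    if user is None:                  # cache miss: hit the database once...
        user = query_database(user_id)
        mc.set(key, user, time=ttl)   # ...and repopulate the cache
    return user
```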

The system was stress-tested when Facebook launched its new personalised username capability in June 2009. Making good on its promise to give its millions of users an easier way to share their profiles, Facebook let people choose an alias for their online profiles on a first-come, first-served basis. Members responded in droves, registering new usernames at a rate of more than 550 a second. Within the first seven minutes, 345,000 people had claimed usernames; within 15 minutes, 500,000 users had grabbed a name.

“We had about 200 million users at the time, so we asked 200 million users to access Facebook at the same time,” Recordon says. “This is like a denial of service attack that you bring upon yourself, but for us it was really a product launch. And a million usernames were assigned in the first hour.”
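One common way to keep a first-come, first-served rush like this from hammering the database is to use memcache's atomic add operation, which only succeeds if the key does not already exist. The sketch below is illustrative of that general pattern and is not how Facebook says it implemented the launch.

```python
# Illustrative first-come, first-served claim using memcache's atomic
# `add` (which fails if the key already exists). Not Facebook's code.
import memcache

mc = memcache.Client(["127.0.0.1:11211"])

def try_claim_username(name, user_id):
    # add() is atomic on the memcached server: of several racing requests,
    # only one can create the key, so only one user "wins" the name.
    if mc.add("username:" + name.lower(), user_id):
        # Winner: persist the claim to the database afterwards (omitted).
        return True
    return False  # someone else already claimed it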

From a technical perspective, MacVicar describes memcache as “really robust” with some nice features, but says “we wanted to make it better”. As a result, Facebook engineers ported memcache to 64-bit, since a 32-bit process can address only about 4GB of memory while most of Facebook's memcached machines have 16GB. This enhancement has since made it into the main memcache code base.

“We added multithreading so we could utilise all the cores on the processors, and another of the more interesting things we did was adding UDP support,” MacVicar says. “Sometime this year we will do another release of the memcache server and hopefully that will get merged into the upstream version.”

Recordon says memcache is a prime example of an open source technology that existed before Facebook burst onto the scene but that the company has since used extensively and also added to.
