Wednesday, November 12, 2014

Facebook: Design a chat

https://www.facebook.com/notes/facebook-engineering/chat-stability-and-scalability/51412338919
https://www.facebook.com/notes/facebook-engineering/facebook-chat/14218138919

Function 1: Real-time presence notification:


The most resource-intensive operation performed in a chat system is not sending messages. It is rather keeping each online user aware of the online-idle-offline states of their friends, so that conversations can begin.

The naive implementation of sending a notification to all friends whenever a user comes online or goes offline has a worst case cost of O(average friendlist size * peak users * churn rate) messages/second, where churn rate is the frequency with which users come online and go offline, in events/second. This is wildly inefficient to the point of being untenable, given that the average number of friends per user is measured in the hundreds, and the number of concurrent users during peak site usage is on the order of several millions.
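
To get a feel for the numbers, here is a back-of-the-envelope sketch. The post only says "hundreds" of friends and "several millions" of concurrent users; the concrete figures and the churn assumption below are illustrative guesses, not Facebook's data.

```typescript
// Hypothetical figures, for illustration only.
const avgFriends = 300;             // "hundreds" of friends per user (assumed)
const peakUsers = 5_000_000;        // "several millions" concurrent users (assumed)
const churnPerUserPerSec = 1 / 600; // assume each user changes state every ~10 minutes

// Worst case: every state change is broadcast to every online friend.
const stateChangesPerSec = peakUsers * churnPerUserPerSec; // ≈ 8,300 events/s
const notificationsPerSec = stateChangesPerSec * avgFriends;

console.log(Math.round(notificationsPerSec)); // ≈ 2,500,000 messages/s
```

Even with conservative guesses, the naive broadcast lands in the millions of messages per second, which is why presence, not message delivery, dominates the cost.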

Surfacing connected users' idleness greatly enhances the chat user experience but further compounds the problem of keeping presence information up-to-date. Each Facebook Chat user now needs to be notified whenever one of his/her friends 
(a) takes an action such as sending a chat message or loads a Facebook page (if tracking idleness via a last-active timestamp) or 
(b) transitions between idleness states (if representing idleness as a state machine with states like "idle-for-1-minute", "idle-for-2-minutes", "idle-for-5-minutes", "idle-for-10-minutes", etc.). 
Note that approach (a) turns sending a chat message or loading a Facebook page from a one-to-one communication into a multicast to all online friends, while approach (b) ensures that users who are neither chatting nor browsing Facebook are nonetheless generating server load.
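
As a concrete illustration of approach (b), here is a minimal sketch that maps a last-active timestamp onto coarse idleness buckets. The bucket names and thresholds are assumptions for illustration, not Facebook's actual states.

```typescript
// Approach (b), sketched: derive a coarse idleness state from a last-active
// timestamp. Bucket names and thresholds are illustrative assumptions.
type IdleState = "active" | "idle-1m" | "idle-2m" | "idle-5m" | "idle-10m";

function idleState(lastActiveMs: number, nowMs: number = Date.now()): IdleState {
  const idleMinutes = (nowMs - lastActiveMs) / 60_000;
  if (idleMinutes < 1) return "active";
  if (idleMinutes < 2) return "idle-1m";
  if (idleMinutes < 5) return "idle-2m";
  if (idleMinutes < 10) return "idle-5m";
  return "idle-10m";
}
```

A friend only needs a fresh notification when the bucket changes, but those transitions happen purely with the passage of time, which is exactly why approach (b) generates server load even for users who are doing nothing.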

Real-time messaging:


Another challenge is ensuring the timely delivery of the messages themselves. The method we chose to get text from one user to another involves loading an iframe on each Facebook page, and having that iframe's Javascript make an HTTP GET request over a persistent connection that doesn't return until the server has data for the client. The request gets reestablished if it's interrupted or times out. This isn't by any means a new technique: it's a variation of Comet, specifically XHR long polling, and/or BOSH.
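
A minimal client-side sketch of that long-poll loop is below. The endpoint, query parameter, and response shape are assumptions; the post does not show the actual iframe code.

```typescript
// Comet-style XHR long polling, sketched with fetch(). The /channel endpoint
// and the { messages, nextSeq } response shape are hypothetical.
async function longPollLoop(onMessage: (msg: unknown) => void): Promise<void> {
  let seq = 0; // last message sequence number seen by this client
  for (;;) {
    try {
      // The server holds this GET open until it has data for the client
      // (or until a server-side timeout fires).
      const res = await fetch(`/channel?seq=${seq}`);
      if (res.ok) {
        const { messages, nextSeq } = await res.json();
        messages.forEach(onMessage);
        seq = nextSeq;
      }
    } catch {
      // Interrupted or timed out: wait briefly, then re-establish the request.
      await new Promise((resolve) => setTimeout(resolve, 1_000));
    }
  }
}
```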

Having a large number of long-running concurrent requests makes the Apache part of the standard LAMP stack a dubious implementation choice. Even without accounting for the sizeable overhead of spawning an OS process that, on average, twiddles its thumbs for a minute before reporting that no one has sent the user a message, the waiting time could be spent servicing 60-some requests for regular Facebook pages. The result of running out of Apache processes over the entire Facebook web tier is not pretty, nor is the dynamic configuration of the Apache process limits enjoyable.

Distribution, Isolation, and Failover:


Fault tolerance is a desirable characteristic of any big system: if an error happens, the system should try its best to recover without human intervention before giving up and informing the user. The results of inevitable programming bugs, hardware failures, et al., should be hidden from the user as much as possible and isolated from the rest of the system.

The way this is typically accomplished in a web application is by separating the model and the view: data is persisted in a database (perhaps with a separate in-memory cache), with each short-lived request retrieving only the parts relevant to that request. Because the data is persisted, a failed read request can be re-attempted. Cache misses and database failure can be detected by the non-database layers and either reported to the user or worked around using replication.

While this architecture works pretty well in general, it isn't as successful in a chat application due to the high volume of long-lived requests, the non-relational nature of the data involved, and the statefulness of each request.

For Facebook Chat, we rolled our own subsystem for logging chat messages (in C++) as well as an epoll-driven web server (in Erlang) that holds online users' conversations in-memory and serves the long-polled HTTP requests. Both subsystems are clustered and partitioned for reliability and efficient failover. Why Erlang? In short, because the problem domain fits Erlang like a glove. Erlang is a functional concurrency-oriented language with extremely lightweight user-space "processes", share-nothing message-passing semantics, built-in distribution, and a "crash and recover" philosophy proven by two decades of deployment on large soft-realtime production systems.

Glueing with Thrift:


Despite those advantages, using Erlang for a component of Facebook Chat had a downside: that component needed to communicate with the other parts of the system. Glueing together PHP, Javascript, Erlang, and C++ is not a trivial matter. Fortunately, we have Thrift. Thrift translates a service description into the RPC glue code necessary for making cross-language calls (marshalling arguments and responses over the wire) and has templates for servers and clients. Since going open source a year ago (we had the gall to release it on April Fool's Day, 2007), the Thrift project has steadily grown and improved (with multiple iterations on the Erlang binding). Having Thrift available freed us to split up the problem of building a chat system and use the best available tool to approach each sub-problem. 

Ramping up:


The secret for going from zero to seventy million users overnight is to avoid doing it all in one fell swoop. We chose to simulate the impact of many real users hitting many machines by means of a "dark launch" period in which Facebook pages would make connections to the chat servers, query for presence information and simulate message sends without a single UI element drawn on the page. With the "dark launch" bugs fixed, we hope that you enjoy Facebook Chat now that the UI lights have been turned on.
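
A dark launch of this kind amounts to gating the UI rather than the traffic. A hypothetical sketch, with the flag, endpoints, and helper names invented for illustration:

```typescript
// "Dark launch", sketched: real page loads exercise the chat backend while
// drawing no UI. The flag, endpoints, and helpers below are hypothetical.
const DARK_LAUNCH = true; // flipped off once the UI "lights" are turned on

async function queryPresence(): Promise<void> {
  await fetch("/ajax/presence"); // hypothetical presence endpoint
}

async function simulateSend(): Promise<void> {
  // Generate realistic write traffic without any user typing a message.
  await fetch("/ajax/send", { method: "POST", body: "dark-launch probe" });
}

function renderChatUi(): void {
  // Draw the chat bar; omitted in this sketch.
}

async function initChat(): Promise<void> {
  await queryPresence(); // presence load is generated either way
  if (DARK_LAUNCH) {
    await simulateSend(); // backend sees traffic, the user sees nothing
  } else {
    renderChatUi(); // only now does anything appear on the page
  }
}
```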


Original source:
http://www.cnblogs.com/piaoger/archive/2012/08/19/2646530.html
The article about Facebook Chat has been on InfoQ for a long, long time. Piaoger also happened to come across the slides the Facebook engineer presented at Erlang Factory; read together, the two are still quite useful.

# Keywords
Realtime messaging, C++, Erlang, Long-polling, Thrift

# Challenges
▪ How does synchronous messaging work on the Web?
▪ “Presence” is hard to scale
▪ Need a system to queue and deliver messages
▪ Millions of connections, mostly idle
▪ Need logging, at least between page loads
▪ Make it work in Facebook’s environment
Naively notifying all of a user's friends whenever that user comes online or goes offline is hopelessly crude: it costs O(average friend-list size × peak concurrent users × churn rate) messages/second, where churn rate is the average number of times per second that users come online or go offline. With the average user having a few hundred friends and peak concurrency in the millions, this implementation is unbearably inefficient.

Piaoger:
Will the day come when the online product I work on faces the dilemma below? That will be painful and joyful at once:
When a product's user base can grow from zero to seventy million overnight, scalability becomes something that must be considered from day one.

# System Overview

System Overview (Front-end)
▪ Mix of client-side Javascript and server-side PHP
▪ Regular AJAX for sending messages, fetching conversation history
▪ Periodic AJAX polling for list of online friends
▪ AJAX long-polling for messages (Comet)

System overview (Back-end)
▪ Discrete responsibilities for each service
    - Communicate via Thrift (see the interface sketch after this list)
▪ Channel (Erlang): message queuing and delivery
    - Queue messages in each user’s “channel”
    - Deliver messages as responses to long-polling HTTP requests
▪ Presence (C++): aggregates online info in memory (pull-based presence)
▪ Chatlogger (C++): stores conversations between page loads
▪ Web tier (PHP): serves our vanilla web requests
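
Since the services communicate via Thrift, each has a small, explicit contract. As a rough illustration, here is what those contracts might look like, written as TypeScript interfaces rather than Thrift IDL; every name and signature is an assumption, not the real definition.

```typescript
// Hypothetical service contracts mirroring the responsibilities listed above.
// The real contracts were Thrift IDL definitions that generated PHP, Erlang,
// and C++ stubs; all names and signatures here are illustrative assumptions.
interface ChatMessage {
  from: string;
  to: string;
  body: string;
  seq: number;
}

interface ChannelService {
  // Channel (Erlang): queue a message for delivery to the recipient.
  enqueueMessage(msg: ChatMessage): Promise<void>;
}

interface PresenceService {
  // Presence (C++): pull-based aggregate of who is online or idle.
  getOnlineFriends(userId: string): Promise<string[]>;
}

interface ChatLoggerService {
  // Chatlogger (C++): persist conversations between page loads.
  logMessage(msg: ChatMessage): Promise<void>;
  getHistory(userA: string, userB: string, limit: number): Promise<ChatMessage[]>;
}
```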
For the clustered and partitioned subsystems, Facebook chose a combination of C++ and Erlang: the C++ module is used to log chat messages, while the Erlang module "holds online users' conversations in memory and serves the long-polled requests".

# Realtime Messaging
Facebook has the client "pull" new messages directly from the server, in a process much like Comet's XHR long polling.
Each Facebook page loads an iframe used to carry messages between users; the Javascript in that iframe issues an HTTP GET request that holds a persistent connection to the server open until there is a message to return to the user.

# Channel Server Architecture
Overview
▪ One channel per user
▪ Web tier delivers messages for that user
▪ Channel State: short queue of sequenced messages (see the sketch below)
▪ Long poll for streaming (Comet)
▪ Clients make an HTTP request
    - Server replies when a message is ready
    - One active request per browser tab
Details
▪ Distributed design
    - User id space is partitioned (division of labor)
    - Each partition is serviced by a cluster (availability)
▪ Presence aggregation
    - Channel servers are authoritative
    - Periodically shipped to presence servers
▪ Open source: Erlang, Mochiweb, Thrift, Scribe, fb303, et al.
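
A minimal sketch of the channel idea: one short queue of sequenced messages per user, with a parked long-poll request answered as soon as a message arrives. It is written in TypeScript purely for illustration; the real server was Erlang/Mochiweb, and all names below are assumptions.

```typescript
// Per-user "channel": a short queue of sequenced messages plus at most one
// parked long-poll waiter (one active request per browser tab). Sketch only;
// the production server was Erlang/Mochiweb behind Thrift.
type ChannelMessage = { seq: number; body: string };

class Channel {
  private queue: ChannelMessage[] = [];
  private nextSeq = 1;
  private waiter: { afterSeq: number; resolve: (msgs: ChannelMessage[]) => void } | null = null;

  // Called (via Thrift in the real system) when the web tier sends a message.
  push(body: string): void {
    this.queue.push({ seq: this.nextSeq++, body });
    if (this.queue.length > 100) this.queue.shift(); // keep only a short queue
    if (this.waiter) {
      const { afterSeq, resolve } = this.waiter;
      this.waiter = null;
      resolve(this.queue.filter((m) => m.seq > afterSeq)); // wake the long poll
    }
  }

  // Called when a long-poll request arrives: reply immediately if newer
  // messages are queued, otherwise park the request until push() resolves it.
  poll(afterSeq: number): Promise<ChannelMessage[]> {
    const ready = this.queue.filter((m) => m.seq > afterSeq);
    if (ready.length > 0) return Promise.resolve(ready);
    return new Promise((resolve) => { this.waiter = { afterSeq, resolve }; });
  }
}

// One channel per user; in production the user-id space was partitioned and
// each partition served by its own cluster.
const channels = new Map<string, Channel>();
```

Sequencing the messages lets a reconnecting client pass the last seq it saw, so an interrupted long poll neither loses nor duplicates messages.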

Channel Servers

Channel Applications

# Dark launch
The way the feature was launched is also interesting: Facebook used a so-called "dark launch".
The secret to going from zero to seventy million users overnight is to avoid doing it all in one step. Facebook first simulated the load of many real users hitting many machines through a "dark launch" phase, during which Facebook pages connected to the chat servers, queried presence information, and simulated message sends without drawing a single UI element on the page.
Piaoger: Isn't this quite comparable to our Warmup??
