February 12, 2004

Blogstock

by skrenta at 3:07 PM

So all the top bloggers went off to Etech and blogged it while it happened. This sounds great, except that breathless, incomprehensible stream-of-consciousness session notes have replaced the usual writing and analysis on many of the blogs I visit. It feels like a friend went off to Woodstock and, midway through a great trip, called me to try to describe it.

February 9, 2004

Upgrades and Off-the-Air messages

by skrenta at 9:32 AM

Over the weekend we upgraded our infrastructure to support faster indexing and serving of the news. As part of this update, we had to take the site down for a few minutes. Sites generally have a standard message they put up when they're off the air; Mark Fletcher of Bloglines recently talked about his: the plumber.

Here at Topix we use the old Indian Head Test Pattern. I remember seeing this on the TV in the wee hours of the morning when I was a kid. The pattern was generated by a monoscope tube, a vacuum tube with a metal plate containing the test pattern image, which was scanned to produce the output. Essentially it was a camera tube that could only see the image on the inside of the metal plate. Since vacuum tubes needed time to warm up, the test pattern generator was usually left on all the time, so the station could cut over to its output at a moment's notice. More about test pattern generation. Test patterns from around the world.

February 2, 2004

Memory resident

by skrenta at 10:48 PM

A while ago I was chatting with my old boss Wade about a nifty algorithm I found for incremental search engines, which piggybacked queued index writes onto the reads that front-end requests were issuing anyway, to minimize excess disk head seeks. I thought it was pretty cool.
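Roughly the idea, in toy Python. This is my own sketch, not the actual algorithm: the region granularity, the queue structure, and all the names here are invented for illustration.

    # Hypothetical sketch of write piggybacking: queued index writes are
    # grouped by disk region, and whenever a foreground read forces a seek
    # into some region, any writes pending for that region are flushed on
    # the same pass. REGION_SIZE and the 'device' interface are made up.
    from collections import defaultdict

    REGION_SIZE = 64 * 1024 * 1024  # assumed region granularity: 64 MB

    class PiggybackStore:
        def __init__(self, device):
            self.device = device
            self.pending = defaultdict(list)  # region id -> queued writes

        def queue_write(self, offset, data):
            # Don't seek now; hold the write until a read passes nearby.
            self.pending[offset // REGION_SIZE].append((offset, data))

        def read(self, offset, length):
            region = offset // REGION_SIZE
            # The head is seeking into this region for the read anyway,
            # so flush any writes queued for the same region while there.
            for w_offset, data in sorted(self.pending.pop(region, [])):
                self.device.write(w_offset, data)
            return self.device.read(offset, length)

The foreground reads pay for the seeks either way; the writes ride along for free.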

Wade smacked me on the head (gently) and asked why I was even thinking about disk anymore. Disk is dead; just put the whole thing in RAM and forget about it, he said.

Orkut is wicked fast; Friendster isn't. How do you reliably make a scalable web service wicked fast? Easy: the whole thing has to be in memory, and user requests must never wait for disk.

A disk head seek costs about 9ms, and the human perceptual threshold for what feels "instant" is around 50ms. That gives you a budget of roughly five head seeks per user request before the response stops feeling instant, and a single disk arm can only manage about a hundred seeks per second in total. If you have a typical filesystem with a little database on top, you may be up to 3+ seeks per hit already. Forget caching; caching helps the second user, and doesn't work on systems with a "long tail" of zillions of seldom-accessed queries, like search.
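The napkin math behind that budget, spelled out:

    SEEK_MS = 9.0        # rough cost of one disk head seek
    INSTANT_MS = 50.0    # rough human threshold for "instant"

    # Latency side: seeks you can afford per request before users notice.
    print(f"seek budget per request: {INSTANT_MS / SEEK_MS:.1f}")  # ~5.5

    # Throughput side: one disk arm does at most ~1000/9 = 111 seeks/sec.
    # At 3 seeks per hit, a single disk tops out around 37 hits/sec,
    # no matter how fast the CPU is.
    seeks_per_hit = 3
    print(f"max hits/sec on one disk: {1000 / SEEK_MS / seeks_per_hit:.0f}")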

It doesn't help that a lot of the scheduling algorithms found in standard OS and database software were developed when memory was scarce, and so are stingy about their use of it.

The hugely scalable AIM service stores everything in memory across a distributed cluster, with the relational database stuck off to the side, relegated to making backups of what's live in memory. Another example is Google itself; the full index is stored in memory. Servers mmap their state when they boot; no disk is involved in user requests after everything has been paged in.
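The boot-time mmap pattern looks roughly like this in Python. A minimal sketch, assuming a Unix-style system; the file name and the idea of a single flat index file are illustrative, not how Google actually lays out its data:

    # Map the index file into the address space once at boot, touch every
    # page to fault it in, then serve all lookups from memory.
    import mmap
    import os

    def load_index(path="index.dat"):  # hypothetical index file
        fd = os.open(path, os.O_RDONLY)
        size = os.fstat(fd).st_size
        buf = mmap.mmap(fd, size, prot=mmap.PROT_READ)
        # Pre-fault every page now so user requests never block on disk.
        for off in range(0, size, mmap.PAGESIZE):
            buf[off]
        return buf

    # After load_index() returns, slices like buf[start:end] are pure
    # memory accesses (until the OS evicts pages under memory pressure).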


The biggest RAM database of all...

An overlooked feature that made Google really cool in the beginning was their snippets. This is the excerpt of text that shows a few sample sentences from each web page matching your search. Google's snippets show just the parts of the web page that contain your search terms; earlier search engines always showed the same couple of sentences from the top of the page, no matter what you had searched for.
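A toy version of a query-biased snippet makes the point. Real snippet generation scores many candidate windows; this just shows why the full page text has to be on hand:

    # Find the first window of the cached page text that contains a
    # search term and excerpt around it; fall back to the page opening,
    # like the old engines, if nothing matches.
    def snippet(page_text, terms, width=80):
        low = page_text.lower()
        for term in terms:
            hit = low.find(term.lower())
            if hit != -1:
                start = max(0, hit - width // 2)
                end = min(len(page_text), hit + width // 2)
                return "..." + page_text[start:end] + "..."
        return page_text[:width] + "..."

    print(snippet("The quick brown fox jumps over the lazy dog.", ["fox"]))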

Consider the insane cost to implement this simple feature. Google has to keep a copy of every web page on the Internet on their servers in order to show you the piece of the web page where your search terms hit. Everything is served from RAM; disk is touched only at boot. And they have multiple separate search clusters at their co-location facilities. This means that Google is currently storing multiple copies of the entire web in RAM. My napkin is hard to read with all these zeroes on it, but that's a lot of memory. Talk about barrier to entry.
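For the curious, here's one way to squint at the napkin. Every number below is an assumption (ballpark circa-2004 figures), not anything Google has published about its hardware:

    # Back-of-envelope: assumed index of ~4 billion pages at ~10 KB of
    # stored text each, replicated across an assumed 5 serving clusters.
    pages = 4e9
    bytes_per_page = 10e3
    clusters = 5

    per_copy_tb = pages * bytes_per_page / 1e12
    print(f"one copy of the web: ~{per_copy_tb:.0f} TB")
    print(f"across {clusters} clusters: ~{per_copy_tb * clusters:.0f} TB of RAM")

Tens of terabytes per copy, at a time when a big server holds a few gigabytes. That's the barrier.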