In my head, this post and yesterday's post on risk and opportunity are deeply connected, but logically they needed to be split apart.
The theory of the left-brain / right-brain split is that the left hemisphere of our brain handles linear, logical processing (cold logic) while the right hemisphere is more emotional, intuitive, and holistic (evaluating the whole picture instead of considering things one component at a time). Naturally, some people are more left-brain dominant while others are more right-brain dominant. This divide is discussed quite a bit elsewhere -- I recommend starting with the TED talk by Jill Bolte Taylor, a neuroanatomist whose left hemisphere was damaged by a stroke, causing her to become right-brain dominant.
I'm actually somewhat skeptical that the left-brain / right-brain split is as real as people assume, but it seems to be metaphorically correct, so for my non-surgical purposes, it's "good enough".
To me, one of the most interesting aspects of this right/left divide is that many people seem to identify strongly with one side or the other, and actually despise the other half of their brain (see here for a few examples, and even Jill Taylor seems to be doing it to some extent). This seems kind of dumb. My theory is that both halves of our brain are useful, and that for maximum benefit and happiness, we should learn how to use each half to its maximum potential.
This is where I link in to yesterday's post on Risk and Opportunity. My suggestion was to simultaneously seek big, exciting opportunities ("dream big"), while carefully avoiding unacceptable risks ("don't be stupid"). In my mind, that is the right/left divide.
The left-brain ability to carefully double-check logic and evaluate the risks is very important because it helps to protect us from bad decisions. When we imagine the kind of person who believes things that are obviously false, falls for scams, ends up joining a cult, etc, we probably picture a stereotypically right-brain person.
However, what the left brain has in cold, efficient logic, it lacks in passion and grandiosity.
When I wrote about evaluating risks and opportunities, it was as though we use a logical process when making decisions, but of course that's not actually true, nor should it be. Our actual decision making is much more emotional (and emotions are just another mental process).
The right-brain utility is in integrating millions of facts (more than the left brain can logically combine) and producing a unified output. However, that output is in the form of an intuition, "gut feeling", or just plain excitement, which can sometimes be difficult to communicate or justify ("it seems like a good idea" isn't always convincing). Nevertheless, these intuitions are crucial for making big conceptual leaps, and ultimately providing direction and meaning in our lives.
So to reformulate yesterday's advice, I think we do best when using our right-brain skills to discover opportunity and excitement, while also engaging our left-brain abilities to avoid disasters, find tactical advantages, and rationalize our actions to the world. Left and Right are both stuck in the same skull, but not by accident -- they actually need each other. (the same could probably be said for politics, but that would be another post)
Coincidentally, I just saw another good TED talk that mentions these right-brain/left-brain issues in the context of managing and incentivizing creative people. It's worth watching.
Sunday, September 13, 2009
Saturday, September 12, 2009
Evaluating risk and opportunity (as a human)
Our lives are full of decisions that force us to balance risk and opportunity: should you take that new job, buy that house, invest in that company, swallow that pill, jump off that cliff, etc. How do we decide which risks are smart, and which are dumb? Once we've made our choices, are we willing to accept the consequences?
I think the most common technique is to ask ourselves, "What is the most likely outcome?", and if that outcome is good, then we do it (to the extent that people actually reason through decisions at all). That works well enough for many decisions -- for example, you might believe that the most likely outcome of going to school is that you can get a better job later on, and therefore choose that path. That's the reasoning most people use when going to school, getting a job, buying a house, or making most other "normal" decisions. Since it focuses on the "expected" outcome, people using it often ignore the possible bad outcomes, and when something bad does happen, they may feel bitter or cheated ("I have a degree, now where's my job!?"). For example, most people buying houses a couple of years ago weren't considering the possibility that their new house would lose 20% of its value, and that they would end up owing more than the house was worth.
When advising on startups, I often tell people that they should start with the assumption that the startup will fail and all of their equity will become worthless. Many people have a hard time accepting that fact, and say that they would be unable to stay motivated if they believed such a thing. It seems unfortunate that these people feel the need to lie to themselves in order to stay motivated, but recently I realized that I'm just using a different method of evaluating risks and opportunities.
Instead of asking, "What's the most likely outcome?", I like to ask "What's the worst that could happen?" and "Could it be awesome?". Essentially, instead of evaluating the median outcome, I like to look at the 0.01 percentile and 95th percentile outcomes. In the case of a startup, the worst case outcome is generally that you will lose your entire investment (but learn a lot), and the best case is that you make a large pile of money, create something cool, and learn a lot. (see "Why I'd rather be wrong" for more on this)
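As a sketch of what "look at the 0.01 percentile and 95th percentile outcomes" means, here's a toy Monte Carlo model. The distribution (80% total loss, a small chance of a big win) and all the payoff numbers are invented for illustration, not real startup statistics:

```python
import random

# Hypothetical outcome model for a startup investment (illustrative numbers only):
# 80% chance of total loss, mostly-modest returns otherwise, rare big win.
def simulate_outcome():
    r = random.random()
    if r < 0.80:
        return -1.0                          # lose the entire investment
    elif r < 0.99:
        return random.uniform(0.0, 3.0)      # modest multiple on the investment
    else:
        return random.uniform(10.0, 100.0)   # the rare big win

random.seed(0)
outcomes = sorted(simulate_outcome() for _ in range(100_000))
worst = outcomes[len(outcomes) // 10_000]    # ~0.01th percentile outcome
best = outcomes[int(len(outcomes) * 0.95)]   # 95th percentile outcome
print(f"worst case: {worst:.2f}x, best case: {best:.2f}x")
```

Under this made-up distribution, the worst case is losing everything, and the 95th-percentile case is a decent but not spectacular return — which is exactly the pair of numbers this method asks you to be comfortable with before deciding.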
Thinking about the best-case outcomes is easy and people do it a lot, which is part of the reason it's often disrespected ("dreamer" isn't usually a compliment). However, too many people ignore the worst case scenario because thinking about bad things is uncomfortable. This is a mistake. This is why we see people killing themselves over investment losses (part of the reason, anyway). They were not planning for the worst case. Thinking about the worst case not only protects us from making dumb mistakes, it also provides an emotional buffer. If I'm comfortable with the worst-case outcome, then I can move without fear and focus my attention on the opportunity.
Considering only the best and worst case outcomes is not perfect of course -- lottery tickets have an acceptable worst case (you lose a $1) and a great best case (you win millions), yet they are generally a bad deal. Ideally we would also consider the "expected value" of our decisions, but in practice that's impossible for most real decisions because the world is too complicated and math is hard. If the expected value is available (as it is for lottery tickets), then use it (and don't buy lottery tickets), but otherwise we need some heuristics. Here are some of mine:
- Will I learn a lot from the experience? (failure can be very educational)
- Will it make my life more interesting? (a predictable life is a boring life)
- Is it good for the world? (even if I don't benefit, maybe someone else will)
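For the lottery example above, the expected-value arithmetic is simple enough to do directly (the odds and prize here are made up for illustration; real lotteries vary, but the conclusion is the same):

```python
# Expected value of a hypothetical $1 lottery ticket:
# a 1-in-10,000,000 chance of a $1,000,000 prize (made-up odds).
p_win = 1 / 10_000_000
prize = 1_000_000
ev = p_win * prize - 1  # subtract the $1 ticket price
print(f"expected value per ticket: ${ev:.2f}")  # negative: a bad deal
```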
I've been told that I'm extremely cynical. I've also been told that I'm unreasonably optimistic. Upon reflection, I think I'm ok with being a cynical optimist :)
By the way, here's why I chose the 0.01 percentile outcome when evaluating the worst case: Last year there were 37,261 motor vehicle fatalities in the United States. The population of the United States is 304,059,724, so my odds of getting killed in a car accident are very roughly 1/10,000 per year (of course many of those people were teenagers and alcoholics, so my odds are probably a little better than that, but as a rough estimate it's good). Using this logic, I can largely ignore obscure 1/1,000,000 risks, which are too numerous and difficult to protect against anyway.
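The car-accident arithmetic above works out like this (using the same figures from the text):

```python
fatalities = 37_261        # US motor vehicle fatalities in one year
population = 304_059_724   # US population

annual_risk = fatalities / population
print(f"annual risk: 1 in {round(1 / annual_risk):,}")  # ~1 in 8,160
```

So "roughly 1/10,000 per year" is a round-number version of about 1 in 8,000.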
Also see The other half of the story
Friday, April 17, 2009
Make your site faster and cheaper to operate in one easy step
Is your web server using gzip encoding? Surprisingly, many are not. I just wrote a little script to fetch the 30 external links off news.yc and check if they are using gzip encoding. Only 18 were, which means that the other 12 sites are needlessly slow, and also wasting money on bandwidth.
Check your site here.
Some people think gzip is "too slow". It's not. Here's an example (run on my laptop) using data from one of the links on news.ycombinator.com:
$ cat < /tmp/sd.html | wc -c
146117
$ gzip < /tmp/sd.html | wc -c
35481
$ time gzip < /tmp/sd.html >/dev/null
real 0m0.009s
user 0m0.004s
sys 0m0.004s
It took 9ms to compress 146,117 bytes of html (and that includes process creation time, etc), and the compressed data was only about 24% the size of the input. At that rate, compressing 1GB of data would require about 66 seconds of cpu time. Repeating the test with a much larger file yields about 42 sec/GB, so 66 sec is not an unreasonable estimate.
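The same experiment is easy to reproduce with Python's built-in gzip module (the sample data here is synthetic and very repetitive, so it compresses even better than typical html):

```python
import gzip
import time

# ~144 KB of repetitive html-ish data (synthetic stand-in for a real page)
data = b"<html>" + b"hello world " * 12_000 + b"</html>"

t0 = time.perf_counter()
compressed = gzip.compress(data)
elapsed = time.perf_counter() - t0

ratio = len(compressed) / len(data)
print(f"{len(data)} -> {len(compressed)} bytes ({ratio:.0%} of original), {elapsed * 1000:.1f} ms")
```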
Inevitably, someone will argue that they can't spare a few ms per page to compress the data, even though it will make their site much more responsive. However, it occurred to me today that thanks to Amazon, it's very easy to compare CPU vs Bandwidth. According to their pricing page, a "small" (single core) instance costs $0.10 / hour, and data transfer out costs $0.17 / GB (though it goes down to $0.10 / GB if you use over 150 TB / month, which you probably don't).
Using these numbers, we can estimate that it would cost $1.88 to gzip 1TB of data on Amazon EC2, and $174 to transfer 1TB of data. If you instead compress your data (and get 4-to-1 compression, which is not unusual for html), the bandwidth will only cost $43.52.
Summary:
with gzip: $1.88 for cpu + $43.52 for bandwidth = $45.40 + happier users
without gzip: $174.00 for bandwidth = $128.60 wasted + less happy users
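A quick sanity check of the cost arithmetic, using the 66 sec/GB compression rate and 4-to-1 compression estimates from above:

```python
SEC_PER_GB = 66            # cpu time to gzip 1 GB (estimate from above)
CPU_COST_PER_HOUR = 0.10   # EC2 "small" instance
BW_COST_PER_GB = 0.17      # EC2 data transfer out

gb = 1024  # 1 TB
cpu_cost = gb * SEC_PER_GB / 3600 * CPU_COST_PER_HOUR
bw_uncompressed = gb * BW_COST_PER_GB
bw_compressed = bw_uncompressed / 4  # assuming 4-to-1 compression

print(f"gzip cpu: ${cpu_cost:.2f}, bandwidth without gzip: ${bw_uncompressed:.2f}, "
      f"with gzip: ${bw_compressed:.2f}")
```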
The other excuse for not gzipping content is that your webserver doesn't support it for some reason. Fortunately, there's a simple solution: put nginx in front of your servers. That's what we do at FriendFeed, and it works very well (we use a custom, epoll-based python server). Nginx acts as a proxy -- outside requests connect to nginx, and nginx connects to whatever webserver you are already using (and along the way it will compress your response, and do other good stuff).
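For reference, a minimal nginx front-end along these lines might look like the sketch below. The directives are standard nginx configuration; the backend address and content types are placeholders you'd adjust for your own setup:

```nginx
# Minimal sketch: nginx as a gzip-compressing proxy in front of an existing server
events {}

http {
    gzip on;
    gzip_types text/html text/css application/json application/javascript;

    server {
        listen 80;
        location / {
            proxy_pass http://127.0.0.1:8000;  # your existing app server
        }
    }
}
```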
Thursday, January 22, 2009
Communicating with code
Some people can sell their ideas with a brilliant speech or a slick powerpoint presentation.
I can't.
Maybe that's why I'm skeptical of ideas that are sold via brilliant speeches and slick powerpoints. Or maybe it's because it's too easy to overlook the messy details, or to get caught up in details that seem very important, but aren't. I also get very bored by endless debate.
We did a lot of things wrong during the 2.5 years of pre-launch Gmail development, but one thing we did very right was to always have live code. The first version of Gmail was literally written in a day. It wasn't very impressive -- all I did was take the Google Groups (Usenet search) code (my previous project) and stuff my email into it -- but it was live and people could use it (to search my mail...). From that day until launch, every new feature went live immediately, and most new ideas were implemented as soon as possible. This resulted in a lot of churn -- we re-wrote the frontend about six times and the backend three times by launch -- but it meant that we had direct experience with all of the features. A lot of features seemed like great ideas, until we tried them. Other things seemed like they would be big problems or very confusing, but once they were in we forgot all about the theoretical problems.
The great thing about this process was that I didn't need to sell anyone on my ideas. I would just write the code, release the feature, and watch the response. Usually, everyone (including me) would end up hating whatever it was (especially my ideas), but we always learned something from the experience, and we were able to quickly move on to other ideas.
The most dramatic example of this process was the creation of content targeted ads (now known as "AdSense", or maybe "AdSense for Content"). The idea of targeting our keyword based ads to arbitrary content on the web had been floating around the company for a long time -- it was "obvious". However, it was also "obviously bad". Most people believed that it would require some kind of fancy artificial intelligence to understand the content well enough to target ads, and even if we had that, nobody would click on the ads. I thought they were probably right.
However, we needed a way for Gmail to make money, and Sanjeev Singh kept talking about using relevant ads, even though it was obviously a "bad idea". I remained skeptical, but thought that it might be a fun experiment, so I connected to that ads database (I assure you, random engineers can no longer do this!), copied out all of the ads+keywords, and did a little bit of sorting and filtering with some unix shell commands. I then hacked up the "adult content" classifier that Matt Cutts and I had written for safe-search, linked that into the Gmail prototype, and then loaded the ads data into the classifier. My change to the classifier (which completely broke its original functionality, but this was a separate code branch) changed it from classifying pages as "adult", to classifying them according to which ad was most relevant. The resulting ad was then displayed in a little box on our Gmail prototype ui. The code was rather ugly and hackish, but more importantly, it only took a few hours to write!
I then released the feature on our unsuspecting userbase of about 100 Googlers, and then went home and went to sleep. The response when I returned the next day was not what I would classify as "positive". Someone may have used the word "blasphemous". I liked the ads though -- they were amusing and often relevant. An email from someone looking for their lost sunglasses got an ad for new sunglasses. The lunch menu had an ad for balsamic vinegar.
More importantly, I wasn't the only one who found the ads surprisingly relevant. Suddenly, content targeted ads switched from being a lowest-priority project (unstaffed, will not do) to being a top priority project, an extremely talented team was formed to build the project, and within maybe six months a live beta was launched. Google's content targeted ads are now a big business with billions of dollars in revenue (I think).
Of course none of the code from my prototype ever made it near the real product (thankfully), but that code did something that fancy arguments couldn't do (at least not my fancy arguments), it showed that the idea and product had real potential.
The point of this story, I think, is that you should consider spending less time talking, and more time prototyping, especially if you're not very good at talking or powerpoint. Your code can be a very persuasive argument.
The other point is that it's important to make prototyping new ideas, especially bad ideas, as fast and easy as possible. This can be especially difficult as a product grows. It was easy for me to stuff random broken features into Gmail when there were only about 100 users and they all worked for Google, but it's not so simple when there are 100 million users.
Fortunately for Gmail, they've recently found a rather clever solution that enables the thousands of Google engineers to add new ui features: Gmail Labs. This is also where Google's "20% time" comes in -- if you want innovation, it's critical that people are able to work on ideas that are unapproved and generally thought to be stupid. The real value of "20%" is not the time, but rather the "license" it gives to work on things that "aren't important". (perhaps I should do a post on "20% time" at some point...)
One of the best ways to enable prototyping and innovation on an established product is through an API. Twitter is possibly the best example of how well this can work. There are thousands of different Twitter clients, with new ones being written every day, and I believe a majority of Twitter messages are entered through one of these third-party clients.
Public APIs enable everyone to experiment with new ideas and create new ways of using your product. This is incredibly powerful because no matter how brilliant you and your coworkers are, there are always going to be smarter people outside of your company.
At FriendFeed, we discovered that our API does more than enable great apps, it also reveals great app developers. Gary and Ben were both writing FriendFeed apps using our API before we hired them. When hiring, you don't have to guess which people are "smart and gets things done", you can simply observe it in the wild :)
In my previous post, I asked people to describe their "ideal FriendFeed". Since then, I've been thinking about ideas for my "ideal FriendFeed". Unfortunately, it's very difficult for me to know how much I like an idea based only on words or mockups -- I really need to try it out. So in the spirit of prototyping, I've used my spare time to write a simple FriendFeed interface that prototypes some of the things I've been thinking about. This interface isn't the "future of FriendFeed", it's just a collection of ideas, some that I like, and some that I don't. One thing that's kind of cool about it (from a prototyping perspective) is that it's written entirely in Javascript running in the web browser -- it's just a single web page that uses FriendFeed's JSON APIs to fetch data. This also means that it's relatively easy for other people to copy and change -- you don't even need a server!
If you'd like to try it out, you can see everyone that I'm subscribed to (assuming their feed is public), or if you are a FriendFeed user, you can see all of your public subscriptions by going to http://paulbuchheit.github.com/xfeed.html#YOUR_NICKNAME_GOES_HERE. The complete source code (which is just several hundred lines of HTML and JS) is here. In this prototype, I'm experimenting with treating entries, comments, and likes all as simple "messages", only showing comments from the user's friends (which can be a little confusing), and putting it all in reverse-chronological order. As I mentioned, this interface isn't the "future of FriendFeed", it's just a collection of ideas that I'm playing with.
If you're interested in prototyping something, feel free to take this code and have your way with it. As always, I'd love to see your prototypes in action!
I can't.
Maybe that's why I'm skeptical of ideas that are sold via brilliant speeches and slick powerpoints. Or maybe it's because it's too easy to overlook the messy details, or to get caught up in details that seem very important, but aren't. I also get very bored by endless debate.
We did a lot of things wrong during the 2.5 years of pre-launch Gmail development, but one thing we did very right was to always have live code. The first version of Gmail was literally written in a day. It wasn't very impressive -- all I did was take the Google Groups (Usenet search) code (my previous project) and stuff my email into it -- but it was live and people could use it (to search my mail...). From that day until launch, every new feature went live immediately, and most new ideas were implemented as soon as possible. This resulted in a lot of churn -- we re-wrote the frontend about six times and the backend three times by launch -- but it meant that we had direct experience with all of the features. A lot of features seemed like great ideas, until we tried them. Other things seemed like they would be big problems or very confusing, but once they were in we forgot all about the theoretical problems.
The great thing about this process was that I didn't need to sell anyone on my ideas. I would just write the code, release the feature, and watch the response. Usually, everyone (including me) would end up hating whatever it was (especially my ideas), but we always learned something from the experience, and we were able to quickly move on to other ideas.
The most dramatic example of this process was the creation of content targeted ads (now known as "AdSense", or maybe "AdSense for Content"). The idea of targeting our keyword based ads to arbitrary content on the web had been floating around the company for a long time -- it was "obvious". However, it was also "obviously bad". Most people believed that it would require some kind of fancy artificial intelligence to understand the content well enough to target ads, and even if we had that, nobody would click on the ads. I thought they were probably right.
However, we needed a way for Gmail to make money, and Sanjeev Singh kept talking about using relevant ads, even though it was obviously a "bad idea". I remained skeptical, but thought that it might be a fun experiment, so I connected to that ads database (I assure you, random engineers can no longer do this!), copied out all of the ads+keywords, and did a little bit of sorting and filtering with some unix shell commands. I then hacked up the "adult content" classifier that Matt Cutts and I had written for safe-search, linked that into the Gmail prototype, and then loaded the ads data into the classifier. My change to the classifier (which completely broke its original functionality, but this was a separate code branch) changed it from classifying pages as "adult", to classifying them according to which ad was most relevant. The resulting ad was then displayed in a little box on our Gmail prototype ui. The code was rather ugly and hackish, but more importantly, it only took a few hours to write!
I then released the feature on our unsuspecting userbase of about 100 Googlers, and then went home and went to sleep. The response when I returned the next day was not what I would classify as "positive". Someone may have used the word "blasphemous". I liked the ads though -- they were amusing and often relevant. An email from someone looking for their lost sunglasses got an ad for new sunglasses. The lunch menu had an ad for balsamic vinegar.
More importantly, I wasn't the only one who found the ads surprisingly relevant. Suddenly, content targeted ads switched from being a lowest-priority project (unstaffed, will not do) to being a top priority project, an extremely talented team was formed to build the project, and within maybe six months a live beta was launched. Google's content targeted ads are now a big business with billions of dollars in revenue (I think).
Of course none of the code from my prototype ever made it near the real product (thankfully), but that code did something that fancy arguments couldn't do (at least not my fancy arguments), it showed that the idea and product had real potential.
The point of this story, I think, is that you should consider spending less time talking, and more time prototyping, especially if you're not very good at talking or powerpoint. Your code can be a very persuasive argument.
The other point is that it's important to make prototyping new ideas, especially bad ideas, as fast and easy as possible. This can be especially difficult as a product grows. It was easy for me to stuff random broken features into Gmail when there were only about 100 users and they all worked for Google, but it's not so simple when there are 100 million users.
Fortunately for Gmail, they've recently found a rather clever solution that enables the thousands of Google engineers to add new ui features: Gmail Labs. This is also where Google's "20% time" comes in -- if you want innovation, it's critical that people are able to work on ideas that are unapproved and generally thought to be stupid. The real value of "20%" is not the time, but rather the "license" it gives to work on things that "aren't important". (perhaps I should do a post on "20% time" at some point...)
One of the best ways to enable prototyping and innovation on an established product is though an API. Twitter is possibly the best example of how well this can work. There are thousands of different Twitter clients, with new ones being written every day, and I believe a majority of Twitter messages are entered though one of these third-party clients.
Public APIs enable everyone to experiment with new ideas and create new ways of using your product. This is incredibly powerful because no matter how brilliant you and your coworkers are, there are always going to be smarter people outside of your company.
At FriendFeed, we discovered that our API does more than enable great apps, it also reveals great app developers. Gary and Ben were both writing FriendFeed apps using our API before we hired them. When hiring, you don't have to guess which people are "smart and gets things done", you can simply observe it in the wild :)
In my previous post, I asked people to describe their "ideal FriendFeed". Since then, I've been thinking about ideas for my "ideal FriendFeed". Unfortunately, it's very difficult for me to know how much I like an idea based only on words or mockups -- I really need to try it out. So in the spirit of prototyping, I've used my spare time to write a simple FriendFeed interface that prototypes some of the things I've been thinking about. This interface isn't the "future of FriendFeed", it's just a collection of ideas, some that I like, and some that I don't. One thing that's kind of cool about it (from a prototyping perspective) is that it's written entirely in Javascript running in the web browser -- it's just a single web page that uses FriendFeed's JSON APIs to fetch data. This also means that it's relatively easy for other people to copy and change -- you don't even need a server!

If you'd like to try it out, you can see everyone that I'm subscribed to (assuming their feed is public), or if you are a FriendFeed user, you can see all of your public subscriptions by going to http://paulbuchheit.github.com/xfeed.html#YOUR_NICKNAME_GOES_HERE. The complete source code (which is just several hundred lines of HTML and JS) is here. In this prototype, I'm experimenting with treating entries, comments, and likes all as simple "messages", only showing comments from the user's friends (which can be a little confusing), and putting it all in reverse-chronological order. As I mentioned, this interface isn't the "future of FriendFeed", it's just a collection of ideas that I'm playing with.
If you're interested in prototyping something, feel free to take this code and have your way with it. As always, I'd love to see your prototypes in action!
Tuesday, January 6, 2009
If you're the kind of person who likes to vote...
Now is your opportunity!
FriendFeed was nominated for three "Crunchies". Please vote for us in all three categories:



I can't promise that your vote will end the war, fix the economy, or save the environment (that one is here), but I can promise that your vote might be counted.
Sunday, January 4, 2009
Overnight success takes a long time
For some reason, this weekend has seen a lot of talk about what FriendFeed is/isn't/should be doing (see Louis Gray and others). One person even predicted that we will fail.
I considered writing my own list of complaints about FriendFeed. I think and care about it a lot more than most people, so my list of FriendFeed issues would be a lot longer. I may still do that, but there's something else also worth discussing...
One of the benefits of experience is that it gives some degree of perspective. Of course there's a huge risk of overgeneralizing (someone took a picture!), but with that in mind...
We started working on Gmail in August (or September?) 2001. For a long time, almost everyone disliked it. Some people used it anyway because of the search, but they had endless complaints. Quite a few people thought that we should kill the project, or perhaps "reboot" it as an enterprise product with native client software, not this crazy Javascript stuff. Even when we got to the point of launching it on April 1, 2004 (two and a half years after starting work on it), many people inside of Google were predicting doom. The product was too weird, they said, and nobody wants to change email services. I was told that we would never get a million users.
Once we launched, the response was surprisingly positive, except from the people who hated it for a variety of reasons. Nevertheless, it was frequently described as "niche" and "not used by real people outside of Silicon Valley".
Now, almost 7 and a half years after we started working on Gmail, I see things like this:
Yahoo and Microsoft have more than 250m users each worldwide for their webmail, according to the comScore research firm, compared to close to 100m for Gmail. But Google's younger service, launched in 2004, has been gaining ground in the US over the past year, with users growing by more than 40 per cent, compared to 2 per cent for Yahoo and a 7 per cent fall in users of Microsoft's webmail.
And that probably isn't counting all of the "Apps for your domain" users. I still have a huge list of complaints about Gmail, by the way.
It would be a huge mistake for me to assume that just because Gmail did eventually take off, then the same thing will happen to FriendFeed. They are very different products, and maybe we just got lucky with Gmail.
However, it does give some perspective. Creating an important new product generally takes time. FriendFeed needs to continue changing and improving, just as Gmail did six years ago (there are some screenshots around if you don't believe me). FriendFeed shows a lot of promise, but it's still a "work in progress".
My expectation is that big success takes years, and there aren't many counter-examples (other than YouTube, and they didn't actually get to the point of making piles of money just yet). Facebook grew very fast, but it's almost 5 years old at this point. Larry and Sergey started working on Google in 1996 -- when I started there in 1999, few people had heard of it yet.
This notion of overnight success is very misleading, and rather harmful. If you're starting something new, expect a long journey. That's no excuse to move slowly, though. To the contrary, you must move very fast, otherwise you will never arrive, because it's a long journey! This is also why it's important to be frugal -- you don't want to starve to death halfway up the mountain.
Getting back to FriendFeed, I'm always concerned when I hear complaints about the service. However, I'm also encouraged by the complaints, because it means that people care about the product. In fact, they care so much that they write long blog posts about what we should do differently. It's clear that our product isn't quite right and needs to evolve, but the fact that people are giving it so much thought tells me that we are at least headed in roughly the right direction. I would be much more concerned if there were silence and nobody cared about what we are doing -- it would mean that we are "off in the weeds", as they say. Getting this kind of valuable feedback is one of the major benefits of launching early.
If you'd like to contribute (and I hope you do), I'd love to read more of your visions of "the perfect FriendFeed". Describe what would make FriendFeed perfect for YOU, and post it on your blog (or email post@posterous.com if you don't have a blog -- they create them automatically). Feel free to drop or change features in any way you like. Yes, technically you're doing my work for me, but it's mutually beneficial because we'll do our best to create a product that you like, and even if we don't, maybe someone else will (since the concepts are out there for everyone).
Saturday, January 3, 2009
The question is wrong
On "Coding Horror", Jeff Atwood asked this question:
Let's say, hypothetically speaking, you met someone who told you they had two children, and one of them is a girl. What are the odds that person has a boy and a girl?
He then argues that our intuition leads us to the "wrong" answer (50%) instead of the "correct" answer (2/3 or 67%).
However, the question does not include enough information to determine which of these answers is actually correct, so the only truly correct answer is "I don't know" or "it depends". I skimmed through the comments on the post (there are about a million), and didn't see anyone addressing this issue (though someone probably did). They mostly argued about BG vs GB for some reason.
The reason this question is wrong is that it doesn't specify the "algorithm" for posing the question.
If we assume that boys and girls are born with equal probability (50/50, like flipping a coin), then families with two children will have two girls 25% of the time, two boys 25% of the time, and a boy and a girl 50% of the time.
If the algorithm for posing the question is:
- Choose a random parent that has exactly two children
- If the parent has two boys, eliminate him and choose another random parent
- Ask about the odds that the parent has both a boy and a girl

then the answer is 2/3: of the families that survive step two, 1/3 have two girls and 2/3 have a boy and a girl.
However, if the algorithm for posing the question was instead:
- Choose a random parent that has exactly two children
- Arbitrarily announce the gender of one of the children
- Ask about the odds that the parent has both a boy and a girl

then the answer is 1/2: announcing a girl rules out the two-boy families, but a boy-and-girl family only announces a girl half the time, so the two effects cancel and the mixed probability stays at 1/2.
The problem with the question as originally posed was that it didn't specify which of these algorithms was being used. Were we arbitrarily told about the girl, or was a selective process applied?
By the way, if we're applying a selective process, then 100% is also a possibly correct answer, because at step two we could have eliminated all parents that don't have both a boy and a girl. Likewise, all other probabilities are also potentially correct depending on the algorithm applied.
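Both algorithms are easy to check with a quick simulation. Here's my own sketch (assuming, as above, that boys and girls are equally likely and independent):

```javascript
// Generate a random two-child family, e.g. ['B', 'G'].
function randomFamily() {
  return [Math.random() < 0.5 ? 'B' : 'G', Math.random() < 0.5 ? 'B' : 'G'];
}

// Algorithm 1: eliminate two-boy families, then ask for the fraction
// with both a boy and a girl. Converges to 2/3.
function selectiveProcess(trials) {
  let mixed = 0, total = 0;
  while (total < trials) {
    const fam = randomFamily();
    if (fam[0] === 'B' && fam[1] === 'B') continue;  // eliminated at step two
    total++;
    if (fam[0] !== fam[1]) mixed++;
  }
  return mixed / total;
}

// Algorithm 2: arbitrarily announce the gender of one child, and keep
// only the cases where a girl was announced. Converges to 1/2.
function arbitraryAnnouncement(trials) {
  let mixed = 0, total = 0;
  while (total < trials) {
    const fam = randomFamily();
    const announced = fam[Math.random() < 0.5 ? 0 : 1];
    if (announced !== 'G') continue;  // we were told about a girl
    total++;
    if (fam[0] !== fam[1]) mixed++;
  }
  return mixed / total;
}
```

Run with a few hundred thousand trials, the first function settles near 0.667 and the second near 0.5, which is the whole point: the two ways of posing the question really do have different answers.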
Update: Surprisingly, some people are still thinking that my second algorithm yields 2/3 instead of 1/2 (see the confused discussion on news.yc). I think part of the reason is that I was somewhat imprecise with the concept of "elimination". The second algorithm does not eliminate any of the families, but if I announce that there is a boy, that does eliminate the possibility of two girls. This is where some people are getting lost and thinking that the boy+girl probability has become 2/3. The catch is that announcing the boy also reduced the boy+girl probability by an equal amount, so the result is still the same (it eliminated either BG or GB, I don't know which, but it doesn't matter).