Sunday, September 20, 2009

Tahoe Weekend

I'm wrapping up a perfect mountain biking weekend with some old friends. We've been riding our brains out for three days up in Lake Tahoe. One of the guys used to own his own bike shop, so he became the de facto group support on the trail. What a luxury it is having a mechanic on the rides; the slightest mechanical issue gets resolved in an instant. He also turned out to be a great cook, so instead of constantly eating out, we had fabulous home-cooked meals to start and end the days. You rule, Buck.

About half of the group consisted of locals (Truckee, Tahoe Donner) whom one of my friends befriended when he lived up here for a few years. The local connection is always priceless: the right trails, the right subtleties, the right everything, instead of fumbling around like a pure tourist. One of the group owns a restaurant in Reno ("Bueno Grill"), and his annual community party coincided with our trip, so we partied there last night. Great people, great time.

The main rides:
One of my favorite things about riding a hard-tail bike is when I get mixed in with soft-tail riders. There's nothing like being told "you can't do this trail on that" by a softie, and then beating them to the finish. I did get to ride some ultra-plush custom Ventana full-suspensions, though. Really nice rides, I have to say, but just too disconnected from the trail. I need to feel what's going on underneath me, not be shielded from it. That makes for a good segue into one of the cool cars that came up.

The car enthusiast in the group brought up his new tricked-out BMW M5 (500 hp). While a killer ride with amazing power, the entire system is "drive-by-wire." Driving it was almost confusing, as I lacked a direct connection to the brakes, gearbox, and throttle. I want to be directly connected to a car with that much power. While BMW has done an incredible job pulling it all together, it was obvious there was a computer between me and the road.

I've been on V-brakes forever, but after riding some Hope hydraulic disc brakes, I'm going to switch over. The stopping power is incredible, and my forearms could use a break. I'm going to stay hard-tail though; all hard... all the time.

Great weekend. Nice to catch up with old friends. Nice to grind out some epic rides.

Thursday, September 17, 2009

TechStars Pain Pattern

Image courtesy of Colin Sackett | Book design & publishing.


I recognize patterns, large and small, over short periods and long. Remember those SAT questions asking you to find patterns in obscure information? I owned those. After my second year involved in TechStars, a pattern has squarely emerged: almost all teams build something quickly on some LAMP-stack derivative, and the teams that "succeed" (i.e., find users/usage) always start asking performance-related "help" questions.

"The application is slowing down." "The site is slow when we do a campaign." etc.

I wind up digging into a team's stack and inevitably see three things that account for most, if not all, of "the problem."
  1. Page loads cause complex SQL queries to be run. Don't do that. It's cute and easy when you don't have usage, but it won't work when you get traction. If you did it for expediency's sake, fine, but plan on undoing it later. DBs can crater any good application. Cache the data your users want so they don't have to hit a DB for it (see the caching sketch after this list). You can't always tear a DB out of the flow, but try, and try hard.
  2. The server does more processing than it needs to do. Use your users: they have powerful computers and browsers that can execute code very well. If server-side work becomes a problem, shunt the rendering of data/objects down to the browser via JS (or Flash). Do you really need to parse that DOM on the server for every page load to derive the list of things you want to show the user? No; let client-side JS do the parsing and sorting. It's a fine line, though; overburdening the browser doesn't buy you much either. The point is to find the balance. Your goal in life should be to serve flat files that require nothing more than Apache connecting an IP socket to a file descriptor and letting the OS shuffle bits to and fro. Follow that path and good things will come. Flatten your stuff before a user accesses a page on your service, not when they do (see the flattening sketch after this list).
  3. Don't store images in a database. C'mon, seriously. Worst case, store references to said images that live out on a disk (or CDN) somewhere. I'd argue you don't even need to store the references if you get your model right; your app should just be able to derive an image's location from some pattern (see the URL-derivation sketch after this list). Ahhh, full circle.
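To make item 1 concrete, here's a minimal sketch of the cache-in-front-of-the-DB idea in Python. The in-process dict stands in for memcached/redis, and the key name, TTL, query, and `db` handle are all invented for illustration, not taken from any team's actual stack.

```python
import time

# Toy in-process cache standing in for memcached/redis.
_cache = {}          # key -> (expires_at, value)
CACHE_TTL = 300      # seconds; tune to how stale the data may be

def get_popular_items(db):
    """Serve the expensive query from memory; only hit the DB on a miss."""
    key = "popular_items"
    hit = _cache.get(key)
    if hit and hit[0] > time.time():
        return hit[1]            # hot path: no DB touched at all
    rows = db.execute(
        "SELECT id, title, score FROM items ORDER BY score DESC LIMIT 20"
    ).fetchall()
    value = [dict(r) for r in rows]
    _cache[key] = (time.time() + CACHE_TTL, value)
    return value
```
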
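For item 2, a sketch of "flatten before access": a cron/worker job writes the data a page needs out as static JSON, so Apache serves a flat file and client-side JS does the sorting and rendering. The docroot path, schema, and `db` handle are assumptions for the sake of the example.

```python
import json
from pathlib import Path

DOCROOT = Path("/var/www/html/data")   # assumption: a directory Apache serves directly

def flatten_listing(db):
    """Run the expensive query on a cron/worker schedule, not per page load."""
    rows = db.execute(
        "SELECT id, title, score FROM items ORDER BY score DESC"
    ).fetchall()
    payload = [dict(r) for r in rows]
    DOCROOT.mkdir(parents=True, exist_ok=True)
    # Write atomically so a request never sees a half-written file.
    tmp = DOCROOT / "listing.json.tmp"
    tmp.write_text(json.dumps(payload))
    tmp.replace(DOCROOT / "listing.json")
    # Pages now fetch /data/listing.json and let client-side JS sort,
    # filter, and render it; Apache just connects a socket to a file.
```
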
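And for item 3, the derive-don't-store idea: if uploads land at a path computed from the item's id, the app never needs a DB row (or even a stored reference) to find an image. The CDN hostname and sharding scheme here are made up.

```python
import hashlib

CDN_BASE = "https://cdn.example.com/images"   # hypothetical CDN origin

def image_url(item_id: int, size: str = "thumb") -> str:
    # Shard into directories by a hash prefix so no single directory holds
    # millions of files; the same rule applies at upload time, so the
    # location is always derivable without a lookup.
    shard = hashlib.md5(str(item_id).encode()).hexdigest()[:2]
    return f"{CDN_BASE}/{shard}/{item_id}_{size}.jpg"

# e.g. image_url(42) -> "https://cdn.example.com/images/<shard>/42_thumb.jpg"
```
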
On a related note, at a local "CTO lunch" the other day, Todd Vernon cited an old blog post of his called "Scaling Your Startup." A few of these principles are reiterated there.

Friday, September 11, 2009

Can't scale? Be Sure To Blame Your Code.

Web-app scale bottlenecks (software and hardware) have leapfrogged each other a few times over the past 15 years, and I just came across a concise view of how software had to react once hardware/bandwidth reached a new speed/throughput tier back around the year 2000: http://www.kegel.com/c10k.html

The C10k problem (and solutions) document is a fabulous overview of the techniques used to eliminate server software as the bottleneck. Thus, hardware became the bottleneck again.

If you can't meet your application's user demand, it is because you haven't written the correct software to do so. You can try to blame the fact that you don't have enough hardware/bandwidth, but that problem is solved with the flick of a pen (money). Your challenge is solving your software problem. Enough components in the stack (web servers, application servers (custom if you have to), and data storage solutions) now solve the C10k problem that any remaining issues are almost certainly in your own application code. Find better tools/components, and/or write better code.
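To make "find better tools" concrete, here's a minimal sketch of the event-driven style the C10k write-up surveys: one process multiplexing many connections instead of one thread or process per connection. The choice of Python's asyncio, the port, and the canned response are my own assumptions for illustration, not anything the C10k document prescribes.

```python
import asyncio

async def handle(reader, writer):
    # Each connection is a cheap coroutine rather than an OS thread or
    # process, so one process can keep tens of thousands of sockets open.
    await reader.read(4096)   # read (and ignore) the request
    writer.write(b"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok")
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 8080)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```
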

If you don't have the money to acquire the hardware you want, you might be working on the wrong thing.