Sunday, December 20, 2009

Gnip's 2009 Rocked My World




After spending 2008 building a phenomenal development team, early 2009 tested me like I'd never been tested before. Half of my career has been spent building/managing teams, and I thought I had it dialed in. Along the way, feedback from individuals (peers, bosses, employees), raises/bonuses, and promotions reinforced an apparent self-delusion. At the end of Q1 I was met with the proverbial dousing of cold water in the face. I was confronted with a "Jud goes or I go" situation. My heart broke (it's better now and stronger than ever). I stayed.

Compromise
Prior to that confrontation, while bouncing issues/ideas back and forth with someone whose experience I highly regard and respect, they suggested that I was "overly principled." It was an interesting choice of words, I thought. It felt like a gentle way of saying "you're stubborn" or "you're inflexible"; something like that. I used to think I was flexible on the things I should be flexible on, and passionate about the other set. However, the entire scenario caused me to reflect and realize that in fact I was incredibly rigid on a few things I shouldn't have been rigid about. I've since changed my approach entirely when working with people (bosses, employees, peers) to one that starts with openness, and morphs from there. Call me soft, but this approach is bearing delicious fruit. Have I sacrificed too much? So far no, but only time will tell; I'm hopeful the answer stays "no."

Any Monkey Could Play
I used to pride myself exclusively on being able to a) identify great opportunities, b) identify, and hire, great talent, and c) rinse-repeat. The theory went that if you only pick winning ideas, and winning people, you will win. My perspective was that if you set that up, then the rest is cake. When I was churning on this with a friend, he suggested "dude, if that's the game, then any monkey could play." With my tail between my legs, I realized the challenge is in recovering from the mistakes (that inevitably get made), adapting to uncontrollable dynamics (even the best laid plans fail), and persevering. Identifying great opportunities and people to work with is only the beginning, and frankly, it's the easy part.

To Thine Own Self Be True
My family will tell you I don't have an empathetic bone in my body, and it's likely because I pour my empathy into my employees. Earlier this year I learned that I was out of balance with this arrangement. My "overly principled" stance led to my being overly emotionally invested in certain people, approaches, and dynamics. I'd always been this way on the job, and it had worked exceedingly well. However, when running a business, this isn't priority #1. If you're managing people in a mid-to-large sized company it probably is priority #1, but when you're trying to start something from the ground up, enough shrapnel flies around that folks are going to get hurt. My takeaway here is that I have to shift my empathy around a bit. This is a tricky balance between doing the Right thing from a human/people standpoint, and doing the right thing for the business. It's a different calculation for each person. I'm adjusting mine.

Back to that Flexibility Point
My partner and I have big egos, yet they come out in wildly different ways. Our initial approach to building the product and company left a palpable struggle over control in the air. The struggle nearly destroyed the company. With him ultimately responsible as CEO, things came to a head and we firmly hit the reset button on the company (we let almost half of the company go, and we changed the product direction, and technology stack, entirely). We should have done it earlier. The bravery he showed in making that decision was admirable, and it's a lesson I'll take with me forever. While the words went unspoken, it was time for me to back off of a slew of points that I'd been firmly holding. The ego struggle had to yield or the company would die. That's behind us now; neither one of us has to expend energy in that space anymore, and we get to focus our energy on things that matter, like building a phenomenal product that meets our customers' needs.

If you're in an early stage startup...
  • focus on building the product, not the company. if you succeed, you can focus on the latter. there's a chicken and egg challenge here that's fun to play with.
  • if you're doing the previous thing, then you can't handle everyone with soft and fuzzy kid gloves. empathize with people, but be clear and concise around what's working and what's not.
  • be wary of too many cooks in the kitchen. I used to think that I'd rather deal with the challenge of managing multiple egos and blustering, but no more. I'd rather have one rooster on the team. As always this isn't perfectly cut and dry, but...
  • employees are happy to tell you what's wrong with your company, but will rarely, if ever, tell you what's wrong with you.
  • trust your gut; always.
  • do your best, and don't worry about the people around you. a year ago Esquire magazine interviewed Clint Eastwood and he spewed this gem: "As you get older, you're not afraid of doubt. Doubt isn't running the show. You take out all the self-agonizing."
This whole post feels cliché, as though I should have known all of this; heck, I've probably been preaching many of these things myself. Always funny how life twists and turns.

Work hard.

Sunday, December 13, 2009

Changing My Computer Interface

For a few weeks now I've been using fingerprint readers to log in to my computers. What started as an experiment to see how good, or bad, the technology has become over the past several years has turned into my preference for logging into my machines and accessing sensitive information.

The basic idea was to minimize the number of times per day that I have to type my 13 character password. Across my machines I estimate that I get asked for my password 36 times per day (not including websites, but that's a different post). I now respond to at least 24 of those requests with a finger swipe. The remaining requests still require password typing as those requests are to unlock my Apple keychain which the software I'm using (Upek Protector Suite for Mac) doesn't support yet (other than one-time global unlocking which I don't want to enable).

The reader accuracy is effectively 99%, so recognition, which I thought would be an issue, is a foregone conclusion these days; a non-issue.

I'm left wondering why all machines don't integrate (via mouse, keyboard, or body) fingerprint readers by default. Then again, I'm also the guy wondering why all machines don't come standard with retinal scanners. I know the answer to both wishes, but... dare to dream.

It looks like the Protector Suite software supports Windows machines 10x better than it does OSX, but the OSX version is enough of a step up in user experience for me that I'm sticking with it.

A notable bug is that the manual password override doesn't work when bringing the machine out of sleep mode if the USB reader is NOT plugged in. Put another way, you always have to have the reader plugged in when moving in/out of sleep mode. This is a major annoyance when using a laptop, as it means you have to drag the thumbdrive-sized device along with you. The workaround is to cold-boot the machine, potentially losing data in the process. Upek blames Apple's Snow Leopard release, and is waiting for them to resolve the issue, which I suspect will never happen.

Feature Requests:
  • Firefox password manager support for Mac.
  • Background swipe monitoring for fast user switching. If I walk up to my home machine, which has 4 different user accounts on it, and the machine is logged into userA (I'm userB), I want to just swipe my finger and have it automatically switch me to userB.
  • Individual keychain access request unlock swipe support.
  • Physical hardware integration with all the hardware I use: iPhones, starting my car, all my machines, etc. The readers must cost next to nothing to manufacture by now, so I'd like to see them become as ubiquitous as built-in web cams please. Thanks.
My configs:
  • All machines running Apple OSX Snow Leopard
  • 13" Macbook Pro laptop
  • 21" iMac
  • 13" Macbook laptop
  • One Upek mini/portable/thumb-drive sized USB reader
  • One Upek larger desktop based USB reader

Saturday, November 21, 2009

IP Address Brokers; Please Stand Up

I've been trying to formulate some clear thinking for a while now regarding a major challenge on the network today. It's been brewing for years, and the proliferation of API usage is boiling it over. The power of open discussion is finally giving the topic some structure and vocabulary, and I'll try to take it a step further with this post. Rinse... repeat.

I spoke at Boulder's CTO lunch earlier this week and we got around to talking about the importance of IP addresses/namespaces/blocks in today's API economy. Josh Fraser (attended the lunch as well) does a great job distilling much of the thinking in his recent blog post.

History has taught us that control of information is a powerful economic motivator. Governments and economies rise and fall when regulatory constraints affect how groups of people, or individuals, access the flow of information/goods. The control may be fueled by greed or survival, but somehow choke points always make their way into the system. APIs in high demand either put business controls on access, or technical controls in place in order to keep backend software running. These controls ultimately boil down to the only identifier that is immutable by the time it arrives at a computer system's gateway: the IP address.

IP addresses are discrete and categorized. They are the single unit that can be perfectly controlled at the very edge of your network. Using them, your system can easily determine who to let in and who to keep out, either on an individual basis or by grouping "all requests coming from company X."
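To make the point concrete, here's a minimal sketch of that edge-of-network gating in Ruby, using the standard library's IPAddr; the block list and its labels are hypothetical, purely for illustration:

require 'ipaddr'

# Hypothetical block list: CIDR ranges mapped to the group they represent.
BLOCKED_RANGES = {
  IPAddr.new('10.1.0.0/16')  => 'company X',
  IPAddr.new('192.0.2.0/24') => 'known abuser pool'
}

def allow_request?(remote_ip)
  ip = IPAddr.new(remote_ip)
  # Reject individual addresses and whole groups alike, right at the edge.
  BLOCKED_RANGES.keys.none? { |range| range.include?(ip) }
end

allow_request?('10.1.5.9')    # => false (grouped out as "company X")
allow_request?('203.0.113.7') # => true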

The advent of cloud computing allows developers to build applications across large IP address blocks owned by someone else (e.g. Amazon). I blogged about this potential fatal flaw a few months ago. This is a boon for developers and abusers alike, and it's the latter bucket of individuals that will change the way IP addresses are used for API access. This will be the second coming of IP addresses onto the field. The first major control ecosystem for IP addresses came from email and email spam. This new tier will come from APIs, and the abuse/spam of them.

For months I'd been trying to figure out how to virtualize the IP address itself. In Ruby land, Heroku has pushed IP address allocation so far up into the stratosphere that it's the closest thing I've seen to getting the IP address completely out of my way. Its only problem is that I still have to have knowledge of the block of IPs from which it draws.

As a developer I don't want to have to think about the IP address from which my software is making requests. I don't want to know if it's "clean" or "dirty" from the POV of the service I'm trying to access. I just want to access the service, within the bounds of its ToS. However, with cloud computing, I may be punished because the IP address I'm using may sit in a block of addresses that the service provider has "blacklisted" and constrained access to. This is bad, and I currently have to spend time thinking about it and working around it, much like legitimate email senders had to do yesterday. Today, however, they can pay an intermediary to ensure the email gets through. I want an intermediary to ensure my API calls will get through.

Industries Born
In the coming months we're going to see router manufacturers make plenty of money providing more configurable IP routing/blocking/management solutions built directly into their firmware. Companies have productized their APIs, and ops teams are going to need easy solutions to managing the IP addresses accessing those APIs.

More significantly, we're going to see IP address brokerages emerge for APIs just as we did for email. Hundreds of millions of dollars are spent each year to ensure email gets through. Email brokerage is a big business, and I'd like to see those firms provide API brokerage as well (hint hint SendGrid).

Saturday, November 7, 2009

Isolated Collaboration



I periodically check out the Mozilla Add-ons site to see what's new. I just grabbed Reframe-it, given that a decentralized, client-side commenting model that isn't tied to any one publishing platform only makes sense. Sadly, no one's using it; the sites I visited didn't have others commenting on content. This reminded me of the me.dium (now oneriot) sidebar I worked on years ago. Again, another solid decentralized collaborative client (centralized server) product idea that consumers wouldn't consume. Again, sad.

As consumers, why do we continue to do it the wrong way? Relying on publishers to install walled-garden user feedback/comment models (IntenseDebate, Disqus, etc) is a disaster. When does commenting/discussion break out into its own client/server/protocol (obviously reusing something on that front) so I don't have to rely on the publisher to have done something with their site to support feedback loops? I wonder if Google Wave will gain enough momentum to cause real change here.

The industry is heavily weighted toward open protocols/standards that allow services to cross-communicate (OAuth and OpenID come to mind). While a good thing, I think we've swung too far to the server-side in this regard. Look at it this way: sans browsers (clients), we've got nothing, yet we're trying to build something (collaboration) based on servers/publishing platforms that have walls between them. That seems horribly broken to me. The model is ripe to have clients (mind you, potentially server-side operated clients... not strictly client-side software... though in this case I suspect client-side is opportune) act as collaborative agents in order to get a sense of community across the network at large (not just within various walled gardens).

I don't buy that customers don't want this; lots of people comment on blogs. It falls squarely into the "we don't know what's in our own best interest" bucket. It's going to take a consortium of the major publishing players (from news sites, media companies, blog services) to buy-in to a standard (ha!), or the client (browser) is going to have to force the UI/UX onto us; optional plugins/addons won't cut it.

The network is desperate for a revolution on several fronts. I'm hopeful there's enough steam in the collective invention engine to make some stuff happen. While I've enjoyed living through the advent of the Internet, I don't want it to have been the only major societal shift in my lifetime.

Tuesday, October 20, 2009

Like Father, Like Son




Ahhh parenting.

Earlier this evening my wife and I returned from our seven-year-old's first parent-teacher conference as a first grader. We'd done a few when he was in kindergarten, but those don't really count as they're just too young to derive much from the discussion.

This one was hard.

I never fit in growing up. I probably still don't today, but as we grow older there are fewer and fewer cliques for folks to actually fit into, so things tend to find a natural balance. At first glance it's looking like my son won't fit the general profile either. I feel bad for my wife because, if these early signs are a harbinger, she's not ready for what lies ahead in the years to come.

You see, I can be a tad pedantic, and my son is trending the same way. I've always been this way, and particularly in the early years, it really gets in the way. When other kids were blissfully unaware of what was going on around them, I was obsessed with my surroundings; people, authority (real & imagined), culture, structure, music, science, how the world works, etc. The challenge is that such an outlook on life doesn't fit the mold, so there is constant tension, confusion, and misunderstanding amongst the players.

The education system deemed me "too serious," "angry," "depressed," "anti-social," and on and on and on. As a result, I was always the odd man out. The models didn't work for me, so I got squeezed out to the sidelines while the majority gleefully marched along.

After some big bumps in the road, I finally figured out the game and realized it was easier to play along with everyone else for a few more years, then I'd be off to college and able to define my world the way I wanted it to be. I'd get to leave the constricted frameworks behind. All of that has worked out pretty well, though there are still plenty of constructs I don't fit in. I've learned how to adapt/adjust however, and life is good.

We talked with the teacher for awhile about some of these notions, and I imagined my parents' first parent-teacher conference with my first grade teacher. I'm sure it went the exact same way. My parents screwed up plenty while rearing me and my brother (we'll obviously do the same with our children), but one thing they did that blows me away to this day is they loved me, believed in me, and fought for me for over a decade of true mayhem.

While I hope we're just in yet another proverbial phase, in case we're not, I look forward to leading my son with the same love, perseverance, dedication and faith that my parents did with me.

To my wife, if we have to walk down this path, while it'll be petrifying, don't worry, I've walked down it, I know which turns to take, and amazing things are at the other end.

To my son (if you ever wind up reading this), I love who you are. I love knowing we see the world through similar eyes. We experience life in the same way, and that bond is truly amazing; we'll share it forever in ways few ever do. Embrace who you are and enjoy it. Being the black-sheep yields greatness. I'll always be by your side.

Friday, October 16, 2009

Nokogiri Performance: xpath vs. tree walking/iterating

At Gnip we're doing some heavy XML parsing in Ruby and obviously chose Nokogiri as the underlying engine. I started the week doing xpath searches to tease out the elements/attributes from the documents we were parsing, and ended the week iterating over the root's children in order to achieve significant (~3x) performance improvements. While xpath is convenient, as you don't have to pay attention to document structure (assuming you have your head around xpath syntax to begin with), it's horribly expensive in terms of processing time. It's truly searching the document for what you want; expensive!

There's a big difference between using the "search" (e.g. xpath) interface on top of a parsed DOM and running over the entire tree, testing each node for what it is you're looking for. Code gets a little uglier when doing the latter, it's not as elegant/clean, but performance starts kicking in when you do it. Moving from xpath search-style parsing to tree walking yielded a ~3x parsing performance improvement for me. I suspect that going all the way to either Nokogiri's Reader or SAX interface would yield an additional 10% improvement over that. However, I'm stopping here for now, as the readability/complexity cost of a full Reader/SAX stack-maintenance model doesn't feel worth it at the moment. I would like to try pauldix's SAX machine declarative parser (built on Nokogiri) and run some benchmarks, but... another day.

Old:

doc = Nokogiri::XML.parse(some_xml_string) { |cfg| cfg.noblanks }

doc.xpath('/xmlns:feed/xmlns:entry').each do |entry|
  at = entry.xpath('xmlns:published').text
  id = entry.xpath('xmlns:id').text
  # ...
end


New:

doc = Nokogiri::XML.parse(some_xml_string) { |cfg| cfg.noblanks }

doc.root.children.each do |node|
  if node.name == 'entry'
    node.children.each do |sub_e|
      at = sub_e.inner_text if sub_e.name == 'published'
      id = sub_e.inner_text if sub_e.name == 'id'
    end
  end
  # ...
end
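For comparison, here's a rough sketch of what the SAX-style version might look like; this is my illustration of the stack/state maintenance burden mentioned above, not benchmarked code:

require 'nokogiri'

class EntryHandler < Nokogiri::XML::SAX::Document
  attr_reader :published, :id

  def start_element(name, attrs = [])
    @current = name # manually track where we are in the document
  end

  def characters(text)
    @published = text if @current == 'published'
    @id        = text if @current == 'id'
  end

  def end_element(name)
    @current = nil
  end
end

handler = EntryHandler.new
Nokogiri::XML::SAX::Parser.new(handler).parse(some_xml_string)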

Sunday, September 20, 2009

Tahoe Weekend

I'm wrapping up a perfect mountain biking weekend with some old friends. We've been riding our brains out for three days up in Lake Tahoe. One of the guys used to own his own bike shop, so he became the de facto group support on the trail. What a luxury it is having a mechanic on the rides; the slightest mechanical issue gets resolved in an instant. He also turned out to be a great cook, so instead of constantly eating out, we had fabulous home-cooked meals to start and end the days. You rule Buck.

About half of the group consisted of locals (Truckee, Tahoe Donner) that one of my friends befriended when he lived up here for a few years. The local connection is always priceless. The right trails, the right subtleties, the right everything, instead of fumbling around like a pure tourist. One of the group owns a restaurant in Reno ("Beuno Grill"), and his annual community party coincided with our trip so we partied there last night. Great people, great time.

The main rides:
One of my favorite things about riding a hard-tail bike is when I get mixed in with soft-tail riders. There's nothing like being told "you can't do this trail on that" by a softie, and then beating them to the finish. I did get to ride some ultra-plush custom Ventana full-suspension bikes though. I have to say, really nice rides, but just too disconnected from the trail. I need to feel what's going on underneath me, not be shielded from it. That makes for a good segue into one of the cool cars that came up.

The car enthusiast in the group brought up his new tricked out BMW M5 (500hp). While a killer ride with amazing power, the entire system is "drive by wire." Driving it was almost confusing as I lacked direct connections to the brakes, gearbox, and throttle. I want to be directly connected to a car with that much power. While BMW had done an incredible job pulling it all together, it was obvious there was a computer between me and the road.

I've been on V-Brakes forever, but after riding some Hope-branded hydraulic disc brakes, I'm going to switch over. The stopping power is incredible, and my forearms could use a break. I'm going to stay hard-tail though; all hard... all the time.

Great weekend. Nice to catch up with old friends. Nice to grind out some epic rides.

Thursday, September 17, 2009

TechStars Pain Pattern

Image courtesy of Colin Sackett | Book design & publishing.


I recognize patterns; large and small, over short periods and long. Remember those SAT questions asking you to find patterns in obscure information? I owned those. After my 2nd year involved in TechStars, a pattern has squarely emerged. Almost all teams build something quickly based on some LAMP-stack derivative. The teams that "succeed" (e.g. find users/usage) always start asking performance-related "help" questions.

"The application is slowing down." "The site is slow when we do a campaign." etc.

I wind up digging into a team's stack and inevitably see three things that account for the bulk of, if not exclusively, "the problem."
  1. Page loads cause complex SQL queries to be run. Don't do that. It's cute and easy when you don't have usage, but it won't work when you get traction. If you did it for expediency's sake, fine, but plan on undoing it later. DBs can crater any good application. Cache the data your users want so they don't have to hit a DB for it (see the caching sketch after this list). You can't always tear a DB out of the flow, but try, and try hard.
  2. The server does more processing than it needs to do. Use your users. They have powerful computers and browsers that can execute code very well. Shunt rendering of data/objects down to the browser via JS (or Flash), if this becomes a problem. Do you really need to parse that DOM on the server for every page-load to derive the list of things you want to show the user? No, let client-side JS do the parsing/sorting. It's a fine line, however; overburdening the browser doesn't buy you much. Point is, find the balance. Your goal in life should be to serve flat files that require nothing more than Apache connecting an IP socket to a file descriptor to let the OS shuffle bits to and fro. Follow that path and good things will come. Flatten your stuff before a user accesses a page on your service, not when they do.
  3. Don't store images in a database. C'mon, seriously. Worst case store references to said images that live out on a disk (or CDN) somewhere. I'd argue you don't even need to store the references if you get your model right; your app should just be able to derive an image's location based on some pattern. Ahhh, full circle.
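To illustrate point #1, here's a toy version of the cache-in-front-of-the-DB idea. It's a hand-rolled, in-process TTL cache purely for illustration (in practice you'd reach for memcached or similar), and run_complex_sql_query is a hypothetical stand-in for your expensive query:

# Naive TTL cache standing between page loads and expensive SQL.
CACHE = {}
TTL   = 300 # seconds

def cached(key)
  entry = CACHE[key]
  return entry[:value] if entry && (Time.now - entry[:at]) < TTL
  value = yield # the expensive query only runs on a cache miss
  CACHE[key] = { value: value, at: Time.now }
  value
end

# A page load asks the cache, not the DB:
popular_items = cached('popular_items') { run_complex_sql_query }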
On a related note, at a local "CTO lunch" the other day, Todd Vernon referenced an old blog post of his called "Scaling Your Startup". A few of these principles are reiterated there.

Friday, September 11, 2009

Can't scale? Be Sure To Blame Your Code.

Web app scale bottlenecks (software & hardware) have leapfrogged themselves a few times over the past 15 years, and I just came across a concise view of how software had to react once hardware/bandwidth reached a new speed/throughput tier back around the year 2000; http://www.kegel.com/c10k.html

The C10k problem, and solutions, document is a fabulous overview of the techniques used to eliminate software servers as a bottleneck. Thus, hardware became the bottleneck again.

If you can't meet your application's user demand, it is because you haven't written the correct software to do so. You can try to blame the fact that you don't have enough hardware/bandwidth, but that problem is solved with the flick of a pen (money). Your challenge is solving your software problem. Enough components in the stack (from web servers, to application servers (custom if you have to), to data storage solutions) solve the C10k problem now that any issues in your application software are likely yours. Find better tools/components, and/or write better code.
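As a sketch of what the C10k-era techniques look like in practice, here's a minimal evented echo server in Ruby using EventMachine; one process, no thread per connection, just non-blocking I/O (illustrative, not a tuned production server):

require 'eventmachine'

# One process services many concurrent connections without a thread per socket.
module EchoServer
  def receive_data(data)
    send_data(data) # non-blocking write back to the client
  end
end

EventMachine.run do
  EventMachine.start_server('0.0.0.0', 8081, EchoServer)
end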

If you don't have the money to acquire the hardware you want, you might be working on the wrong thing.

Monday, August 24, 2009

Fatal Flaw in Cloud Based Social Media Apps?

For the purposes of this post I'm going to define "cloud computing" as a service that allows you to easily run your application on someone else's infrastructure, and importantly, within their IP address block range.

As "web app" web API usage continues to grow, the significance of the long forgotten IP address as a fundamental application stack component grows along with it.

Public-facing web APIs are accessed, via publicly facing IP addresses, by applications running on other publicly facing IP addresses. When the demand for a polled web API outstrips supply (artificially imposed or otherwise), engineers throttle access to said API based on the consumer's IP address. The consuming IP address is the only guaranteed uniquely identifiable attribute of a "web app." Therefore, it is the one thing that can guarantee the rate-limiting of a resource.
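A bare-bones sketch of that per-IP throttling (a fixed-window counter; real deployments push this into routers or dedicated middleware, and the limits here are made up):

LIMIT  = 100  # requests allowed per window (made-up quota)
WINDOW = 3600 # window length in seconds

counters = Hash.new { |h, ip| h[ip] = { n: 0, started: Time.now } }

def throttled?(counters, ip)
  c = counters[ip]
  if Time.now - c[:started] >= WINDOW
    c[:n] = 0
    c[:started] = Time.now # roll over to a fresh window
  end
  c[:n] += 1
  c[:n] > LIMIT # true once this consuming IP exceeds its quota
end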

This was all well and good when standing up your web application in an environment with its own IP address was relatively difficult. However, with the advent of Google App Engine and Amazon EC2 "clouds," "web app" deployment is trivial. The result is a lot of software utilizing a relatively constrained resource: the IP address.

The overuse and abuse of web APIs is well documented, and many services have been brought to their knees as a result. Operations teams have had to fall back on age-old IP address blocking techniques in order to protect themselves. Unfortunately, for cloud computing services, this means their IP address blocks/ranges are often blacklisted, which leaves legitimate web applications built on top of them out in the cold.

The historical parallel to this kind of black-holing of IP address ranges goes back to ISPs restricting email from servers which would consistently allow spam through their gateways. Any email provider of size has blacklists of known wrong-doing IP address blocks to ensure their systems (and their users) aren't crushed by the onslaught of spam. While a reasonable model for blocking spam, the thought of the same model being used to control web application innovation is frightening.

That said, the market will decide whether or not the IP address will remain the canonical access regulator. In many ways any other version of the future is incomprehensible, but now that the IP address has become such a fundamental part of an application developer's daily life, I have to think a new construct for cross-machine communication will evolve.

Cloud providers can either live with the fact that their IP address blocks are being limited, and have web API leveraging "web app" developers move away from their platforms, or they can work to find a solution. Unfortunately for them, their entire frameworks currently revolve around relatively contiguous IP address blocks, and changing that isn't easy given their operating scale.

As with any constrained resource, developers will work hard to obtain it, and web APIs are no exception.

Monday, August 17, 2009

Dirty Data, Python & Gnip

I've been tinkering with Python (2.5) over the past few months; both in Google App Engine, and in free-running apps/processes. My initial free-running apps ingested "structured" content from a variety of web APIs and would crash roughly every 12 hours, needing a restart. Subsequently, I took a "short-lived" process approach to managing these apps: cron would monitor them to make sure they were still running, and restart them if they'd died.

More recently I built an app that digested data from a known clean API (Gnip). Digesting data from Gnip ensures consistency in data format and structure. As a result, the Python app has been running for several days without issue. Now, of course the initial app crashing was due to bugs in code I wrote, but bulletproofing against broken/dirty/poorly-encoded/inconsistently-encoded data coming from random web APIs is a pain. Covering every case in modern apps takes a lot of energy. I opted for the "bounce it" strategy rather than debug the issue (a major time sink due to variability and inconsistency; any engineer's worst nightmare).
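For contrast, the "guard everything" alternative to bouncing the process looks roughly like this (sketched in Ruby, the language of this blog's other examples, even though the apps in question were Python; parse_record, process, and log are hypothetical):

# Guard each record so one dirty/poorly-encoded item can't kill the process.
records.each do |raw|
  begin
    item = parse_record(raw) # hypothetical parser
    process(item)            # hypothetical handler
  rescue StandardError => e
    log("skipping bad record: #{e.message}") # swallow, log, move on
  end
end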

The new application has instilled faith in Python as a choice for long-lived app processes, and reinforced how important clean input data is.

Wednesday, July 8, 2009

Sarcasm and Sentiment Analysis

Sentiment analysis of digitized content (tweets, email, blog posts, etc) is hard. Sarcasm makes it even harder. Consider how many sarcastic comments are made in our online communications each day. "I love being delayed at the airport." "I can't stand it when everything is going my way." etc. Analyzing text like that has got to throw even the best sentiment analysis engines for a loop, and the false positives start flying.
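To see why, consider a toy keyword-based scorer (entirely my own illustration) running over a sarcastic sentence:

POSITIVE = %w[love great awesome]
NEGATIVE = %w[hate awful terrible]

def naive_sentiment(text)
  words = text.downcase.split
  score = words.count { |w| POSITIVE.include?(w) } -
          words.count { |w| NEGATIVE.include?(w) }
  score >= 0 ? :positive : :negative
end

naive_sentiment('I love being delayed at the airport')
# => :positive -- "love" scores high, but the sarcastic intent is negative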

If you're sarcastic, like me, you've learned to keep your sarcasm to a minimum when you're writing because the context just isn't there for your reader, much less a machine, to understand the subtle shifts in tone or where you're coming from.

I'm looking forward to sentiment deduction getting better, but I'd like to see how the logic evolves to understand age-old sarcasm.

Maybe we will all just stop being sarcastic to support the machines running our lives.

Thursday, July 2, 2009

Speed Date with Google App Engine

I assumed that in the months since the announcement of Google App Engine, its glaring HTTP client deficiencies would have been resolved. Nope.

Any modern platform needs a robust HTTP client (timeout controls, full method support, custom headers, compression support, authentication support, and redirect handling). Unfortunately, GAE's urlfetch client (which the standard Python HTTP clients all funnel down to) doesn't let you tweak various headers (including Referer). Nor can you customize the connection timeouts. Both of these tweaks are tools of the modern-day web services programming trade. Consequently, I have to cast GAE to the tinker-toy pile with the rest of today's high-level web apps. A quick look at the app repository proves this out, sadly.

On the other hand, take a look at what's been built on Amazon's AWS. Goes to show what you can do when you have an open platform.

That said, GAE does show promise for hosting simple user-facing web applications, or offline data crunching/hosting apps (a la google.com) with quick user-facing response times and little reliance on the outside web.

Friday, June 26, 2009

TechStars Boulder 2009: Half-Time Report

The other night I attended the first "pitch practice" for this year's set of TechStars Boulder companies. Watching the evolution is always fascinating. So far, my first impressions are holding strong. Those who I suspected would struggle, are struggling. Those who I expected would be knocking it out of the park are doing so.

Comparing and contrasting the 2009 crew to the 2008 crew, I find that this year's companies are, on the whole, more mature than last year's. More companies this year have their products further down the path toward where I think they will ultimately end up. It's been nice to work with teams that are more crystallized in their thinking and implementations. Of course, there's always the crew that bounces back and forth for a while until they hone things to the point where they can walk down a straight line.

For those doing user facing products, the focus on the concepts that will "hook" a user is much better than last year. Too many priorities tend to doom a team, and recognition of this is coming fast. That said, there's a big difference between knowing which features to drop and actually dropping them; it can be hard to let go.

For those doing more infrastructure-intense products, the technical skills brought to bear, and understanding of the issues at hand, are much more advanced than last year. The infrastructure plays have a special place in my heart, so it's been fun to work with folks doing more backend stuff.

Of course, there's a star burning hotter than the others. This team has taken a problem that billions of dollars have been thrown at, to little avail, and turned it upside down. As a result, they have a phenomenal product that is going to do things for an industry that has been begging for it for decades. Brilliant, and totally cool. I can't say who it is, but it will be apparent when the season's over.

Some technical patterns/themes that pervade almost every team this year:
- Polling. Mashing APIs together is the norm now, and the access paradigm overly leveraged is polling. Conveniently my company Gnip (http://gnip.com) is trying to make this easier.
- Queuing. Polling's ugly sibling. More teams are challenged with queuing needs in their application, which bumps complexity up a notch. The simplest advice is best here. Queuing Theory 101: if the average inbound rate of items is greater than the system's ability to digest them, you're screwed; rethink the model (see the simulation after this list).
- Data Storage. "How am I going to store all that data in an access efficient manner?" The inbred offspring of Polling and Queuing, data storage challenges are real for a few of the companies. For the others, the age old simple relational DB model will foot the bill.

One thing that will never cease to amaze me is the energy, passion, and commitment that radiates from the teams. Amazing.

Boulder is lucky to have this program, and I'm lucky to be a part of it.

Thursday, June 18, 2009

Faithfully Breaking Rules

Spending a week on a 15k person island (Martha's Vineyard) with family has made me think about breaking the rules in more ways than one. Reading the local paper this morning reminded me of how important it can be to break the rules. One of the bakeries in Oak Bluffs opens their back alley door at 10:30pm every night to sell doughnuts as they're coming off the line; all night until 7am. I'm sure they're breaking numerous zoning and health code rules in the process, but needless to say with a population of this size, everyone loves it, and no-one cares; no harm no foul.

The "family" aspect of this vacation has me bending/breaking, and enforcing, numerous parenting rules as well. Ice cream everyday? No problem. Licorice before breakfast? Sure.

Reading about President Obama's Finance industry reworking got me thinking about "bigger" rules that affect our everyday lives, indirectly and directly. That turned me to one of my favorite, and brutally simple, rules that we, internationally and cross-culturally, effectively never break: "stay on your side of the road when driving."

Think about it. Every day, millions of people drive two-thousand-pound chunks of metal at high speeds in opposite directions, with nothing more than a couple of feet between them as they pass each other. There is some base rule that taps into our mortality and truly prevents us from breaking this one. We have faith that complete strangers will adhere to the rule as well. We hand our lives over to other drivers every day. I always like coming back to this one, as it's an interesting exercise regarding faith in others.

Photo by: William C. Beall of Washington Daily News

Monday, June 1, 2009

"Mommamacations" & Perfect Software

My 6.5-year-old son and I built a Lego Mindstorms vehicle yesterday. After constructing it, we wrote the software for it. After watching version 1.0 of our software run for about 5 seconds, we noticed a bug, so we iterated, fixed the bug, and ran v2 of the software. After about 30 seconds we noticed another issue with the number of degrees the vehicle was turning when it confronted an obstacle. We tuned the software to increase the angle to 90 degrees, compiled, pushed the code to the vehicle, and ran it.

This version, v3, of the software ran for a while. It ran at home, at his grandparents' house, and again this morning. It ran well, for a long time. However, a few minutes ago we found yet another refinement we could make to the turning angle to get it out of a jam even faster, and I said "aha, I found another modification we can make!" My son replied, "let's make all of the mommamacations [sic] this time." He wanted to write the software once, without bugs, perfectly.

I went on to explain how it takes time to understand how software is going to work in the real-world and how you can't account for all of the variables and scenarios up front. As a result, you build, test, and refine; you iterate. You can't write it once and have it work perfectly forever.

He didn't fully grok it, but it's starting to sink in. It was a neat interaction with my boy around what my world is all about. Ha! My daughter just yelled out "am I doing ballet today?" Gotta run.

Sunday, May 31, 2009

Google I/O: My Impressions

Photo from Matthias Schicker's post.

I attended last week's Google I/O 2009 conference in San Francisco. Here's what struck me.

HTML 5
Some friends and I debated what the punchline of the show was over dinner one night, but for me, it was HTML 5; the browser. The entire introduction sequence was about JavaScript execution, rendering speeds, & HTML 5 standards. The five minute automated intro demo before the keynote demonstrated web browser functionality, literally, using a web browser for everything. This theme was downright entertaining. HTML 5 is the distillation of everything we've wanted/needed in the browser over the past decade. What's particularly funny about HTML 5, and Google's all-hands-on-deck push for its implementation across the board, is that, without exception, all the relevant portions were knocked together (in prototype form at a minimum) a decade ago between Netscape/Mozilla, IE, and Opera. The entire two days felt like a browser resurgence.

What I liked about this was that Google had the gumption, and obviously the money, to make something old, new again. If you ever spent time working on one of the major browsers, you too see the world through HTTP/URL/JavaScript lenses. Those technologies unlock everything. It was cool and fun to be part of a conference dedicated to these concepts.

Wave
I left about a third of the way through the Wave introduction. Again, 10-year-old communication/message-threading concepts being demonstrated in front of a technical audience of four thousand. My initial reaction was: yawn. I've always loved the notion of treating messaging in a more centralized fashion (in a logical sense), both from a backend protocol/storage standpoint and from a UI perspective. Naturally flowing between an asynchronous email conversation and a synchronous IM conversation will be a beautiful thing when we finally get there. However, it felt awkward having Wave be one of Google's top three themes.

Architecturally, Wave appears to be able to get us there; however, standing up Wave providers en masse will take a long time (consider how long it took for SMTP/POP/IMAP to proliferate).

I'm particularly excited about Wave's leverage of XMPP (with extensions) as the connection/protocol model; it feels very fitting. Furthermore, Cisco's acquisition of Jabber last year is feeling like a sweet decision right about now. Imagine Cisco's XMPP routers hardened for Wave Providers; a nice dovetail.

Android
Google handed out four thousand Android/HTC mobiles in hopes of spurring Android development. I've gone so far as to pull down the SDK and do some dev "how-to" reading, but I've gotten distracted and moved on. There are three fatal flaws with Android and the HTC device.
  • The soft-keyboard is too small which makes it very hard to type. This is purely a function of the device/form-factor which can/will change over time.
  • No iTunes/iPod. There's a media player, but my world is painted in iTunes (for better or worse) and it's already a syncing nightmare, so I'm not about to add another framework into the mix. My "phone" and music/video are on one device (iPhone) and I can never go back to multiple devices. If anything, the iPhone has replaced my laptop as well on the occasional business trip.
  • The browser is all but useless. This shocked me, but the UI metaphors on the iPhone Safari browser (some of which I'm sure Apple has patented) are so well done that anything less on such a small form factor is a huge step backward.

Joseph Smarr on "The Social Web"
Smarr's always good to watch/hear. He understands the high level, yet always has his hands dirty with the actual hands-on implementation. He underscored how much things have changed with respect to OpenID and OAuth adoption over the past 12 months. Very true, and great to see. He mentioned Gnip, and Plaxo's integration points with it, which was much appreciated; thanks Joseph!

"Spelly"
One of the tracks was about "Spelly"; notably the server-side spell checker used in the Wave demos. What was so cool about this spell checker is that it was backed not by a dictionary, but by the statistical probability (language independent) of a word being spelled correctly based on its position relative to the surrounding words. For example, "Let's met tomorrow" slammed against the corpus of indexed web documents illustrates that the vast majority of the time, words starting with 'm' between "Let's" and "tomorrow" are spelled "meet", not "met." So cool!
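Here's a toy Ruby reconstruction of the idea as I understood it from the talk; the trigram counts are hypothetical stand-ins for the indexed web corpus:

# Counts of (left word, middle word, right word) trigrams, as if mined from a corpus.
TRIGRAM_COUNTS = {
  ["let's", 'meet', 'tomorrow'] => 98_000, # hypothetical frequencies
  ["let's", 'met',  'tomorrow'] => 1_200
}

def suggest(left, word, right)
  # Among observed middle words sharing the typed word's first letter,
  # pick the statistically dominant one for this context.
  candidates = TRIGRAM_COUNTS.select do |(l, m, r), _count|
    l == left && r == right && m.start_with?(word[0])
  end
  best, _ = candidates.max_by { |_, count| count }
  best ? best[1] : word
end

suggest("let's", 'met', 'tomorrow') # => "meet"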

App Engine + Java

Google's hamstrung Java to about the same degree they did Python in App Engine. If you're a high-level Java hacker you might have fun, otherwise this was a solid miss (at least for now).

Sunday, May 17, 2009

Boulder & California


While playing with my daughter this morning she pulled out the key-chain pictured here. I noticed that the "Boulder" sticker was sitting on top of another. I peeled it back and found "California" underneath. Growing up in Boulder, spending four years in Silicon Valley, then moving back to Boulder to help build and grow our software/technology sector, caused me to view the picture through several lenses.
  • California is passe and products/companies/people are re-branding themselves as Boulder which is trendy.
  • The key-chain manufacturer decided to put California stickers on the keychains as they came off the line, then localize the keychains on-demand and in smaller batches when needed, in order to save cost.
  • Some of Boulder's entrepreneurship is really California underneath.
  • Some of California's entrepreneurship is really Boulder underneath.
  • The Californication of Boulder is real.
  • The migration of Californians to Boulder continues.
A few friends of mine have written some interesting pieces on Boulder along these lines.
While I'm here, I'll plug what's turned into a phenomenal entrepreneurship and technology incubator; TechStars.

Thursday, May 14, 2009

Pair Programming & "top notch programmers"

Gnip is hiring again, so the flood of recommendations/resumes/suggestions has begun. Gnip's a pair programming shop, which means there are two developers for every CPU in the office, and two developers sit side by side, day in and day out. Pair programming isn't for everyone, and many simply aren't cut out for it. When hiring for a pairing position, digesting recommendations of "rockstar" or "top notch" programmers as friends/colleagues forward resumes along is often like trying to fit a square peg into a round hole. As an aside, check out my previous posting on rejection and our hiring process.

The majority of developers come from non-pairing backgrounds, and therefore the good candidates have learned how to build amazing code alone. They've built a reputation for being a rock star while sitting at a desk by themselves, knocking out software. It's hard rejecting incredible engineering talent because an individual doesn't play well in a pairing environment, but it's something we have to do at Gnip.

If you know a killer developer who you're thinking of pointing at Gnip, please consider their pair programming passion/tendencies before sending them our way. If neither exists, they're just not going to fit into our culture and process, regardless of how talented they are.

Wednesday, April 22, 2009

State of Boulder Dining

Boulder's at an interesting juncture when it comes to its restaurant offerings. If you adhere to the notion that opening a restaurant takes at least 12 months of planning, our new crop of joints is the brainchild of plans concocted prior to the economic disaster that unwound over the past year. What follows is some speculation around the new spots in town, and some perspective on some of the staples.

The New Guys
Two major components have shifted since the inception of Happy Noodle House, Terra, Full Belly, and Arugula; the small/mid-tier banks that lend to small businesses (e.g. these restaurants) have stopped lending, and diners aren't as fast and loose as they were a year ago.

Happy Noodle House
I'm loving Happy Noodle House these days, though the nearly 100% "community table" seating (aka "European seating") is taking some adjustment (on the part of the staff as well as the customers). Their price point, quality, location, the fact that they're fully operational, and their connection to Dave Query (nearly bulletproof) have me bullish on their longevity.

Terra
I'm highly skeptical of Terra's plan. The location is choice and the chef's got promise, but the build-out is expensive, and the price point is likely to be mid-to-upper tier as well. They'll get pitted against the ever-successful Kitchen, which ought to be an interesting battle. I don't think Boulder can support two "Kitchens" effectively adjacent to one another. That said, Kitchen's offering became overly consistent a couple of years ago for me. Every time I go, I know what my food's going to taste like, which leads me to believe the consistency in low-level flavors (be it the olive oil or butter used in *everything*) that contributed to their success is now making the experience too predictable. Terra may be able to siphon off bored Kitchen customers.

Full Belly + Arugula
As for Full Belly and Arugula, they're so far off the beaten path for me (my tiny world is confined to downtown) that I still haven't eaten at either. Assuming they're both delicious, Laudisio proved the location doesn't work, especially with two upstarts being adjacent. One will fail, the other will limp along.

Jax Fishhouse
I'm stoked for Hosea and his Top Chef win. We've religiously gone to Jax every Thursday night for over a decade, but... the Top Chef status has since ruined the experience for us. Dishes are bigger (see my post on trying to get dishes to be smaller), crowds (tourists) swamp the joint now, and the soul in the food is gone. We've stopped going, and will pick it back up in 6-12 months, when the hype has died down, to see if things get back to normal.

Frasca
Frasca gets its own section. The food remains impeccable and the staff remains committed and top notch. The location remains unfortunate. I'm hopeful they find a new location soon. I'm bearish on the 9th & Pearl development actually happening which would leave them at 18th & Pearl for the foreseeable future. I can live with that however. I go to Frasca to be surrounded by people (staff & patrons) who care about food and wine, and to have the best of both served by professionals.

Brasserie Ten-Ten
This is the ringer in the bunch. They've nailed it! Brunch, lunch, dinner, happy hour... you name it. The restaurant acclimates to the time of day, and day of the week, incredibly well. Food is consistent and flavorful, and the wine list is broad and deep. A few years ago Ten-Ten was highly suspect to me, but now I can bank on it anytime for a good experience. The build-out is quality as well.

There you have it. My insomnia fueled blog post on my view of restaurants in Boulder at the moment.

Friday, March 27, 2009

Twitter: Niche Advertising That Actually Works

Advertising is all about getting information about a company/product/service in front of a demographically appropriate audience to tease them into spending money on said product/service.

When everyone used to spend most of their time in front of live broadcast television, television ads were used to convey a product/service to general consumers. Then the internet and TiVo hit, destroying the television ad market. Internet ads are all but useless now as well (Google will one day have to own up to the click-fraud reality that underpins its financials). So, I'm left adrift in a free-market economy without a way for advertisers to reach me. Bummer.

Twitter is changing this. The companies/products/services I care about have figured out how to get to me, via Twitter.

Twitter provides a broadcast system that smart businesses are using to reach out to relevant people. I don't have to waste time consuming ads I don't care about. Instead, I get to selectively choose which businesses I want to hear from when some new product/service/discount/sale becomes available. Finally, the consumer is in control, and the advertiser doesn't have to mess with subscription lists (email/snail-mail/phone numbers) to reach me. All I have to do is "follow"/listen to the "channels" I want, and if I end up not liking the company doing the advertising, I can turn them off.

Feels darn close to a perfect advertisement communication system to me.

For example, local companies I "follow" who tell me about real-time services they provide are listed below. They get advertisements in-front of me, when I want them, and I make real-time purchasing decisions as a result.
  • https://twitter.com/TeeandCakes (just bought doughnuts from there because they tweeted that they had arrived)
  • http://twitter.com/twospoonssoups (not eating there today because the soup I love isn't avail today)
  • http://twitter.com/spudbros
  • http://twitter.com/larkburger (they're not in my daily walk/flow, but now I get good reminders of them and can decide to head over there when in the mood)
More advertisements via Twitter/broadcast please.

Tuesday, March 10, 2009

Startups and Benefits

Gnip is approaching its one year anniversary, and we're in the process of evaluating potential new benefit options to offer employees. I surveyed a few local startups, as well as some early stage VCs, to get their take on how they handle things. The results surprised me.

The question I asked was whether you/your firm offered a tax-deferred savings plan (e.g. SEP-IRA, SIMPLE-IRA, or 401k) to your employees.

Health Care
The healthcare related information I got back from folks was gravy, as I didn't even inquire about it.

Aside from equity, Gnip offers a range of healthcare plans through its HR/payroll aggregator service. It allows us to offer big-company health care options, even though we're a small-time player. It isn't cheap for the company, but the benefit to recruits and employees is obvious (and frankly a necessity in today's broken health care system). One of the startups I queried, whose CEO I highly regard/respect, put it best: "We weigh in pretty heavy on employee benefits because it glues them to us and makes it easier for people to rationalize to their spouse why they are in a risky job. If momma's unhappy - everyone is unhappy :-)." That captures my thinking as well.

Anyway, what I found was that half of the companies offered healthcare outright as a benefit, and those who didn't, offered some sort of monthly stipend (e.g. $500) to each employee so they could go get their own private healthcare (or at least cover some of its cost). Firms with ~5 or fewer employees opt for this option, while firms with >5 employees put real meat behind their health care offering (e.g. offer plans).

Retirement Savings
This is where the surprise came in. I'd say half of the firms offer nothing, and the other half offer non-matching 401ks. It obviously doesn't make sense to match contributions unless your company is profitable, and the firms I queried aren't in the black yet; so, no match. What struck me is that 401ks, matching or not, are not cheap for the employer to run (often >$2k/employee/yr), yet there are options at nearly an order of magnitude less employer outlay: IRAs (SEP or SIMPLE). Structuring a SIMPLE-IRA for employees runs approximately $300/employee/yr, and gives them the option to put aside $11,500 in pre-tax dollars into the same accounts they'd be investing their 401k dollars in. Of course, withdrawal terms vary between IRAs and 401ks, but when faced with either no retirement savings benefit or an expensive 401k option, I was expecting more startups to have opted for the cheaper *-IRA offerings. My guess is that most folks have experience with 401ks over corporate-sponsored IRAs, and just gravitate in that direction out of familiarity.

Gnip's still evaluating things, but I thought I'd share these findings as they might be useful to you.

I'm not a CPA, so any stupid decisions you make based on my statistically insignificant results, are your fault, not mine.

Sunday, March 8, 2009

Dell Inspiron Mini Server Issues

A while ago, my old Dell Inspiron 1000 laptop's hard drive failed, so I decided to upgrade to the Dell Inspiron Mini 9". I need a machine that can sit in a crawlspace under the house, with the lid closed, and run 24/7 as the server for my Davis Vantage Pro weather station.

The Inspiron 1000 lasted nearly a decade in these conditions, with no issues. I was excited about the solid state drive in the Mini 9" as fewer moving parts should keep the thing running longer.

Unfortunately, the machine completely locks up after about 48 hours. I was having to cold-restart it to get it to run for another 48 hours or so, and so on and so on. Obviously not a viable server solution when access to the machine is restricted; as it is in my setup.

My solution was to have a scheduled task run every 24 hours that restarts the machine. It's running Windows XP Home edition, and as long as it reboots every 24 hours, there are no issues.

Lame that I had to jump through these hoops.

Friday, March 6, 2009

Anecdotes, Advice & Context

Anecdotes and advice are everywhere; always flying off of plenty of lips. I have a few friends in various stages of starting companies, and they're soliciting my advice on various topics. I spoke in front of the TechStars for a Day crowd the other day to impart some of Gnip's experiences and share some of my perspective. Gnip had a board meeting yesterday wherein advice and perspective flowed around the room. Being on both the receiving and giving ends of advice and anecdotes these days has me thinking about advice in general (including any conveyed in this blog post), and its worth.

If it's not already obvious, advice is almost purely context-sensitive. If I tell you that highly compressed bike tires will make you go faster, yet you're climbing rocks on a very sandy/steep hill on a mountain bike, you will fail. You needed the surrounding context that I was talking about a road bike on a flat concrete surface. Without that context, my advice was worthless (it still may be, but you need as much context as you can gather). Anecdotes from a banker about how to fly a plane probably aren't very useful. So, my first piece of advice is that if you're looking for some, make sure it's from a relevant person whose context and experiences you have a good understanding of. You need to know who's dispensing the advice in order to distill the parts that will be useful to you.

Anecdotes from people who have failed are more valuable to me than those from people who have succeeded. Note I said "anecdotes" and not necessarily advice. In my world, success is primarily a function of luck, so successful people going on and on about how "they do it" is boring and has a higher probability of being wrong. If you want usable perspective from someone who has repeatedly demonstrated success in a field relevant to you, you have to engage in a deeper conversation with them in order to get better context.

Rules will kill you (yes, even that one). I've distilled my own set, but I always cringe when I find myself conveying some of them to others; when I become "that guy." We all do it; it's natural, but also hypocritical; ahh, the wheel of life. "Never look back." "The only way to pull that off is to..." Etc. The very things we strive to find (concrete, definitive, clear statements about "how") are the very things that are usually wrong.

In the end I've found that I'm able to do my best, and have the most fun, when I immerse myself in a context (e.g. building software): its people, its leaders, its successes, its failures, its history, its future; its context. There are no shortcuts. Success is never easy. Steep yourself in the context you want; don't fake it.

I also try to experience as many life contexts as I can. I would be bored out of my mind if all I did was focus on a single context. Perspective on your main context, from within the context of another place, is crucial. Your context isn't the only one that matters, and frankly, it's probably the wrong one, however slightly off it might be. Context shifts allow you to course-correct your perspective from the outside.

There you have it, a pocket full of worthless advice.

Sunday, February 22, 2009

The Mess of Carriers, Handsets, App Stores & Consumers.

I remember the day my mother brought home her first cell phone. It was a true brick, and looking back it was the result of the carriers of the day defining the handset as a product that would run on their network, and having someone build it for them. Well, sadly, not much has changed over the past twenty years. With all the innovation that occurs at the handset level (thank you Nokia, LG, Samsung, Apple, Motorola, etc.), the carriers are still the channel for 99+% of handset distribution, and they don't want the great stuff. Sure, you can purchase the latest and greatest Nokia handset in its unlocked form from an independent party, and hope that it works on your carrier's network, but you'll pay through the nose for it. I've written on this trifecta concept in the past, but wanted to refresh (reiterate) my thinking given recent events.

Carrier subsidies for mass-market handsets dominate the landscape. Consumers are price conscious, and the "free" handset when you sign up for a 2-year contract is just too appealing for most folks. The result is a hamstrung handset market, with watered-down, uninteresting devices making their way into consumers' hands. While some handset manufacturers are able to break this mold from time to time (Apple with the iPhone, in partnership with AT&T), decent handsets are still only a fraction of the total sold. Again, people like "free."

One area that is still shrouded in darkness is the notion of the "app store" for software on these handsets/devices/phones. Apple miraculously created (unfortunately, probably a one-time aberration) a bountiful online, on-device store for iPhone software, yet, despite all the recent announcements, no one else has a useful offering. Pessimistically, I predict there won't be any other credible offering for many years to come. A lesson learned from telcos/ISPs long ago is that they aren't willing to give up their multi-billion-dollar capital infrastructure investments and write them off as a platform for others to benefit from. Cell phone carriers parallel them to a 'T'. The success of a mobile device software marketplace is 99% a function of carriers permitting such a thing to exist; bottom line. I'll believe it when I see it.

For all the money invested in cell network setup, the carriers aren't about to widely allow other businesses to blossom on their backbone without a huge piece of the pie. Their demands are so high that others can't justify standing up businesses (e.g. app stores) bound to the hobbled terms and price points offered by carriers; the carriers always want to come out on top. I can't blame them, but the sad result is that consumers suffer. Carriers are focused on standing up cell towers, and that's about it. They don't have a clue about standing up complete software ecosystems, yet they maintain chokeholds on the handset manufacturers trying so desperately to get their innovations to market.

Until the carriers give up this control and acquiesce to a smaller cut of the pie, the model will remain flawed. Consumers will continue to use old-world technology and devices when amazing new innovations are possible thanks to the handset manufacturers. Consumers will be unable to have a transferable, interesting, productive, quality, and entertaining multimedia experience on their phones (unless they're iPhone users). This seems backwards, but we all know how monopolized markets tend to win. What's the answer? Regulation? Consumer demand for change? I feel like we've at least tried the latter; maybe the iPhone will indeed shake things up beyond Apple and AT&T's little bubble. Wouldn't that be nice.

I dream of a day when I can shop my-favorite-handset-manufacturer.com for killer hardware that runs software I choose, and that will work on a wireless carrier network that is so pushed into the background it's effectively a public utility (e.g. nameless).

Think about it:
  • 2007 was the first year you could buy a device that rode on a wireless carrier's network and allowed you to purchase a wide array of 3rd-party software directly on the device: the Apple iPhone with AT&T.
  • For over a decade the general solution to cross-platform portable software was the IBM Java VM that had to be hand-installed on a tiny slice of phones in the marketplace. As if a general consumer could ever do this.
  • Handset manufacturers have been blowing our minds with incredible devices for over ten years, yet only one in a hundred of their innovations has ever made it to the carrier's showroom floor. Sad.
  • Carrier retail outlets that sell plans on their network, as well as watered-down versions of the aforementioned manufacturers' handsets, are a dime a dozen, and commissioned salespeople litter their floors.
  • Handset manufacturer retail stores are non-existent. The rare prototype store crops up from time to time, but they're just showrooms that can't actually sell the device along with a carrier plan.
  • Much like with the iPod, Apple is no longer primarily selling the iPhone; it's selling the iTunes App Store experience as a gateway drug to many happy app purchases downstream. The iPhone is the razor... the App Store, the razor blades.
I would love nothing more than to see Apple & AT&T's model flourish in and of itself, as well as spark a revolution in the carrier, handset, and software dynamic industry-wide. Having wanted this across the board for a long time now, I'm not holding my breath.

Wednesday, February 18, 2009

Three things that have changed my health

When I hit my 30s (I'm 35 now), my body started changing; again. The last time I morphed so significantly was puberty. I noticed my total invincibility and immortality being called into question after hard bike rides or rock climbs. I thought, "What!?!? What everyone's been saying is true; your body's effectiveness starts to decline with age?" Hangovers became real when I hit 30; prior to that they were a farce as far as I was concerned. I've spent the past few years focusing on a few areas to determine their overall impact on my well-being. A couple of them are obvious, but they're often taken for granted and therefore usually ignored. I thought I'd share my experience with this phase of aging.

Water
As a child I used to get severe headaches. After many medical evaluations, no one could come up with a resolution. On my own, I established a connection between my headaches and water consumption. Too little water yielded headaches; enough water, and I was fine. I've found my overall health is directly connected to my water intake. If I find myself with a cold or the flu, its severity, or very existence, is purely a function of how much, or how little, water I'd been drinking. The lesson here has been simply to drink a few glasses of plain ol' water every day. When I drink a lot of water, I feel great. When I don't, I feel tired and dragged down.

Multi-vitamin
For a few years in a row, I wound up with a chronic, non-productive, dry cough that would last months on end. Again, medical evaluation yielded nothing. I thought I'd try taking a multi-vitamin as just a random attempt at changing my body's chemistry. I've been taking a men's formulated multi-vitamin for a few years now, and the cough abated the very year I started. Furthermore, my body feels much more balanced now.

Exercise
If I don't regularly exercise (as in, a few times per week of something), my life goes to hell. I get sick, I wind up with low energy, I gain weight, I get depressed, I get cranky, I get irrational, I get impatient. The time I invest in exercise is a drop in the bucket compared to the productivity I lose when I don't exercise.

I've found that if I keep these three things in check, my life is great. If I don't, things go south.

Wednesday, February 11, 2009

Streams

I just got a question via email that threw me back to my days at Netscape/Mozilla implementing nsIStreamConverter. The question is "how do I process streamed data?" There are many answers, but I thought I'd provide a fairly generic one here, along with some pseudo code (a nice mix of C, Python, and Javascript, for your viewing pleasure). But first, I want to try to give a high-level explanation of streams, as they can be confusing to folks who are used to dealing exclusively in discrete, bounded data chunks.

It's important to note that all I/O, network and disk alike, is stream based at some level. When you read a file from disk, the bytes are streamed off the disk through a lower-level I/O API, then presented to your application via some "read()" function/method. Many languages do some convenience demarcation for you and allow things like "readline()" so you can easily read a file line by line, broken apart by EOL markers. If you find yourself on the other end of a raw byte "read()" routine (whether off of a socket or a file; a basic file descriptor), then you're dealing with "streams," and you'll need some incarnation of the following code if you're trying to parse the data.

Some "streams" can be consumed and acted upon in small chunks (either byte by byte, or in chunks), or in large chunks. Some streams are binary, and some are text based. Today's web deals in lots of "text" based streams, so the below example follows that lead.

Hopefully you find this useful.

Imagine some data source providing data to your routine; get_data_from_stream() in this example. Then imagine you want to act on the data as it comes in.


// this is the local buffer we'll use to accumulate
// data from the stream
buffer = ''

// when processing a stream, you need to know when you
// have enough data to process. sometimes this is token
// based (a string, or a character), sometimes it is
// after a certain number of bytes (in which case this
// token is irrelevant). in this case, I want to do
// something once I've reached the end of an RSS entry;
// this processor handles "entries".
demarcationToken = "</entry>"

while ( data = get_data_from_stream() ) {
    // take the data from the stream, and append it to
    // the local buffer. this allows us to grow the local
    // buffer until we have enough data to digest
    buffer += data

    // determine whether or not our processing-specific
    // demarcation token exists in the buffer yet; find()
    // returns the token's position, or -1 if it's absent
    while ( (tokenPosition = buffer.find(demarcationToken)) != -1 ) {
        // we found a point in the buffer that will allow
        // us to do some processing. define the chunk as
        // the buffer up to, and including, the token
        chunkEnd = tokenPosition + len(demarcationToken)
        chunk = buffer.subString(0, chunkEnd)

        // now do something with it
        do_something_with_the_chunk(chunk)

        // reset the buffer to everything beyond the chunk
        // we just processed, then rinse and repeat; a
        // single read may have handed us multiple entries
        buffer = buffer.subString(chunkEnd)
    } // end while (entries in buffer)
} // end while (stream has data)
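
If you'd rather start from something runnable, here's the same pattern in actual Python; the stream source and the chunk handler are stand-ins for whatever your application provides:

# stream_chunker.py -- the pseudo code above, in runnable Python.
# reads a text stream and hands complete "<entry>...</entry>"
# chunks to a callback as they arrive.
import sys

def process_stream(stream, handle_chunk, token="</entry>"):
    buffer = ""
    while True:
        data = stream.read(4096)  # raw read(); '' signals EOF
        if not data:
            break
        buffer += data
        # one read may complete several entries; drain them all
        while True:
            position = buffer.find(token)
            if position == -1:
                break
            end = position + len(token)
            handle_chunk(buffer[:end])  # one complete entry
            buffer = buffer[end:]       # keep the remainder

if __name__ == "__main__":
    # quick smoke test with an in-memory "stream"
    try:
        from cStringIO import StringIO  # Python 2.x
    except ImportError:
        from io import StringIO        # Python 3.x
    feed = StringIO("<entry>one</entry><entry>two</entry>")
    process_stream(feed, lambda chunk: sys.stdout.write(chunk + "\n"))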

Tuesday, February 10, 2009

Customer Support and Clouds

You know your business is becoming a commodity when your competitive differentiation becomes "better customer support." If you get to this point, you had better be a significant market-share player in your space; otherwise the end is near. Decent customer support is very expensive, and therefore very few firms, in any space I can think of off the top of my head, bother with it. Instead it becomes a race to the bottom. Consider utility companies: power, water, mobile carriers, local landline phone carriers, etc. Customer service for all of them is cast to the lowest common denominator.

I'm seeing cloud computing services start to differentiate on customer service; whoops. It'll be a matter of a few years before that game's over. SLAs are one thing, but customer service is another. Focus on tangible, contractual SLAs, not "better customer service."

Thursday, February 5, 2009

Testing 1, 2, 3

Gnip received a request to go over how we "test." I hope the following is useful.

While we don't practice TDD (Test-Driven Development) outright, I'd consider us in that vein. We are test heavy, and many of our tests are written before the code itself. My personal background is not test driven, but I'm a convert (thanks to the incredible team we've pulled together). While it takes self-control, the satisfaction of writing a test first, then building code to meet its constraints, feels great when you're done. Your goal, at whatever level the test was written, was clearly defined at the start, and you wrote code to fulfill that need. Ecstasy! Try it yourself.
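
If you've never worked this way, here's the loop in miniature; a toy example in Python's unittest (hypothetical code, not Gnip's):

# test_first.py -- write the test, watch it fail, then write
# just enough code to make it pass.
import unittest

def normalize_username(raw):
    # the code under test, written *after* the test below existed
    return raw.strip().lower()

class NormalizeUsernameTest(unittest.TestCase):
    def test_strips_whitespace_and_lowercases(self):
        self.assertEqual("jud", normalize_username("  Jud \n"))

if __name__ == "__main__":
    unittest.main()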

Our build process includes execution of ~1k tests. You don't check in code if you break any of those tests, and code you check in has new tests to validate itself. If you "break the build," that is not nice, and peer pressure will see to it that you don't do it again.

The range of tests at Gnip is a challenge to categorize and build. Component/unit-level tests are relatively straightforward, ranging from class drivers to data input/output comparisons against expected result sets. Writing tests when much of your system is unpredictable and variable is particularly challenging. Gnip works with so many different services and data formats that getting the tests right for all the scenarios is hard. When we do come up against a new failure case, a test gets written to ensure we don't fail there again.

Given the "real-time" nature of the Gnip core platform, benchmarking the performance of individual components, as well as end-to-end data flow is fundamental. We build "micro-benchmark" tests to vet the performance of proposed implementations.

Testing end-to-end performance is done a variety of ways. We analyze timing information in system logs for introspection. We also run scripts external to the system to test both the latency of our system and that of the Publishers moving data into Gnip. Those tests calculate the deltas between when an activity was created, published, and ultimately consumed out the other end.
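
As a sketch of what those delta calculations look like (the field names and structure here are illustrative stand-ins, not Gnip's actual wire format):

# latency_probe.py -- sketch of an external end-to-end latency
# check; timestamps are seconds since the epoch.
import time

def latency_deltas(activity):
    created = activity["created_at"]      # when the activity happened
    published = activity["published_at"]  # when the Publisher pushed it
    consumed = time.time()                # when we saw it come out
    return {
        "publisher_latency": published - created,
        "system_latency": consumed - published,
        "end_to_end": consumed - created,
    }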

The importance of both internal testing and testing external to the system cannot be overstated. Testing various internal metrics is great for internal decision making and operations; however, you can lose sight of what your customers see if you only view things from your side of the fence. We use http://pingdom.com (custom checks) to drive much of our external monitoring and reporting.

Here's some insight into our testing tool chain:
  • JUnit, EasyMock (Java)
  • HttpUnit, bash/Python scripts/cron (general API pounding)
  • unittest (Python)
  • RSpec (Ruby)

Wednesday, February 4, 2009

Öm

My experience at Gnip has tested me in countless ways: from technical capabilities, leadership, and work-life balance, to trouble sleeping due to an inability to turn my brain off at night as I churn on our challenges. What I've resolved as the most important thing for me to master, in order to make Gnip successful, is my ability to stay on my game despite at least one bummer in the mix at every moment.

Staying positive is the only way to make this game work. When you are in a leadership role at a startup, you are responsible for driving a team with constant impossibility flowing around you.

At a talk he gave last night, Brad Feld summarized this notion succinctly. To paraphrase: "Every single day, there is something going on in my purview that is absolutely abysmal. Deriving energy and passion from it, and, more importantly, from the overwhelming number of positive events, people, and projects that comprise the day, is the only way to make it work." It's a very zen-like perspective, and one I'm refining and fostering for myself.

If you need day-to-day balance in time, focus, and energy, startups likely aren't for you. You have to find consistency in chaos, clarity in mud, and calm in storms.

Peace.

Saturday, January 31, 2009

Gnip: What's worked well, and what hasn't.

Gnip's almost a year old. That, combined with a great week (a stellar board meeting, technical clarity in specific areas, business/requirements clarity in specific areas, and an impending big release), motivated this blog post.

We have built an amazing backplane that's driving a true marketplace for modern data, while solving fundamental data access issues on the network today. We have a growing revenue curve with tons of blue sky ahead. We're turning our back on verticals that would kill us, and embracing the sweet spots. It's always a difficult road ahead, but just like 2008, 2009 is going to be a killer year for Gnip. I'm so stoked to be a part of this.

I wanted to take a quick look back on 2008 and outline the positives and negatives that stick out in my mind; maybe you can make use of them in your startup.

On the plus side we...
  • Never compromised on hires, despite delivery schedule impact.
  • Hired experience, breadth and depth. While expensive, it sure beats babysitting and training kids when you have to be executing a mile a minute.
  • Leveraged "the cloud" for ultimate hosting environment flexibility.
  • Solved real problems in the industry. One of our investors, First Round Capital, sums this approach up well on their website: "Companies must provide a unique solution to an existing urgent need. We don't invest in companies which try to change consumer behavior or create a new consumer need."
  • Made pragmatic decisions to cut features that were weighing too heavily on operations, despite a vocal community suggesting we do otherwise. Don't drown yourself.
  • Let the business definition, and customers, evolve naturally. Steadfast tunnel vision would have left us in the dust.
On the minus side we...
  • Ran too long with too big a feedback loop between business, technology, and customers. From software development to business evolution, small, fast, tight feedback loops are fundamental. Come hell or high water, make certain any open questions get nailed down immediately.
  • Didn't "load test" all possible scenarios (difficult to predict no matter how you slice it). Until your business model is crystal clear, pressure every edge of your software so you know where the sweetspots are, and where the fragility is.
  • "Pragmatically" pair-programmed for too long. Wishy-washy process led to confusion, lack of clarity, and inefficiency. Once we drew the line in the sand and declared to ourselves that we were _a pair programming shop_, no questions asked, the seas parted and clarity became crystal around the process.

Friday, January 9, 2009

Dear Federal Government: Invest in Schools

When you're scrambling to figure out how to inject nearly a trillion dollars into an ailing economy, please consider some of the more obvious "trickle down" venues for spending (yes I'm fully aware of who coined "trickle down" and that it was squarely focused on private markets, but I like the irony of using it here). I appreciate the thought of converting government buildings to be more energy efficient, but when you're having trouble spending a trillion dollars because so many of the potential programs don't pass the "pork barrel spending" test, consider our public education system. Any American would appreciate any amount of money going into schools, and I suspect you could easily burn a few hundred billion dollars to set the stage for a highly educated USA in coming generations.

I'm lost when I read in the New York Times that the Obama crew is struggling to find programs that add up to $800B, yet there's never been any mention of public education as an outlet. Think of the jobs created by building new schools, renovating old ones, employing more teachers, finally getting teacher pay to a livable level (at least!), providing modern learning tools and programs, modernizing the public school bus transportation program in an energy-efficient manner, and on and on. Open your eyes folks, or explain to me why the schools aren't a good place to dump untold sums of money; "privatizing schools" doesn't count as an answer, BTW.

Thursday, January 8, 2009

Estimating Code (Stories)

I've worked in several "Agile" development teams over the years, from developer to manager. Estimates are always one of the most challenging things in software; both development's actual estimates (making them is very hard) and management's interpretation/understanding of them.

Some advice to developers:
Never utter the words "that is easy," because if you do, rest assured that everyone in the room has a radically different definition of "easy" (which is fluid in and of itself), and 3/4s of the room will expect the work to be done before the conversation is over. When I hear the words "that's a one-liner," I cringe. That "one-liner" inevitably takes hours of work. Try to consider the full spectrum of your process. When you make that "one-liner" change, what are the ramifications to your test infrastructure, for example? Whatever environment or team setting you are in, you need to understand it so you are able to fully estimate a Story.

Too often we developers have tunnel vision into our specific area, and we lose sight of the bigger picture. Story/work estimation is part of that bigger picture, and we need to be aware of it when we're estimating things.

Some advice to managers ("managers," product managers, project managers, whatever):
If you ever hear the words "that is easy" or "that's a one-liner," take them with a grain of salt. Gently prod to vet the surrounding estimates. The coding itself may indeed be an "easy" "one-liner," but the ramifications to tests that have been written, or to QA, operations/production, deployment, or system administration, may be huge.

Everyone on the team needs to have an understanding of their process and software life cycle in order to have a sense of what code is going to do downstream. There are plenty of "one-liners" that can bring your software to its knees for weeks.

Software construction doesn't work well when the "customer" (Product Management, CEO, contracting client, yourself, whomever) is disconnected from the process. That disconnect can be organizational, emotional, intellectual, or otherwise. If you're building software and you feel disconnected from your customer, fix it. If you're having software built for you and you feel disconnected from development, fix it.

Sunday, January 4, 2009

I can't believe I have to say this...

I assume everyone around me has the same thoughts, background, and experiences as me buzzing around in their heads. I'm always amazed when I discover this isn't true. Note the sarcasm. I just noticed something disturbing in a demographic I wouldn't have expected to see it in: Twitter. Twitter is a niche marketplace with tech-savvy users (or at least it used to be). However, evidence of Twitter going mainstream just hit me in the face. Twitter just blasted a front-page service warning to users about a phishing scam some of their users have seen, and fallen victim to. I was taken back to my days at AOL, when corporate communications felt it prudent to internally educate employees about phishing scams. I knew I was in trouble when I saw one of the Internet's supposed powerhouses having to educate its employees about such a topic.

I can't believe I have to say this, but... dear latest web generation: understand the risks around you, as well as how to avoid falling victim to scams, while reading email and browsing the web.

Here are my general safety/security rules of thumb; they apply to anything running in a browser (webmail, shopping, browsing, whatever). I should disclose that my world view is confined to Mozilla-derived technologies such as Firefox and Thunderbird, though there are generally equivalents to these tips/techniques for IE.
  1. Pay attention. There's no super-secret amazing technology that lets a wrongdoer magically steal your information. Computers, the Internet, and email are surprisingly secure. 99 times out of 100, if someone lost information, it was because they weren't paying attention: they clicked a link they weren't supposed to and exposed themselves to some form of phishing or a bug exploit, or they downloaded software they shouldn't have.
  2. Use a decent password. Something like 2/3rds of all MySpace account passwords are the word "password." You can guess the quality of the remaining 1/3rd.
  3. Clicking on links, even "evil" ones, is OK; it's what you do once you've arrived at the final destination that can get you into trouble. Every now and then someone exploits a browser security hole and gets you to visit the web page that does the exploiting, but those are very rare. Related to #1: pay attention on every page you're on.
  4. Never give a web page information if something (e.g. an email from your "bank") directed you to the page to do so. The old-world equivalent of this rule is "never give your social security number to someone who calls you on the phone." There is never a legitimate email or web page that asks you for personal information of any kind if it initiated the exchange; 99 times out of 100 it's a scam. Giving web pages your information is perfectly secure, as long as you initiated the exchange. For example, if you get an email suggesting your account information is out of date, guess what: it's not; someone's trying to scam you. If your account information really is out of date, you'll find out on your own terms; only update account information under those circumstances. Legitimate services' privacy policies and terms of service clarify that they will never initiate a request for your personal information; that's all you need to know.
  5. Always notice the fully qualified domain name of a web page you're about to enter information into. If it's not where you think you should be entering information, don't. Just pay attention to the domain name; 99 times out of 100, that's enough. For example, http://paypal.services.com is bogus. http://services.paypal.com is valid.
  6. Always ignore graphics and text in a web page that say things like "this page is secure." The only thing that guarantees encrypted transmission of your data is an https URL (e.g. https://your-bank.com). Be aware of your browser's encryption/security UI elements and features, and look for those. I have never seen a browser exploit that fooled the base-level SSL certificate UI; trust your browser.
  7. Hover over a link to determine where clicking it will take you. If the hovered link isn't where you want to go, don't bother clicking on it. I suspect there are some cute DOM tricks to obfuscate where clicking a link will take you, so to be really sure, select the link text (note, that's different than clicking it), right-click it, then select "View Selection Source" from the context menu. From there, you'll see the actual href that will be navigated to upon click.
  8. If you're generally suspicious/interested, install Live HTTP Headers for Firefox-based browsers, or HttpWatch for IE, and watch/block the traffic you don't want. Firefox 3.1 (finally!) supports native HTTP header interrogation as well: just view the page info (Tools->Page Info) and click the "Headers" tab.
Good luck.

Saturday, January 3, 2009

HOW-TO: Force incompatible Firefox Add-ons to Install

If you ride Firefox versions harder and faster than add-on developers upgrade their add-ons for compatibility, this post is for you.

Firefox add-ons include their Firefox version compatibility within their .xpi install file. Often developers will put a "maxVersion" field in their add-on to mitigate potential compatibility issues between their add-on and future versions of Firefox. 99% of the time, however, there aren't any compatibility issues across minor (or even major) version upgrades of Firefox, yet you're prevented from installing your favorite add-on because the add-on thinks it's incompatible. Here's how you can force the add-on to install. NOTE: your mileage may vary; you may indeed be installing an add-on that is not compatible with your up-version of Firefox; use at your own risk.
  1. Download the .xpi for the add-on you want, directly to your hard-drive, bypassing the default installation behavior of Firefox. To do this, find the .xpi file for the add-on, generally through a Google search. addons.mozilla.org tries to be smart and prevent you from even downloading add-ons that aren't compatible with your browser, so you have to work around this and find the direct link to the .xpi. Once you find a link to the .xpi, right-click it and select "Save link as..."
  2. Crack open the .xpi file. .xpi files are simple zip compressed archives, so you can unzip them like any other .zip file. "unzip your-file.xpi". That will un-archive/un-compress the contents of the .xpi file. You might want to do this in a dedicated directory for cleanliness' sake.
  3. Open "install.rdf" (that was extracted in the previous step) in a text editor, modify the "maxVersion" field and save the file. Make sure this field is at least at the level of your version of Firefox. Use '*' accordingly to indicate any version number.
  4. Re-generate the .xpi file with the updated install.rdf file. You do this by running the "zip" command, from the directory you unzipped into, like so: "zip -f your-file.xpi". The -f flag freshens the archive with your updated install.rdf.
  5. Install your-file.xpi by opening it in the browser via File->Open. Follow the install steps Firefox presents.
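
If you find yourself doing this often, steps 2-4 are easy to script. Here's a rough sketch in Python; the file name and maxVersion value are placeholders, and note that some add-ons express maxVersion as an RDF attribute rather than an element, which this doesn't handle:

# bump_maxversion.py -- rewrite an add-on's maxVersion in place,
# without manually unzipping/re-zipping the .xpi.
import re
import shutil
import zipfile

XPI = "your-file.xpi"   # the add-on downloaded in step 1
NEW_MAX = "3.*"         # whatever your Firefox calls for

shutil.copy(XPI, XPI + ".bak")  # keep the original, just in case

src = zipfile.ZipFile(XPI + ".bak", "r")
dst = zipfile.ZipFile(XPI, "w", zipfile.ZIP_DEFLATED)
for item in src.infolist():
    data = src.read(item.filename)
    if item.filename == "install.rdf":
        # swap the contents of <em:maxVersion>...</em:maxVersion>
        text = data.decode("utf-8")
        text = re.sub(r"(<em:maxVersion>)[^<]*(</em:maxVersion>)",
                      r"\g<1>" + NEW_MAX + r"\g<2>", text)
        data = text.encode("utf-8")
    dst.writestr(item, data)
src.close()
dst.close()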