Wednesday, December 26, 2018

Machine Play


The only really cool software developments of the past several years have been the rise of the machines in the form of Machine Learning, and serverless/microservice/Kubernetes clouds. The former is a complete framework game changer for humans, and the latter just a really cool evolution of a concept that's been lingering since the dawn of computing.

Amazon's foray into the computer vision market, a while ago, was DeepLens. I thought it was odd that they combined both the software and hardware (camera; note, the camera is only suitable for indoor use) ends of the product. Why in the heck would you include a camera with a product like this!?! Several months later the Goog released access to their genericized ML backend; Google Cloud Vision, fronted by AutoML, was cast unto thee.

Well, after using Google's Cloud Vision and a 3rd party outdoor webcam to cobble together a system that SMSes me when one of the deck furniture covers gets blown off in the wind (project breakdown below), I now see exactly why Amazon bundled the two. I only wish they'd provide an outdoor-grade camera version of the offering. It turns out, unsurprisingly, that the pics you feed the backend are such an integral part of the training/prediction process that tightly coupling the two is very important (well done, Amazon Product Managers, who realized this early). I believe Amazon also does onboard model prediction, which sounds cool, but I'm not seeing the necessity of this feature. I think the Goog got it right by just offloading the evaluation bit to the network (their modus operandi, obviously) via URL image data retrieval. Sure, there are applications for on-board execution, but they seem more specialized than most use cases require.

The Project

I cover the outdoor furniture on my rooftop deck with covers to protect them from the elements (Colorado is pretty tough on the weather front). It's often really windy up there and the covers regularly get blown off. The problem is that I don't get topside as often as I'd like, and the furniture could be left uncovered for a while, exposing it to the damaging sun. So, I wanted to get a notification when they were blown off so I could go re-cover them. Traditional image parsing/recognition solutions would be horribly unreliable and hard to use for this; enter ML. I wanted an outdoor webcam to take pics of the deck and have Google Cloud Vision determine with high accuracy (90%+) whether or not a cover had been blown off, and text me if so.

The Pieces

  • Input
    • Amcrest outdoor IP Camera - mounted outside and aimed at the deck furniture. The camera hardware is great; the firmware/on-board HTTP server/app is awful and stuck in the late 1990's. If anyone has experience with a better outdoor IP webcam (no, Nest doesn't natively work), please let me know.
  • Processing Nodes
    • WAN-based Droplet - runs the main app/driver.
    • LAN-based Pi server - runs the interim image staging (FTP/HTTP) pieces.
  • Software
    • My app/driver - a Python script (code is here) that cron runs every ten minutes on the droplet.
    • FTP server - vsftpd. The webcam's firmware design is as old as dirt and only talks FTP for snapshot images.
    • Bash script - cron runs it every ten minutes to copy/rename the latest image capture to the HTTP server so the main app can access it.
    • Google Cloud Vision - used to predict whether or not an image of my deck furniture has any of the covers blown off of it.
  • Output
    • Twilio API - used to send me a SMS message when a cover has been blown off.
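The driver's core loop is simple enough to sketch. This is a minimal illustration, not the actual script: the label name (`cover_blown_off`), the 0.90 threshold, and the injected `fetch_predictions`/`send_sms` callables are assumptions standing in for the real Cloud Vision prediction and Twilio calls.

```python
# Hypothetical sketch of one cron tick of the cover-watcher. The real
# script fetches the latest snapshot from the HTTP server, sends it to
# Google Cloud Vision (AutoML) for prediction, and texts via Twilio.

CONFIDENCE_THRESHOLD = 0.90  # the 90%+ accuracy bar mentioned above

def should_alert(predictions, threshold=CONFIDENCE_THRESHOLD):
    """predictions: list of (label, score) pairs from the model.

    True when the model is confident a cover has blown off."""
    for label, score in predictions:
        if label == "cover_blown_off" and score >= threshold:
            return True
    return False

def run_check(fetch_predictions, send_sms):
    """One cron tick: predict on the latest snapshot, text if needed.

    fetch_predictions/send_sms are injected so the vendor SDK calls
    (AutoML prediction, Twilio message creation) stay swappable."""
    predictions = fetch_predictions()
    if should_alert(predictions):
        send_sms("A deck furniture cover has blown off - go re-cover it!")
        return True
    return False
```

Keeping the decision logic pure (no SDK imports in it) also makes the ten-minute cron job trivial to test without hitting either vendor API.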


The process of building a model on someone else's engine is unbelievably simple. If you can imagine it... the computer can model it and predict it.

Labeling image data is a major pain and very time consuming. While model prediction/execution is super fast after you've trained it, the labeling process required to train is horribly cumbersome. Looks like someone's entered the market to start doing the hard work for us; I'll give CloudFactory a try next time I need to build a model (which is pretty soon actually given that my cover configuration has changed already).

We are going to accelerate from zero to sixty very quickly with ML-backed image apps. I imagine providers offering integration solutions with existing webcam setups that let consumers easily train a model for whatever visual scenario they want to be notified about (cat is out of food, plant is lacking water, garage door is open, bike is unlocked, on and on and on). Of course, you can apply all of this logic to audio as well. The future is going to be cool!

What Could Be Better

  • I should collapse the file/FTP server and the app server onto either the WAN based Droplet, or the LAN based Pi server.
  • The webcam. While the hardware is great, the software on the camera only supports SMB/FTP for snapshot storage. If the camera supported snapshots via HTTP I could forgo this interim image staging framework entirely. There might be joy in this forum post... I'll need to dig in and see.
  • I need to format the SMS message to be Apple Watch form-factor friendly.
  • I need to reap/cull images after some duration.
  • As far as I can tell Google Cloud Vision data models can't be augmented *after* they've been trained. I'd like to add revised image data without having to rebuild/retrain the entire model. This seems like a pretty big bug. All of the image ML prediction scenarios I can think of are going to trend toward wanting to augment/add new image data over time without having to maintain the original seed model data.

Friday, December 21, 2018

Taking Charge Of My Attention

Over the past few weeks I’ve experimented with leaving my Phone at home when I head out for the day. The release of iOS Screen Time shocked me: having a look at my raw usage data on how much I was using my devices/apps pushed me into trying some big changes, like leaving my Phone behind.

No surprise, I haven’t really missed my phone. What made the shift possible was that my Watch lets me do the communication-in-a-pinch and payment stuff I need during the day when I’m not near an iPad or laptop.

That said, I wish I had my phone with me when I want to take a picture of something, and when I want to use a home automation app. That’s pretty much it though. If I can keep this up I’m going to look into a point-n-shoot camera I can tote around in lieu of the Phone.

As this experiment evolved, my awareness (experiential, as well as stats from Screen Time) surrounding Notifications heightened. Possibly more poisonous to society than screen time itself is the interrupt-driven life we lead thanks to Notifications.

I vividly remember when Apple released Notifications on iOS. I was enamored and immediately foresaw a future wherein asynchronous, rich notifications would allow deep-linking into our apps. Well, we’re pretty much there and it’s a nightmare come true (just go look at your Settings->Screen Time->Notifications). Remember when you realized you were a Pavlovian dog hitting “get mail” every time your mail client dinged at you about new mail? Transfer that behavior to dozens of apps on your mobile device. Dopamine drip. Drip. Drip.

I’m slowly shutting Notifications off completely (Watch, iOS, and OSX) in most of my apps as I realize that, save for just a few cases, I really don’t need to know when some app has something to say. When I want to know something, I’ll go check it out on my own; on my time, not the time of someone who is simply trying to “drive engagement.” Needless to say, the meaning of “breaking news” left us long ago, so I don’t need those notifications either.

One elusive app has been the Phone app, which rings with spam all day long. I installed an app called Hiya which does a great job blocking the nonsense.

Unfortunately, neither iOS nor OSX supports a system-wide Notification disable. You can kind of hack around it with extended “Do Not Disturb” schedules, but you wind up doing damage in other areas that way.

In the communications app category (iMessage... email) I’ve realized there’s a missing level of Notification behavior that I’d like to see. Something like “Response Notifications.” As A User, I Want To know when someone has responded to communications I have initiated, In Order To receive notifications I care most about. If I initiate an exchange, I want to be notified when others respond. If someone initiates an exchange with me, I’ll get to it on my time.
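The rule is simple enough to express in a few lines. A minimal sketch of the hypothetical "Response Notifications" behavior, assuming an invented thread shape (a list of (sender, text) tuples, oldest first); no real messaging API exposes threads exactly this way:

```python
# Sketch: notify only when the latest message is a response on a
# thread the user initiated. The thread representation is invented
# purely for illustration.

def should_notify(thread, my_id):
    """thread: list of (sender_id, text) tuples, oldest first.

    Notify only if I started the thread and the newest message is
    from someone else (i.e., it's a response to me). Threads started
    by others never notify; I'll get to those on my own time."""
    if not thread:
        return False
    first_sender, _ = thread[0]
    last_sender, _ = thread[-1]
    return first_sender == my_id and last_sender != my_id
```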

I can hear the people who have been saying “do you really need to check your mail or text that person as you walk down the street” for a decade now, ringing in my ears. Guess what, I don’t.

Digital life is messy.

Friday, October 26, 2018

Honey in Boulder

Our Boulder office foyer.

Earlier this year I joined Honey in order to help them scale product and development beyond their headquarters in Los Angeles. The company is growing rapidly and in a look toward the future, it wanted to distribute and diversify its ability to build software. I’d been involved in the company in advisory and investor capacities over the years, and their desire to scale beyond Los Angeles coincided with mine to get back into building software sustainably in a full-time capacity; right place, right time.

Honey’s in a rare position. They have something every company wants: millions and millions of users and the beginning of a hockey stick. I’ve seen, and have participated in, incredible growth, but prior to Honey, I’d only ever read about growth (existing, and projected) like this. I’ve never actually sat in the saddle of a hockey stick; it’s trippy.

When revenue maps to growth like that, a company gets to do some impressive things. Chiefly, they are able to place bets and take risks that others cannot. They’re able to explore new product areas and invest in tangential industries that are generally considered new/different companies altogether, and therefore rarely get to be pursued by existing team members. Honey’s able to leverage its wealth of activity and usage into new fields. While the fundamentals have to be in place for that to even be an option, it takes strong leadership to effectively create and support these kinds of efforts. It’s not easy running a large business and firing up new ones alongside. Yet, here we are. I’m beyond impressed with Honey’s senior leadership team; careful, calculated risk.

Over the past several months the start of our team in Boulder has come together. New people with new backgrounds and experiences coming together to sustainably build software. We’re working on backend systems stuff in a new product area. We’re blending new ways of doing with an existing core system built on top of Google’s Cloud Platform (which is new for me... I come from AWS land). Aspects of the system move data at thousands-of-transactions-per-second rates, so we have our hands full. But that, coupled with greenfield product, makes it fun and engaging and challenging. Node, Scala, microservices, PubSub, and the GCP toolchain. Our VP of Engineering, Sam Aronoff, has some posts up on our tech blog here.

If you’re interested in helping the world be more fair, join us. If you’re interested in working on big data challenges, join us. If you’re interested in supporting an internet that has hundreds of thousands of independent retailers/merchants (instead of just one), join us. Here are the roles we're hiring for.

Tuesday, August 7, 2018

Navigation & Route Hacking

While the AI/Machine-Learning battle is likely over in the long run, I'm surprised municipalities haven't figured out real-world hacks to game all the mapping/routing apps away from the frustrated neighborhoods getting clogged with traffic. Waze/Google Maps/Apple Maps/etc. all rely on public databases to describe the roads upon which they build routes and maps. Those databases indicate things like speed bumps, traffic circles, crosswalks, and so on. The routing algorithms leverage that information in their "fastest route" and "shortest route" calculations, and generally avoid such obstacles when determining routes. If neighborhoods vote in a traffic circle or a speed bump or two, they can knock their streets out of the routing algorithm's choices and push traffic back to the roads meant for heavier load and higher speeds. Not only will automated systems adapt away from the obstacles, but crowd-sourced systems will likely trend away from them as well.
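To make the mechanism concrete, here's a toy sketch (not any routing vendor's actual code) of how a per-feature time penalty knocks a neighborhood cut-through out of the "fastest route." The 20-second bump penalty and the sample road network are assumed figures for illustration:

```python
# Toy routing model: road attributes (speed bumps) become time
# penalties on graph edges, then a shortest-path search runs over
# total travel time. Penalty value and graph are illustrative.
import heapq

SPEED_BUMP_PENALTY = 20  # assumed seconds added per traffic-calming feature

def fastest_route(graph, start, goal):
    """Dijkstra over travel time. graph: {node: [(next, seconds, bumps)]}"""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, seconds, bumps in graph.get(node, []):
            if nxt not in seen:
                total = cost + seconds + bumps * SPEED_BUMP_PENALTY
                heapq.heappush(queue, (total, nxt, path + [nxt]))
    return float("inf"), []

# Arterial route: 90s, no bumps. Neighborhood cut-through: 70s raw,
# but two new speed bumps push its effective cost past the arterial.
roads = {
    "home": [("arterial", 90, 0), ("neighborhood", 70, 2)],
    "arterial": [("office", 0, 0)],
    "neighborhood": [("office", 0, 0)],
}
```

With the bumps in place the arterial wins (90s vs. 70 + 2×20 = 110s); remove them and the cut-through is the fastest route again, which is exactly the lever a neighborhood vote pulls.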

Of course, in the long run, the system will optimize the road network at large, and all the crevices will be filled in the end, but, it's a short term hack few munis appear to be leveraging. Los Angeles might be cluing in though.

My involvement with UrBike over the past year, and Carmera over the past few, has opened my mind to just how broken the U.S. is in terms of personally owned automobiles. We've spent too many of our resources building roads and parking places for hunks of metal to sit idle for 95% of their existence.

Go get on a bike (or a skateboard, or a scooter, or _something_ other than a car).

Sunday, May 27, 2018

My Brush With Technology In The Classroom At Scale

The 2017/2018 Boulder Valley School District (BVSD) school year has come to a close, and with it the DTAC group is wrapping up for the year. I will not be pursuing a role on the committee this next cycle (though I encourage you to do so to get a sense of what’s happening). Instead I will be putting my resources into district board member lobbying and campaigns. I am lobbying for district policies that ban cell phones (personal communication devices) in middle schools on down, and campaigning for prospective board members who have an understanding of the impact screens are having on our children’s growing minds.

DTAC is a well organized and executed committee and I applaud our district CIO, Andrew Moore, and his team for actively engaging the community; thank you. Unfortunately, his team has been given an impossible task. BVSD is attempting to modernize itself with hundreds of millions of dollars in bond money, much of which is being spent to support its “1:Web” initiative. Our CIO has been tasked with bringing our schools “online” and figuring out how technology gets purchased and deployed in the classroom. Unfortunately, there’s a lot of cart being put in front of some really big, strong, fast moving horses. The Board has NOT provided reasonable guidance or direction at the policy level, and the CIO’s office is left trying to interpret direction and meaning, field extremely difficult questions from parents and students, and manage the deluge of technology vendors who have shown up at the bond money trough to feast and sell expensive products to a district that lacks a cohesive, safe, technical strategy to rollout.

There are a few massive challenges we, as a society, need to come to terms with before public dollars should be spent trying to rollout “technology” in classrooms.

Personal Connected Devices

In a nutshell, these are today’s “cell phones” (iOS/Android devices with SIM cards in them). Cell phones are destroying in-person social interactions at our schools, and ruining classroom participation dynamics. Teachers have become cell phone baby-sitters dealing with an incredible new level of distraction in the classroom, instead of being... teachers. To further complicate things, our kiddos use their cell phones as WiFi hotspots and connect their school-provided chromebooks to them to circumvent the expensive IP network filtering we deploy on school networks to protect our children from bad online content. To stop the hemorrhaging of effective social interactions, friendship bonding, social learning, and _teaching_ in the classroom, I recommend a zero-tolerance ban of personal connected devices in our middle and elementary schools, and that school-provided chromebooks be locked down to only connect to whitelisted WiFi networks. Yup, you heard me. Ask a teacher about their experience with cell phones in their classroom, and go read the book “Glow Kids” or watch the movie “Screenagers.”

Digital Curriculum

The current wave/generation of staff/educators do not know what “digital curriculum” looks like. A few of them do, but the vast majority do not. They do not know what “digital citizenship” looks or feels like, nor do they have a cohesive understanding of how, and when, to do certain things digitally. It’s been a disturbing several years as a parent watching my kiddos manifest science projects in a slide deck. Machines have a place in education, but we haven’t figured it out yet, and we’re losing generations of children to broken programming/curriculum. I recommend significant research into what new-age teaching and curriculum should look like, and then training/developing teachers to effectively apply it.

Addiction Services

When a child shows up at school grappling with a drug addiction, we lend them a hand. Unfortunately, we do no such thing for the droves of kids addicted to their screens. School counselors are often addicted themselves, so, we’re lost on an entirely new level. The world has not figured out how to handle/manage personal connected devices/screens, and we’re educating generations of kiddos in this environment. I recommend effective funding/staffing for counseling services to help our children navigate the new addiction.

In general, I believe we need to slow down the introduction of technology in our classrooms, and roll it out only when we understand it better. I’m bummed my kiddos are going through school amidst such a massive experiment.

Tuesday, April 24, 2018

The Connectivity Fallacy & $(window).load(function())

While the Network as a whole is borderline miraculous, the reality of its connection quality is far from it. Connectivity sucks, even in private-industry-led, first-world connection environments. Fiber backhauls are generally pretty good, but "last-mile" services suck. The issue is usually latency, but bandwidth throughput is also generally inconsistent and poor (and about to get a lot worse if/when Net Neutrality dies). I'm pointing the finger at cell/mobile carriers mostly, but also at cable providers. Satellite carriers don't count because the technology just plain sucks (round-trip latencies between the ground and the geostationary sats used for consumer service are too high to be generally useful); it's cute, but it sucks.

We all stare at our screens waiting for content to load. Whether that's an image upload in an iMessage exchange, an Instagram image load, mail coming in, or web pages loading, we spend way too much time looking at blank screens or spinning graphics indicating "progress." I'd like client app providers to fire OS-level notification events that indicate network operations are complete. That way, I could open a web page, put my phone back in my pocket while it loads, then pull it out once it's done and the notification fires.
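A sketch of that wish, assuming nothing about any platform's real notification API: kick off the network operation in the background, and fire a completion callback (standing in for the OS-level notification) when it finishes, instead of painting a spinner.

```python
# Sketch of "notify me when the network op is done." The notify
# callback is a stand-in for posting a real OS notification, and
# do_fetch represents whatever network operation was kicked off.
import threading

def fetch_with_notification(do_fetch, notify):
    """Run do_fetch() on a background thread; call notify(result) when done."""
    def worker():
        result = do_fetch()
        notify(result)  # in a real app: post an OS-level notification here
    t = threading.Thread(target=worker)
    t.start()
    return t  # caller can join, or just wait for the notification
```

The point isn't the threading; it's the contract: the app surfaces a single "it's done" event so the user never has to poll a blank screen.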