Tuesday, January 22, 2019

Finally Deleted Uber

This was a very long breakup. I need some space to write it all down.

The Last Straw

Last night I deleted Uber from my phone. After waiting forty minutes at LAX, I watched my car drive by as the app told me “ride cancelled.” In the end, I’ve simply had too many “ride cancelled” experiences from drivers. Last night’s cancellation was of a different, inexplicable kind, but the usual pattern goes like this: after accepting your ride at a major airport, the driver calls you with a feigned “I’m on my way” (when in fact they are not; they’re sitting in a holding lot trying to determine whether you’re worth pulling out for), only to then solicit your destination. Technically they’re not allowed to ask this question due to regulation against location discrimination, but they do, and you’re left with a choice: either disclose the destination in hopes that it’s convenient enough that they’ll keep the ride, or withhold it in hopes that they don’t cancel. Obviously they receive the destination once you’re in the car, but, for them, that’s often a ride risk too great to accept. Over the past few years this has become the norm at major US airports, and when they don’t like what they hear, the cancellation can lead to non-trivial delays while you regroup and try to find another car (and go through the dance again), not to mention the awful customer experience the whole thing yields (yeah, I want to handle a phone call while I’m darting off the plane in a sea of people, scrambling to use the restroom and queue for escalators and trains, often with kiddos in tow).

I’d been hanging on to the Uber app for a handful of reasons, but they’ve, in some cases literally, fallen away. 

Governance

I’d wanted to remain loyal to and supportive of people I know who used to work at the company. Watching friends fight broken systems of regulation, governance, and control that were no longer serving the populace as well as they could was vicariously thrilling. It was like I was there fighting alongside them!

This was really why Uber started. If you never experienced the medallion-based taxi-cab system in San Francisco pre-Uber, consider yourself lucky. The City distributed precious few medallions for a major metropolitan area, and you could easily wait thirty minutes for a cab to appear (if it appeared at all); I always thought it was strange how few cabs there were in SF. When a cab did show up, the car was falling apart, and the driver could barely navigate the city to your destination. It was a total mess, and Uber wanted to fix that.

Early on, Uber was obviously great. Nice cars (private “black car” drivers with sweet cars and keen navigation knowledge) would pick you up at the tap of a screen. Now “black car” quality is often worse than the old busted-up cabs, and the drivers, relying on a mapping app to get to the destination, are generally more clueless than the original taxi-cab drivers. While I’m on the topic, a quick run-down of the service tiers:

- UberX: general population drivers/cars. Drivers are usually really nice, but they’re not professional drivers... they’re just like you and me, driving our cars around, only with passengers.
- Select: general population drivers, with cars that are somehow a step up from baseline. This is my go-to category. You usually get a driver who cares about his car (quality/maintenance are generally much better). Their navigation knowledge is the same as UberX, though.
- Black Car: these are drivers who generally drive “for a living.” The cars are crap (run-down), and the vast majority of the time you get a much better car if you choose Select (when available in your area). You often get SUVs when you pick this category, and I hate riding around in big trucks. I avoid this category entirely. Nothing good comes of it. Furthermore, in major US cities, the drivers are clearly stuck in some indentured-servitude relationship with the car owner, and it just has a bad feel.
- Lux: LA is the only city that gets Lux, and it is awesome! Killer cars (think Benz S-Class (no Cs or Es allowed)... Porsche Panamera... 7-Series BMWs), and the drivers are true pros. If they use an app to navigate at all, it’s Waze for crowdsourced feedback. These are drivers who want to drive, love to drive, and know how to drive. Not cheap, but a great experience.

Uber blazed the trail for challenging regulation. While they’ve made room for themselves and other ride-share services, they’ve also made a lot of enemies, and that hurts the customer. As a frequent traveler, those enemies often come in the form of the unique regulatory bodies that manage traffic/parking for international airports. In most major US airports, ride-share services have been regulated exactly as taxi-cab services have. They have their own pick-up/drop-off zones just like taxis. There’s just one problem: airports are nothing more than a series of queues, and if you’ve ever worked with queuing theory, the worst thing you can do when servicing a queue is to randomize queue extraction, which is exactly what ride-share services do. To understand what I’m saying, witness the “ride share pick-up” zone melee at LAX: clients and drivers desperately trying to find each other amidst traffic chaos. It’s a mess. LGA is doing some interesting things to try to fix this. True car queues (which taxi cabs have always used) are much more efficient at airports. I’m going to try taxis at airports again for a while to see how the experience compares. After all, trying to negotiate a ride with a driver that’s likely just going to cancel on me certainly isn’t working.

International Support

Another reason I hung on to Uber was international support. Several years ago Uber started making inroads in other countries I’d travel to, and it felt like it was going to be super convenient to cut through the ride-negotiation and navigation language barrier by putting my destination in an app. However, ride-sharing hasn’t been as well received in other countries as it has been in the U.S. (I suspect because other countries generally managed taxi-cab regulation better than we did, and the populace was just fine with how things were working). Often there is literal civil unrest amongst the taxi-cab drivers and their unions, or governments have regulated such a small footprint for Uber that, as a consumer, the product experience is pretty much non-existent. If you’ve ever tried to communicate to a driver in an unmarked car, in a foreign language, where you’re standing and where they’re parked, you know how painful the experience can be. Uber is just not as accepted or nearly as reliable abroad as it is in the U.S., so years ago I fell back to using local taxi services, or private services, instead.

A sampling of experiences:
- Barcelona, Spain. Uber is allowed to service the city, but there are only a few cars and they’re never available. Unusable.
- Tokyo, Japan. Cell coverage in the warrens of the city is generally too poor to reliably use an app for transportation. Besides, the taxi-cab system here is so damn good, there’s just zero need for an app-based solution. Why bother?
- Paris, France. Somewhat reasonable, though there’s such a backlash against ride-sharing that I always feel like I get a scowl from the doorman at the hotel when I return; not a good feeling. Peer pressure.
- Quintana Roo, Mexico. Yeah, right. The local taxi union will practically run an Uber driver off the road. Too dangerous.

In the end, other countries generally have their public transportation set up in a way that just works 100x better than the U.S., so it’s easier just to use that.

NYC

I spend a fair amount of time in New York. I gave up on Uber there long ago due to driver/car quality issues, and have used a private car/driver service there since (the taxi-cab cars are poorly maintained and generally unclean). I’ve also developed sympathy for taxi-cab medallion owners (and downstream driver relationships) in NY. I do believe the NYC taxi commission failed to uphold its agreement with medallion owners, and they truly did get screwed (a simple look at medallion pricing graphs makes the point). The municipality clearly stated the medallions provided exclusive rights for ride-hailing, and yet ride-share services have prevailed. I’m fine with the open market winning here in the end, but the taxi commission should recompense the owners for the gap in this case.

Onward

I will miss the Apple Watch Uber app for sure (Lyft doesn’t have one). I often do long one-way trail runs, and being able to hail a car on the other end from my watch was awesome.


Over the past few years I’ve had a handful of experiences with Lyft that have impressed me. I don’t understand why two companies providing seemingly exactly the same service at the core could be so different, but they are. Lyft drivers are somehow statistically kinder, and I’ve never had a pick-up cancelled on me. I’m on Lyft now. I’ll report back in a few years to let you know how it goes.

Monday, January 21, 2019

!Notifications

A week ago I disabled all notifications across all of my devices. Life is better this way. It's been an interesting experience that reminds me of life before mobile devices. How did we ever think it would be a good idea to sound that little email "ding" and put a message count graphic on everything?!? The other day I wrote a post called "Taking Charge Of My Attention" about leaving my phone at home thanks to my Apple Watch. This post is kind of a continuation of that thought. I believe multitasking is a fallacy; the human mind does not function well with interrupts.

Observations

  • Surprisingly, others are generally offended or upset when they learn that my notifications are disabled. I'm curious where this comes from. Is it rooted in envy? Disgust? Concern that I might miss out on something?
  • Disabling notifications is actually quite difficult, as the OS builders do NOT want us doing this. Fewer notifications means less engagement with their products. The list of features I'd like to see around better notification controls is long, and I can't see any of it getting prioritized anytime soon. The most important use case I'd like to see solved: a white-list of contacts whose messages are allowed to notify me, no matter which communications app they arrive through. Sounds simple and intuitive, right? Turns out it's actually impossible to do with Apple products.
  • You can't disable the Phone app or its notifications. I use an app called Hiya to block calls I don't want.

I find I'm much calmer and more in control now. I engage with apps and communication when I want to, not when they want me to. It takes a while to get off the dopamine drip, but it feels great.

Sunday, January 20, 2019

Machine Play: Again

I updated my system for sending SMS notifications when my deck furniture’s covers get blown off in the wind. My original post talked about version one of the system. I’ve since made a handful of refinements to simplify things.
  • I ditched the Amcrest camera in favor of a Foscam. The Amcrest guys broke HTTP Basic auth with a firmware update a while ago, and the new auth setup doesn’t work, so Amcrest essentially broke HTTP access to image snapshots. Foscam isn’t much better, and in some ways is much worse (auth creds passed directly in the URL), but they at least support HTTP image snapshots, and I partially mitigated the massive security issue by locking down all network I/O to my private LAN (a rough snapshot-fetch sketch is below). The state of IP cameras is abysmal; when did outdoor webcams get so bad?!?!
  • I removed the cloud server from the system entirely after I was able to get all of the correct packages/SDKs installed on my local Raspberry Pi; now everything lives local, which also let me remove the annoying FTP server.
  • I rebuilt the data model on an order of magnitude more sample images. In terms of image labeling, I went from the previous five labels down to two to describe the various non-covered states. Evaluation/prediction is much better now.
  • I checked with the CloudFactory folks for image labeling, and they’re set up for massive-scale (millions+) image labeling; my job was too small for them.
I also came across https://www.boulderai.com/, which is doing some cool commercial-grade outdoor imagery work.
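For the curious, here is a minimal sketch of the kind of snapshot fetch that now runs on the Pi. The Foscam-style CGI path and parameter names are assumptions (they vary by model and firmware), and the camera host, credentials, and output path are placeholders rather than my actual setup:

```python
# Hypothetical sketch: pull a snapshot from the Foscam over the LAN and save it
# where the prediction script expects it. The CGI path/parameters are assumed
# (Foscam firmware varies); host, creds, and output path are placeholders.
import requests

CAMERA_HOST = "192.168.1.50"            # LAN-only address; nothing routes to the WAN
SNAPSHOT_URL = (
    f"http://{CAMERA_HOST}:88/cgi-bin/CGIProxy.fcgi"
    "?cmd=snapPicture2&usr=deckcam&pwd=CHANGE_ME"   # yes, creds right in the URL (ugh)
)
OUT_PATH = "/home/pi/deckcam/latest.jpg"

def fetch_snapshot() -> None:
    resp = requests.get(SNAPSHOT_URL, timeout=10)
    resp.raise_for_status()
    with open(OUT_PATH, "wb") as f:
        f.write(resp.content)            # raw JPEG bytes from the camera

if __name__ == "__main__":
    fetch_snapshot()
```

Locking the camera down to the private LAN is the only thing that makes passing credentials in the URL even remotely tolerable.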

Wednesday, December 26, 2018

Machine Play

Preamble

The only really cool software things that have come along in the past several years have been the rise of the machines in the form of Machine Learning, and serverless/microservice/Kubernetes clouds. The former is a complete game changer for humans, and the latter is just a really cool evolution of a concept that's been lingering since the dawn of computing.

Amazon's foray into the computer vision market a while ago was DeepLens. I thought it was odd that they combined both the software and hardware ends of the product (the bundled camera is only suitable for indoor use). Why in the heck would you include a camera with a product like this!?! Several months later, the Goog released access to their genericized ML backend: Google Cloud Vision, fronted by AutoML, was cast unto thee.

Well, after using Google's Cloud Vision and a 3rd-party outdoor webcam to cobble together a system that SMSes me when one of the deck furniture covers gets blown off in the wind (project breakdown below), I now see exactly why Amazon bundled the two. I only wish they'd provide an outdoor-grade camera version of the offering. It turns out, unsurprisingly, that the pics you feed the backend are such an integral part of the training/prediction process that tightly coupling the two is very important (well done, Amazon product managers who realized this early). I believe Amazon also does on-board model prediction, which sounds cool, but I don't get the necessity of the feature. I think the Goog got it right by just offloading the evaluation bit to the network (their modus operandi, obviously) via URL image data retrieval. Sure, there are applications for on-board execution, but they seem more specialized than most use cases will require.

The Project

I keep covers on the outdoor furniture on my rooftop deck to protect it from the elements (Colorado is pretty tough on the weather front). It's often really windy up there, and the covers regularly get blown off. The problem is that I don't get topside as often as I'd like, so the furniture can be left uncovered for a while, exposing it to the damaging sun. I wanted a notification when the covers blow off so I can go re-cover the furniture. Traditional image parsing/recognition solutions would be horribly unreliable and hard to use for this, so, enter ML: I wanted an outdoor webcam to take pics of the deck, have Google Cloud Vision determine with high accuracy (90%+) whether or not a cover had been blown off, and text me if so.

The Pieces

  • Input
    • Amcrest outdoor IP camera - mounted outside and aimed at the deck furniture. The camera hardware is great; the firmware/on-board HTTP server/app is awful and stuck in the late 1990s. If anyone has experience with a better outdoor IP webcam (no, Nest doesn't natively work), please let me know.
  • Processing Nodes
    • cloud droplet (WAN) - runs the main app/driver.
    • Raspberry Pi (LAN) - runs the FTP and HTTP servers that receive and stage the camera's snapshots.
  • Software
    • my app/driver - Python script (code is here) that cron runs every ten minutes on the droplet.
    • FTP server - vsftpd. The webcam's firmware design is as old as dirt and only talks FTP for snapshot images.
    • bash script that cron runs every ten minutes to copy/rename the latest image capture over to the HTTP server so the main app can access it.
    • Google Cloud Vision - used to predict whether or not an image of my deck furniture has any of the covers blown off of it.
  • Output
    • Twilio API - used to send me an SMS message when a cover has been blown off.
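To make the flow above concrete, here is a rough sketch of what the app/driver does each time cron fires it. This is not the linked script; the project/model IDs, phone numbers, label name, and URLs are all placeholders, and it assumes the google-cloud-automl (v1beta1) and twilio Python clients:

```python
# Hypothetical sketch of the driver cron runs every ten minutes. All names
# (paths, phone numbers, project/model IDs, the "uncovered" label) are
# placeholders; the AutoML call assumes the google-cloud-automl v1beta1 client.
import requests
from google.cloud import automl_v1beta1 as automl
from twilio.rest import Client as TwilioClient

IMAGE_URL = "http://pi.local/deckcam/latest.jpg"    # staged by the bash/cron job
PROJECT_ID, REGION, MODEL_ID = "my-project", "us-central1", "ICN1234567890"
UNCOVERED_LABEL = "uncovered"                        # label assigned during training
CONFIDENCE_THRESHOLD = 0.9                           # the 90%+ bar from the write-up
TWILIO_SID, TWILIO_TOKEN = "ACxxxx", "secret"
SMS_FROM, SMS_TO = "+15550001111", "+15552223333"

def main() -> None:
    # 1. Grab the latest staged snapshot from the HTTP server.
    image_bytes = requests.get(IMAGE_URL, timeout=10).content

    # 2. Ask the trained AutoML Vision model to classify it.
    prediction = automl.PredictionServiceClient()
    model_name = prediction.model_path(PROJECT_ID, REGION, MODEL_ID)
    response = prediction.predict(model_name, {"image": {"image_bytes": image_bytes}})

    # 3. If any result says "uncovered" with high confidence, send the SMS.
    for result in response.payload:
        if (result.display_name == UNCOVERED_LABEL
                and result.classification.score >= CONFIDENCE_THRESHOLD):
            TwilioClient(TWILIO_SID, TWILIO_TOKEN).messages.create(
                body="Deck alert: a furniture cover looks like it blew off.",
                from_=SMS_FROM,
                to=SMS_TO,
            )
            break

if __name__ == "__main__":
    main()
```

From there, a crontab entry along the lines of */10 * * * * python3 driver.py runs it every ten minutes.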

Learnings

The process of building a model on someone else's engine is unbelievably simple. If you can imagine it... the computer can model it and predict it.

Labeling image data is a major pain and very time-consuming. While model prediction/execution is super fast after you've trained it, the labeling process required to train is horribly cumbersome. Looks like someone's entered the market to start doing the hard work for us; I'll give CloudFactory a try next time I need to build a model (which is pretty soon, actually, given that my cover configuration has already changed).
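For context on what "labeling" means here: AutoML Vision ingests a CSV that maps each training image (already uploaded to a GCS bucket) to a label, and producing that mapping by hand is the cumbersome part. Below is a tiny sketch of how one might generate that CSV after sorting snapshots into folders; the bucket name, folder layout, and label names are placeholders, not my actual dataset:

```python
# Hypothetical helper for generating an AutoML Vision import CSV. Assumes the
# snapshots have already been sorted by hand into covered/ and uncovered/
# folders and uploaded to a GCS bucket with the same layout.
import csv
from pathlib import Path

BUCKET = "gs://deckcam-training-images"
LABEL_DIRS = {"covered": Path("covered"), "uncovered": Path("uncovered")}

with open("labels.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for label, folder in LABEL_DIRS.items():
        for image in sorted(folder.glob("*.jpg")):
            # One row per image: the gs:// path, then its label.
            writer.writerow([f"{BUCKET}/{label}/{image.name}", label])
```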

We are going to accelerate from zero to sixty very quickly with ML-backed image apps. I imagine vendors offering integrations with existing webcam setups that let consumers easily train a model for whatever visual scenario they want to be notified about (cat is out of food, plant is lacking water, garage door is open, bike is unlocked, on and on and on). Of course, you can apply all of this logic to audio as well. The future is going to be cool!

What Could Be Better


  • I should collapse the file/FTP server and the app server onto either the WAN-based droplet or the LAN-based Pi server.
  • The webcam. While the hardware is great, the software on the camera only supports SMB/FTP for snapshot storage. If the camera supported snapshots via HTTP, I could forgo this interim image-staging framework entirely. There might be joy in this forum post... I'll need to dig in and see: https://amcrest.com/forum/technical-discussion-f3/url-cgi-http-commands--t248.html
  • I need to format the SMS message to be Apple Watch form-factor friendly.
  • I need to reap/cull images after some duration.
  • As far as I can tell, Google Cloud Vision data models can't be augmented *after* they've been trained. I'd like to add revised image data without having to rebuild/retrain the entire model. This seems like a pretty big gap. All of the image ML prediction scenarios I can think of are going to trend toward wanting to augment/add new image data over time without having to maintain the original seed model data.