Recent Book — Hitler’s Thirty Days to Power

Hitler’s Thirty Days to Power by Henry Ashby Turner Jr. I read this years ago, and was reminded of it recently. A very good and perhaps timely book: the detailed story of Hitler’s precipitous rise to power, and how he was enabled by the inaction or self-serving actions of the politicians around him. The idea that he would be held in check by the more conventional politicians around him was a historically tragic error.

Worth reading. One of my all-time favorite history books. Worth reflecting on.

Weekend Software Project — Audio classification

This weekend I experimented with some audio classification tools. It was an up and down experience.

I’m interested in a couple of features — hotword detection à la “Hey Siri” or “Alexa”; sound event detection (e.g. identifying a glass break or gunshot); and acoustic scene classification. I didn’t dig into general speech recognition; I’ve dabbled with that in the past.

I experimented with two projects this weekend — the Kitt.AI Snowboy hotword detection tool and the DCASE 2016 baseline system. I spun up a single docker container that hosted both projects. This was a bit of a PITA, mostly due to getting sound devices to show up in a container. I should post something separate just on that adventure.
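The short version of the sound-device adventure: the container needs the host’s audio device nodes passed through. Roughly, assuming ALSA on the host (the image name here is just a placeholder):

```shell
# Pass the host's ALSA device nodes through to the container.
# --device /dev/snd is the key flag; "audiotools" is a hypothetical image name.
docker run -it --device /dev/snd audiotools /bin/bash
```

There were more wrinkles than that in practice, which is why it deserves its own post.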

Ultimately I got them both working. The Snowboy detector works reasonably well with their universal model; the personal models you can create work also, tho they are not speaker independent. The DCASE code also spins up and training can be done on a standalone machine in a modest amount of time. Unfortunately, both these projects have very restrictive licenses, which makes them kind of useless for anything besides a weekend project.

At the root of almost all these systems is a common feature extraction algorithm, MFCC extraction. MFCCs are explained reasonably well here and the author provides a python reference implementation with an MIT license. I’m inclined to dig more into this path going forward.
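To get a feel for the algorithm, here is a minimal numpy sketch of the standard MFCC pipeline (frame, window, power spectrum, mel filterbank, log, DCT). This is my own simplified version for illustration, not the reference implementation linked above; the parameter defaults are typical but arbitrary.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, frame_len=400, hop=160,
         n_mels=26, n_ceps=13):
    """Minimal MFCC sketch: frame -> window -> power spectrum ->
    mel filterbank -> log -> DCT-II."""
    # Pre-emphasis boosts the high frequencies
    sig = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # Slice into overlapping frames and apply a Hamming window
    n_frames = 1 + max(0, (len(sig) - frame_len) // hop)
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = sig[idx] * np.hamming(frame_len)
    # Power spectrum of each frame (rfft zero-pads frames out to n_fft)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular filters equally spaced on the mel scale
    def hz_to_mel(f): return 2595 * np.log10(1 + f / 700)
    def mel_to_hz(m): return 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log mel energies, then DCT-II to decorrelate into cepstral coefficients
    feats = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return feats @ dct.T  # shape: (n_frames, n_ceps)
```

One second of 16 kHz audio with these defaults yields 98 frames of 13 coefficients each, which is the kind of feature matrix these classifiers consume.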

Re GMO or vaccines, I don’t think most people are anti-science.

What they are is anti-science establishment; they have lost trust in a science establishment that has allowed itself to be swayed by moneyed interests.

Consider today’s news on the sugar industry:

…five decades of research into the role of nutrition and heart disease, including many of today’s dietary recommendations, may have been largely shaped by the sugar industry.

“They were able to derail the discussion about sugar for decades,” said Stanton Glantz, a professor of medicine at U.C.S.F. and an author of the JAMA paper.

There is just as much money swirling around pharma and around agriculture; you can bet that the science storyline has been distorted by this money too. And that is what people are really saying when it comes to GMOs or vaccines — they want to believe the science, but they can no longer trust the science establishment to tell an unbiased story. And science with a pre-determined agenda is no longer science.

We need a re-opening of science. We need far more transparency about funding. We need more funding independent of commercial interests. We need more open results — this EU proposal seems like a good thing. Above all, we need the science community to recognize the problem it has created, the loss of faith it has created through its own actions, and to take charge of healing itself.

Politics are no uglier or dirtier today than in the past — Same as it ever was

I see so many people bemoaning the terrible state of politics in this country. Some perspective:

In short, don’t despair about the horrible state of the politics. As Churchill said (tho he lifted the quote from someone else): Democracy is the worst form of government, except for all the others.

Weekend product trial — the Nucleus “anywhere intercom”

Another one of my rash kickstarter/indiegogo acquisitions, the Nucleus. I purchased the two pack so I could use them to connect to each other.

The units feel like cheap, clunky iPads, which they kind of are, tho much cheaper than iPads. They only do a couple of things: place a video/audio call to another unit or to a phone running the app, receive a call, or act as an Alexa device.

I played around with the unit standalone, as a paired set, and with the remote phone client. Some thoughts:

  • Way easier to set up than most iot products; thanks to the screen, getting it on the network is easy.
  • As an Alexa front end, it is adequate. It works just like an Amazon unit and seems to support all the same commands. The speakers are worse than the Amazon Echo’s, or whatever you might attach to an Echo Dot. I wouldn’t use this as a music player, but it is fine for responding to voice queries. We already have an Alexa in our kitchen, though, and having two Alexa devices in the same room is not a good idea.
  • As a receiver of calls from the mobile client, I kind of like having a dedicated always-on handsfree device. You can take a call in the kitchen without interrupting what you are doing. And having a dedicated device for the most common kind of call (ie spouse to home) is sensible. I can see some form of this scenario being useful.
  • As an initiator of calls to a mobile client, it is useless: the phone needs the mobile app open and running, or the call never comes through.
  • As a two-unit video intercom, I am not convinced. We do not have a small house, but I couldn’t figure out why I would put two of these around the house, or where I would put them. We don’t have kids at home; maybe I would like it more if we did. We do have a vacation place, so I put one there to connect our two homes. However, internet connectivity at the vacation place is squirrelly, and after a day I could no longer connect to the unit there. So that scenario isn’t working.

I haven’t thrown it in a drawer yet, like so many others I have trialed, but it is not a part of our daily life yet either.

Recent Books — Corsair, Morte d’Urban, Fluent Python

  • Morte d’Urban by J. F. Powers. This is a slog. I am 50% of the way thru and the promised comic masterpiece has yet to appear.
  • Fluent Python by Luciano Ramalho. Bob recommended this book, and it is good. It explains not just how to write good python code, but also offers much insight into the internals of python, which helps explain why you should do certain things in certain ways. I wish every programming language had a text this good.
  • Corsair by James L. Cambias. The story and characters are so-so, but the exploration of some of the societal, industrial, and defense implications of lunar mining was interesting. With the growth of private space exploitation companies, we are likely to need to deal with some of these issues.

Re the last book, I’ve been reflecting on science fiction as a genre. In recent weeks I’ve been feeling bad about the genre as writer after writer dismisses it in the NY Times — Daniel Silva, Terry McMillan, Jeffrey Toobin. One can argue that I shouldn’t care about these opinions, but still. One aspect I think these writers miss is the ability of speculative fiction to explore the societal impacts of technology, as I note about the book above, and I find this valuable and interesting.

I was also feeling a little annoyed at these writers for their blanket dismissal of a genre, that didn’t seem very clever to me. I was happy to read Alan Moore this week, who just nails the key point that genre doesn’t have to be limiting at all:

I’m happiest when I’m outside it altogether, or perhaps more accurately, when I can conjure multiple genres all at once, in accordance with my theory (now available, I believe, as a greeting card and fridge magnet) that human life as we experience it is a simultaneous multiplicity of genres. I put it much more elegantly on the magnet. With that said, of course, there are considerable pleasures to be found in genre, foremost among which is that of either violating or transcending it, assuming there’s a difference, and using it to talk about something else entirely. Some subversions, paradoxically, can even seem to reinvigorate the stale conventions that they’d set out to subvert or satirize.

Every genre has bad writing, and bad genre writing is bad. I am going to dedicate myself to reading good genre writing, and especially writing that pushes the boundaries of the genre.

Recent Books — Multiple Choice, Station Eleven, Small Angry Planet, Playing Dead, Time Salvager, Truth, I Let You Go

  • Multiple Choice by Alejandro Zambra. From the text — “You read books that are much stranger than the books you would write if you wrote.” — and that about sums it up. This is not for everyone, but I happen to like books that play with the structure of books. But don’t blame me if you buy it and say “WTF is this? Why would anyone read such a thing or write such a thing?”
  • Station Eleven by Emily St. John Mandel. Solid post-apocalyptic novel with better characters than typical for the genre.
  • The Long Way to a Small Angry Planet by Becky Chambers. A motley crew of space travelers slowly come to realize what family really means. Pleasant.
  • Playing Dead: A Journal Through the World of Death Fraud by Elizabeth Greenwood. Don’t do it, the insurers will get you every time.
  • Time Salvager by Wesley Chu. Eh, pretty forgettable space opera.
  • The Truth and Other Lies by Sascha Arango. Nice little tale of psychopaths, murder, plots.
  • I Let You Go by Clare Mackintosh. Twisty in a way I did not expect at all. Nice read.

Weekend software experiment — Facebook’s Deepmask

Facebook open-sourced its Deepmask and Sharpmask codebases last week for object segmentation in a scene. In the past I’ve tried traditional CV methods to do object masking, and it is kind of a disaster: you end up with a very finicky codebase full of heuristics and hacks, which fails as soon as it sees a new kind of image. This seems like an archetypal use case for neural nets, so it seemed worth a try.

Deepmask and Sharpmask come with pretrained models based on the COCO dataset. It looks laborious to label a new dataset without the benefit of some Mechanical Turk-like process, so I decided to stick with the pretrained models.

To avoid polluting my system, and to share with the rest of the team, I decided to bring this all up in a docker container. Because our build environment has some unique characteristics, I needed to author my own container, but these were excellent dockerfile guides: Torch with CUDA and its dependencies. Torch install is also a good reference.


First results in the picture. It did a nice job on the large objects in this image. The biggest problem I had was running out of GPU memory. The machine I was using only had 4 GB of video RAM, which constrains how large an image you can feed in. We had a bunch of 4K x 3K street images; I found that scaling them down to 756×567 allowed me to get them thru the classifier. My modified classifier that implements a size limit is here.
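The downscaling math is trivial but worth pinning down; a minimal sketch (the 756×567 cap is just what happened to fit on my 4 GB card, so tune it for your GPU):

```python
def fit_within(width, height, max_w=756, max_h=567):
    """Return (w, h) scaled down, preserving aspect ratio, so the image
    fits within max_w x max_h; images already small enough are untouched."""
    scale = min(max_w / width, max_h / height, 1.0)
    return int(round(width * scale)), int(round(height * scale))
```

For example, a 4000×3000 street image comes out as 756×567, while a 640×480 image is left alone.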

The classifier is slow: seconds to tag an image. It would be interesting to keep it resident as a daemon, emit just the metadata instead of a modified image, and see if I could get a higher framerate. I also want to play around with a faster video card, as well as maybe an opencl implementation to get to non-cuda platforms. This is also my first lua code ever; I have no idea which parts of this code are fast or slow.

This was a fun little project. It points out that, if software is eating the world, then ML is eating software. For a certain class of problems, a fairly generic neural net plus a dataset is going to be competitive with the best laboriously hand-coded algorithm.

Weekend gadget review — the Cocoon — not recommended

This morning I set up the Cocoon I received this week — “Protect your whole home with Cocoon. Simple setup, HD video and audio, built-in siren, all controlled from your phone. Feel safe with Cocoon.”

Unboxing and setup — pretty standard box quality, nothing special about it. Inside the box you get the Cocoon, a usb cable and wall wart, some clunky international adapters, a stand, and a very small manual. While the manual was small, it was complete and clear.

For wifi setup the Cocoon uses an audio scheme — the Cocoon app on your phone blurts out a song which the Cocoon listens to for embedded wifi credentials. It was pleasant and fast, and way better than a bluetooth dance or a private wifi hotspot dance. Up and running in minutes; this promise was met.

Apparently the Cocoon will use location services and, when it knows you are out of the house (ie when your phone is out of the house), will “arm” itself and start watching for movement and listening for sounds. It will then alert you to anything it detects, and you can connect to see the video snippets and set off the alarm if you want (or call the police or whatever). I didn’t exhaustively check all this out; I mostly just played with the video.

And, well, video as a data type is hard. So many bits. So opaque. The promised “HD video” was not really HD: it was very laggy (my guess is they are using something like HLS or DASH, which guarantees lag), was corrupted at times, had a low framerate, and sometimes didn’t show up over the LTE connection. I’d say the Canary I’ve used in the past had better video.
The Cocoon also has a feature called Subsound. I could not tell from the app screen what it was supposed to be doing (and you can only see that screen when you are at home). The website says it uses an infrasound microphone, geolocation, and machine learning. I guess this is to detect a broader array of movement or sound.

So…this is a fairly expensive device. But with poor video. And a bag of other features which seem ok but not amazing. The economics of this thing are challenging to justify: if you want to see what’s going on in your home, a raspberry pi based system would be FAR cheaper, tho of course you would have to futz with it yourself. The big downfall of this device is the poor video experience; at $400 I expect something pretty freaking amazing.

INTC’s ARM play and the ARM ecosystem

My best two investments this year have been NVDA and ARMH. NVDA’s been on fire; it was obvious when I attended their GPU Tech Conference that they were going to do very well this year. And Softbank’s play for ARMH has been rewarding.

All the growth in compute usage and capability has moved to non-Intel processor cores — GPU and ARM, both of which are better in different ways for the high-growth compute problems of the day. Hard to imagine this changing; I’d expect to see even more GPU migration (or even a move onto FPGAs).

INTC’s ARM licensing move is a sign that Intel is fully accepting the reality here. I am sure Intel still has a lot of internal religion about the x86 instruction set — but they shouldn’t hold back now. They should fully cast aside any religion about instruction sets, and just make sure everyone is opting to put an Intel part everywhere. Put an ARM core on every x86 processor — they certainly have the transistors, and it would be great as a developer to have an ARM core at my disposal on my laptop. Put an x86 core on every ARM chip; it would be great to be able to use x86 code in more places.

Softbank meanwhile may have just enabled their biggest competitor. It is great to have Intel as a licensee, but there is no way Intel is going to accept a secondary position in the processor market; they are going to use their design skills and fab skills to try to co-opt the ARM market. Softbank will need to ramp up their own ARM ecosystem efforts, probably with a deeper software and services play, or they are going to find themselves with less and less influence.

Comcast, your Olympic coverage is making me feel like a dupe

Comcast must love our household. We have to be in the top tier of cable subscribers — multiple TVs, all DVRs, the most premium and sports channels we can get. Partly inertia, partly because we have been economically fortunate, partly because I want to get all the live HD football I can get. Comcast marketers must sit around salivating about us and wondering how to create more customers like us (or at least hang on to us).

So why is the Comcast/NBC Olympics coverage the worst possible option for viewing live Olympics? The 100M dash was what finally killed me. CBUT had it live. I could have found it streaming live. NBC and Comcast in their infinite wisdom showed it at who knows what time last night. NBC had far more important things to show, like prelim heats of some 800M race, also probably on tape delay. We found 5 minutes of worthwhile content in the first two hours of NBC Olympic coverage; thank goodness we could fast forward through it all.

By any measure the traditional cable NBC Olympics viewing is the worst possible option, which is so strange given the pile of money I give Comcast each month. The clear message is “If you want to watch live Olympics, you should not choose NBC or a cable subscription”. Comcast why would you treat your best customer this way?

The ARM IOT software stack ecosystem is kind of a jumbled mess

Google has announced Brillo, which leverages Android and Linux. But no, wait: now there is Google Fuchsia, because apparently Linux isn’t good enough. On the pi there are a jillion variants of linux to use, but with glaring holes (general-purpose gpu and docker support, for instance). ARM has mbed, tho it is really targeted at low-end devices. You can try Windows 10. And all of these require a lot of laborious tinkering.

To say nothing of the cloud, the services for managing ARM devices are even more nascent or non-existent.

You have to wonder what Softbank is going to do about this. A $32B hardware bet needs a corresponding software and services bet.

UPDATE: Nat reminds me that I didn’t even mention toolchains! Cross-compilers, QEMU, etc — there is so much fun in here!

Interesting week for ML exits

Congrats to Seattle’s Turi, acquired by Apple. And congrats to Nervana, and to Brad and Chris at Fuel, on what seems to be a great exit. The markups on ML companies are tremendous — not surprising given the dramatic impact ML is having.

I’d expect to see many more dramatic ML exits.

Nat nails one of the core premises of Surround.io

Nat, sharp as ever. One of the reasons we started Surround.io was to take advantage of the Moore’s Law driven wave of sensing technology. Since sensors are just carved out of silicon now, using the same process technology as digital electronics, sensors (cameras, accelerometers, mics, etc) are increasingly ubiquitous, cheap, and powerful. The challenge is software to process the flood of data.

Seattle’s role in the tech ecosystem

Lots of blather about this. Two salient observations by Charles and Eric below. As Eric notes, it is impossible to think about launching a startup today without technology from Seattle.