Never base a promotional campaign around a satire of the thing you're trying to promote. You'd think that's an easy rule to follow, but Huawei seems to have forgotten.
In what may be the most excruciating promotional stunt we've seen from a technology company in years, Huawei based a promo for its AI on a satire of AI.
Huawei claims to have set out to train a phone app to tell a cat from a dog, then tested its accuracy by driving a car at a dog. Wouldn't you know it? The app was a success – and the dog lived.
There are just one or two things wrong with this picture.
Last year Mike Judge's HBO sitcom Silicon Valley satirised the hunger for, and the limitations of, machine learning. Jian Yang's Hot Dog identifying app (from "See Food Technologies") appeared to perform wonders – but the only thing it could identify was a hot dog.
Huawei's fake AI can tell a dog from a cat. But not much more. Too bad if you're a badger that happens to be crossing the road in front of a Huawei car. Or a lollipop lady. Lollipop or Not Lollipop?
As anyone with even a cursory knowledge of the field knows, there is no machine learning in the safety-critical systems of cars, because the two problem spaces are completely different. One (usually) requires millions of hours of training to identify a static image using statistical guesswork, and the guess can often be useful. The other is a speed challenge in a constantly moving 3D datastream, where safety is paramount and a bad guess can be fatal. There may be ML in your car optimising your route, or curating your Spotify playlist, but it will not be employed in safety-critical systems.
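The "statistical guesswork" point can be sketched in a few lines: a classifier emits probabilities, not certainties, and a confidence that is perfectly fine for a novelty phone app falls well short of what a safety-critical system would demand. The scores and threshold below are purely illustrative, not taken from any real model:

```python
import math

def softmax(logits):
    """Convert raw classifier scores into a probability distribution."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores from a cat-vs-dog classifier for one image.
logits = [2.0, 0.5]          # [cat_score, dog_score] -- made-up numbers
probs = softmax(logits)
label = ["cat", "dog"][probs.index(max(probs))]

# A phone app can happily act on a best guess; a safety-critical
# system needs near-certainty, and rarely gets it from a classifier.
CONFIDENCE_FLOOR = 0.99      # arbitrary threshold, for illustration only
safe_to_act = max(probs) >= CONFIDENCE_FLOOR

print(label, round(max(probs), 3), safe_to_act)  # the "guess" and its confidence
```

Here the model's best guess ("cat", at roughly 82 per cent) is plenty for an app, and nowhere near enough to bet a badger's life on.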
Not only that, but the two camps distrust each other.
"Machine learning never works for the robotics people, and robotics people don't provide the machine learning people with nice clean data sets," one expert told us. "Robotics people who tried machine learning have moved on."
Nor is ML likely to be deployed there any time soon. As deep learning daddy Professor Geoffrey Hinton argues, modern AI has hit a ceiling; the low-hanging fruit may have been plucked. One limitation he finds frustrating is brittleness: had Huawei's dog turned its head, even by a few degrees, the AI might have failed to recognise it. There would have been blood on the tarmac.
Neuroscientist and ex-Uber man Gary Marcus explored many more reasons why modern AI disappoints in a recent survey (PDF).
"I can see how this creative idea was pitched, but I'm not sure the triage process on choosing the idea was everything it should have been." – Rafe Blandford (@rafeblandford), February 25, 2018
It's mainstream, non-technical journalists (and sloppy think tanks greedy for a headline) who have yoked AI and autonomous cars together. They see two things happening at once and assume they must be related – or even the same thing. They're not.
We'll know how good AI is the day a tech exec stands a person in front of the car and sets it driving at them, with no human operator to override the AI.
That day has not come. ®