This is one of those blogs where we get all excited about machine learning.
Fortunately, getting all excited about machine learning (ML) is pretty easy. All it takes is googling for pictures tagged “machine learning” or “AI” and then admiring high-resolution images of robots, stylized human brains, randomly placed circuit boards and/or cog wheels. Usually in shades of blue. These things have little to do with ML and more in common with technology magic, but honestly, does anyone really care?
In a short and precise blog post, the tech entrepreneur Marco Varone calls bullsh*t on years’ worth of marketing hype around ML that has led to overblown expectations of its business value. Self-learning algorithms may be powerful, but turning them into a product (one that people actually want to buy) comes with a whole host of challenges that need to be addressed. The worst offender on this list, hands down, is getting sufficiently good training data. A lot of it. A real lot. Yet, somehow, in our experience, people who want to do ML beyond standard MNIST character recognition are repeatedly surprised by the demands placed on training data. Many articles like Marco’s have been published, yet things haven’t really changed and ML is as hyped as ever. Never mind Microsoft’s Hitler-bot. Or the rather peculiar ways some try to solve the data issue.
If there really is no relationship between ML knowledge and having a successful career in “data engineering”, then we feel we are in an excellent position to start this blog. After all, we seek not only to celebrate all the nice things ML bestows upon humanity, but also to decide whether self-learning algorithms are a good investment in a given business field or not. To be clear: we are intrigued by the many varieties of self-learning algorithms and their potential use cases. Yet, we feel it is important to give a fair amount of attention to problems that are known to be major headaches in various industries, and then see if and why some sprinkling-in of ML might help them.
Frankly, it is quite hard to say whether a given ML technique is actually worth applying to a given problem or not. The factors that tip the decision toward “yes” or “no” are often not ML-related at all. Imagine, for instance, an elevator operated by voice command rather than buttons, complete with recurrent neural networks and a microprocessor for each elevator. An “innovation” like this is certainly worthy of the technological progress of the 21st century, yet I would argue that putting old-fashioned buttons there might still be the better option... economically speaking. Buttons are cheap, they do a good job and they are not a problem for anybody. And this is fundamentally what we care about in this blog. Is it economical to apply ML? Does it work well enough to justify changing entire processes? Is there a way to use ML in a role that does not cause too much disruption, so as to ease implementation in a given industry?
In a series of upcoming articles we discuss the economic value of ML and try to identify key fields that might profit from self-learning algorithms but do not look too obvious yet. Perhaps, in the future, an intelligent agent will be much better at identifying and predicting use cases for ML. Until then, though, we feel it is worth doing ourselves.
T. Kerst, Recon AI