Hasn’t AI Been the Wrong Edgy for Too Long?


Recently my friend Dan McKinlay, who is far more accomplished than I am in Machine Learning (ML) and Neural Networks (NN), blogged his distaste for the project name “AI on the Edge”, arguing that the edge-versus-centre model of “cloud computing” is misapplied here and that there’s nothing inherently centralised about AI.

It tends to imply that large NN models in data centres are the default type of ML

Dan McKinlay

Here I argue this implication may be unjustified, but only in the idealised future we both presumably yearn for, and I, for one, celebrate the notable appearance of this worthy distinction to our new robot overlords (whom, let the record show to any such overlords reading this, I also welcome).

“AI on the Edge” is an example of snappy project branding for computer vision on small, battery-powered microcontroller chips, typically used in products like smart home doorbells and other such internet-connected embedded devices, and notably available to DIY hobbyists as featured on the project’s GitHub page. It’s so unlike the sinister Cambridge Analytica stuff we have come to expect from Facebook and others. What’s wrong with calling out this distinction between edge and centre?

Hasn’t AI been edgy in all the wrong ways for too long?

The AI on the Edge project came to my attention as a way to internet-enable old-school gas, water and electricity meters which show mechanical digits and dials. A $5 microcontroller with a camera can now read your meter without the help of Siri or Alexa and allow you to track your resource consumption like it’s 2021.
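For a feel of what that $5 chip is actually doing, here is a minimal sketch of the digit-classification step, written in desktop Python with TensorFlow Lite rather than the project’s actual ESP32 firmware. The model file, the image crop and the input shape are assumptions for illustration, not the project’s real pipeline.

```python
# A minimal sketch of on-device-style digit recognition, run here on a PC with
# tflite_runtime for illustration. The model file, image crop and input shape
# are hypothetical; the real project runs a similar model on an ESP32 camera board.
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter

MODEL_PATH = "digit_classifier.tflite"  # hypothetical pre-trained 0-9 digit model
IMAGE_PATH = "meter_digit.png"          # hypothetical crop of one mechanical digit

interpreter = Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Assume a greyscale model with input shape (1, height, width, 1), float32 in [0, 1].
_, height, width, _ = input_details["shape"]
image = Image.open(IMAGE_PATH).convert("L").resize((int(width), int(height)))
pixels = np.asarray(image, dtype=np.float32)[np.newaxis, :, :, np.newaxis] / 255.0

interpreter.set_tensor(input_details["index"], pixels)
interpreter.invoke()
probabilities = interpreter.get_tensor(output_details["index"])[0]
print(f"digit: {int(np.argmax(probabilities))}, confidence: {float(np.max(probabilities)):.2f}")
```

Run something like that over a crop of each wheel of the meter and you have a reading you can timestamp and graph, no cloud required.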

Despite it being a perfectly usable title for a direct-to-VHS docudrama, AI on the Edge fails to capture Dan’s otherwise perfectly functioning sense of drama. Perhaps ironically for the same reasons, I do care about an edge-centre distinction. It’s fundamental to mass innovation and technology-dependent democratisation. Surely it’s defensible to claim “the default type of ML” has long been large models in data centres, at least in commercial projects over the past decade. It’s heartening to see a qualitatively different innovation zone characterised by cheap, low-power deployment targets. I imagine startup technology could shortly flood the low-power compute space with practical ML for business and consumer alike.

Maybe this “edge” shift is not new; after all, we had the Furby, what more do we want? But my observation has been that ML has been synonymous with big data in the startup space. Apparently, many use cases are relevant and business models viable only once the datas are sufficiently embiggened. But perhaps we are at an inflection point.

Chipageddon & Unobtainium

What is that noise? It is the cry of a million raging gamers echoing across the world as they find they cannot afford an Nvidia RTX 3090, an accelerator card featuring GPU (Graphics Processing Unit) chips that are somewhat accidentally able to crunch neural network workloads orders of magnitude faster than CPUs; as a result, demand drives prices towards US$4,000 per unit. A similar demand spike a few years earlier resulted from similarly unanticipated performance advantages for cryptocurrency mining. If you’re a gamer, these high-end graphics cards might as well be hewn from solid unobtainium.
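If you want to put a rough, hedged number on that gap, the sketch below times a large matrix multiplication (the core operation in neural network workloads) on the CPU and then on a CUDA GPU. It assumes PyTorch is installed and a GPU is present; the exact ratio varies enormously with hardware and precision, so treat it as illustrative only.

```python
# A rough benchmark of the CPU/GPU gap on the matrix multiplies that dominate
# neural network workloads. Assumes PyTorch; results vary wildly by hardware.
import time

import torch


def time_matmul(device: str, n: int = 4096, repeats: int = 10) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    _ = a @ b  # warm-up so we don't time one-off allocation costs
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the asynchronous GPU work to finish
    return (time.perf_counter() - start) / repeats


print(f"CPU: {time_matmul('cpu'):.4f} s per matmul")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.4f} s per matmul")
else:
    print("No CUDA GPU found; see chipageddon below.")
```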

Since 2020, the knock-on effects of GPU demand spikes have been magnified by chipageddon, the ongoing global computer chip shortage resulting from factory retooling delays, themselves prompted by mass order cancellations from flocks of car manufacturers in the wake of the Covid-19 pandemic as they anticipated, incorrectly, that demand would collapse. It turns out cars are becoming computers with wheels and people still want to buy them. Cloud providers update the GPU farm section of their service offerings with “coming soon” as they struggle to fill their data centres with would-be gaming rigs and beef up their machine-room aircon to deal with the higher thermal exhaust. Google and Apple tape out their own silicon. I expect Nvidia has segmented its product engineering and sales divisions as it recognises a business opportunity in a bifurcated target market.

One of the personal turn-offs of ML-as-startup-tech is that I expected its business economics to collapse into a capital-intensive Big Tech play, incompatible with the more satisfying kind of bootstrapped startup whose costs are dominated by coherent software-development effort. Though software development can clearly be scaled by throwing money at hiring, it does so with far more severely diminishing returns, and it requires that teams and their products be split into isolated components that integrate frictionlessly. In the general case that is known to be so hard that this meta-problem becomes a self-reinforcing brake on, and feedback function of, demand for software alphanerds who can thread the needle. Certainly it compares poorly with the more business-palatable option of simply buying racks and racks of GPUs.

Venture Capital Loves Big AI

Maybe the ML scale meme is merely the result of VC culture and unicorn exit mania. With typical software startups today otherwise requiring so little up-front capital, VCs struggle to add value except where large capital requirements are critical to the business model. If ML is such a business, that explains why VCs froth about it. If a problem space is tractable with only a gradual investment of engineering time, and the investment/return function is smooth such that incremental effort validates incremental results with incremental profit, excess money cannot be put to work because it doesn’t help validate the business. And after all, what is a startup but a yet-to-be-validated business?

Bigger neural network models, trained faster, and subscription software that does all the compute remind me of the 1970s time-sharing and data-processing services which ossified into bulk laziness and ultimately fertilised the soil for a more democratised “PC” revolution made viable by mass-market dynamics. A thousand flowers bloomed in the 1980s as the home computer revolution sprang from humble DIY roots, like the two Steves who founded Apple with 1960s counterculture ideals and stars in their eyes.

What we might be seeing is a shift away from the centralised big-compute infrastructure that harks back to the golden days of IBM, just as the home computer revolution, the internet, smartphones and bitcoin each shifted in their turn. Facebook, Google and the other big tech monoliths hoard and run their own hardware, with users on the edge as suckling dependents running nothing more than dumb terminals, albeit with more pixels than the 1970s green-screen edition that few are old enough to remember. Having said all this, I do expect the pendulum to keep swinging between centralisation and decentralisation as the delayed impacts of accreting inefficiency in each approach pump harmonically against each other, neither being the total answer to everything.

For now, though, perhaps all the nerds soldering and 3D-printing their own gas meter readers will give birth to the next phase of AI, and in turn to the next generation of unimaginable megaliths.


2 Comments on “Hasn’t AI Been the Wrong Edgy for Too Long?”

  1. Dan said at 10:56 pm on July 16th, 2021:

    Ah, this is also a timescale question. You see, I’m a grumpy bastard who is offended, in a hipster liked-it-before-it-was-cool kind of way, by the aesthetics of massive models, and by our sudden, belated realisation that just throwing bigger nets at things does not solve all problems. (See, however, the scaling hypothesis for some examples of interesting new phenomena that do arise from artfully designed bigger nets that are not purely driven by investment return curves: https://www.gwern.net/Scaling-hypothesis )

    But back to my misanthropy: I am irritated (outraged! I tell you) by the fact that I got into machine learning to work with elegantly compact models in 2010, and I am vaguely offended by people laboriously rediscovering compact models in 2021, but with better branding. Worse, they are attracting fresh waves of interest that I could not, back when everyone was in the socket of Big GPU between 2010 and now.

    If I were less of a curmudgeon I would begrudgingly admit that some of the ‘edge ML’ tricks on the theme of compressing neural nets are, in fact, new insights; I am intrigued by LassoNet, for example: http://jmlr.org/papers/v22/20-848.html

    And if I were even less of one I would concede also that yes, it’s OK I s’pose that we label these as “edge ML” if we must. But only because “Dan-told-you-so-ML” is not realistically going to catch on. That is what it will always be called in my head, though.

  2. Christo said at 6:39 am on July 19th, 2021:

    I can see why solving “all problems” can be vital to people working on more intellectually satisfying models which are more efficient and potentially even provide competing breakthroughs, as was eloquently revealed by comparing the hardware-equivalent gains that better chess algorithms have made, but for me Big AI does not need a total monopoly to justify my statements.

    Therefore it seems these all hold and reinforce each other:

    • There is a rich source of problems for which throwing ML hardware at them can be predictably profitable.
    • Significant oxygen and capital are sucked out of the room to feed the discussion on, and investment in, these problems.
    • Non-specialists – including researchers and experienced software engineering and computer science professionals – will make broad generalisations about ML, tinted by the visibility of these large-scale projects.
    • Doomsday headlines and paranoid billionaire tweets will continue to seize the ML zeitgeist.

    Do you agree?

