Animal electricity

Was inspired by TV again. This time it was a TVO program (originally BBC) called The Story of Science. It’s a clever show in which “Michael Mosley takes an informative and ambitious journey exploring how the evolution of scientific understanding is intimately interwoven with society’s historical path”. I’m thinking this TV thing might actually be useful. If i get inspired by Two and a Half Men, i’ll know i’m really on to something.

Anyway, Mosley was talking about “animal electricity”, a concept pioneered by Luigi Galvani. It was later rechristened bioelectromagnetism, but at the time Galvani truly thought he was on to the source of life itself. And honestly, you can’t really blame him. The year was roughly 1780, and although people knew about electricity, it was still pretty much a mystery. You can cut the guy some slack if he concluded somewhat prematurely that this type of energy – which even the majority of people alive today don’t understand – held a higher place in the meaning of life than it actually turns out to hold. From his point of view, if this electricity thingy could cause muscles to move, well, that kind of solves it, no? If you’re still skeptical, recall that Galvani – in his obvious enthusiasm – quickly turned his jumper cables towards cadavers, fully expecting that the bodies would leap up from the slab like Jason Statham in Crank (two hours i’ll never get back). I honestly think he was quite disappointed – Galvani, that is. I would have been too, because, seriously, i would have tried it.

I’ve mentioned before that my obsession with AGI causes me to draw parallels with damn near everything that enters my brain, but this one caused a whole clan of neurons to fire. Remember when pattern classification was the very soul of AI? And then memory became the hot thing. Then learning, whatever the hell that was; but whatever it was, we needed it. But wait, what about inference? Hierarchies? Fuzzy logic? Bah! Fools! All you need is LOGIC (sans fuzziness)!

Well, we now know that Galvani, god bless him, was correct in that electricity is a necessary component of most life as we know it, but not a sufficient one. To really fulfill his Frankensteinian dreams he would also need to be intimate with any number of biological disciplines, not least of which is cellular chemistry. And likewise, most of us AGI folk ought to be on to the fact that each of the narrow AI approaches is likely necessary in some way as well but, alas, insufficient on its own. To some degree this can be considered the Binding Problem. (It’s a leap, i admit. But i did say, “to some degree”.)

Probably the reason this thought jumped out at me (oddly, from inside my brain) is that to an extent i’ve already been working on such approaches. I imagine others are too, but i’ll claim it as a completely original thought, since i haven’t actually heard of it anywhere else.

It first started when i was working on predicting single-value data streams, such as those produced in industrial control. I was trying to characterize “modes” of operation of equipment from the streams, and realized it could be done if i measured multiple attributes of the values, especially over time. Example attributes would be instantaneous amplitude, boxcar amplitude, instantaneous change, volatility, sudden change of averages, etc. Independently, none of these attributes is normally of much use, but when they are converted into events (which, interestingly, start to look like neuron traces) they can start to be predicted. And when those events are formed into temporal patterns and/or arranged into hierarchies it gets even better. Multi-modal prediction can occur by adding a hierarchical level that looks for patterns across the events identified and predicted in the individual streams. And then those predictions can be arranged into hierarchies, just like they are at the lower levels.
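To make the attribute-to-event step concrete, here’s a minimal sketch. The attribute set, window size, and thresholds are all illustrative choices on my part, not anything from the actual system:

```python
from collections import deque
from statistics import mean, stdev

def stream_events(values, window=20, change_thresh=2.0):
    """Turn a raw value stream into discrete events by measuring several
    attributes of the values over a sliding (boxcar) window."""
    recent = deque(maxlen=window)   # the boxcar window of recent values
    prev = None
    for t, v in enumerate(values):
        if prev is not None and len(recent) >= 2:
            delta = v - prev          # instantaneous change
            boxcar = mean(recent)     # boxcar (windowed) amplitude
            vol = stdev(recent)       # volatility over the window
            # a "jump" event: the instantaneous change dwarfs recent volatility
            if vol > 0 and abs(delta) > change_thresh * vol:
                yield ("jump", t, delta)
            # a "level_shift" event: the value leaves the boxcar band
            if abs(v - boxcar) > change_thresh * max(vol, 1e-9):
                yield ("level_shift", t, v - boxcar)
        recent.append(v)
        prev = v

# e.g. list(stream_events([0.0] * 30 + [5.0] * 10)) flags the step starting at t=30
```

The resulting event stream is what a higher level would consume: it looks for temporal patterns in the events from one or more streams, and its own predictions can in turn be treated as events by the level above it.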

Such arranging has its limits, of course. But it’s easy to draw psychological parallels with this approach, so it remains intriguing. The problem i personally have, as i complained about in my last post, is finding decent data to work with.

Going off on a tangent for a moment, let me just say that i like MLComp and Kaggle. The latter especially, because they’ve formed a commercial structure around their stuff, which inevitably is going to be more resilient and spawn more ingenuity than open source (speaking from long experience here). But the big problem i have is that their data sets are static. The algorithm has no opportunity to affect its environment. No matter what output it provides, the next set of data will always be the same. It’s like reading a book: no matter what you think of it or mutter to yourself, the next page will have all the same words it always had. Contrast this with a dynamic environment, where the actions of the agent can change what happens next, like a choose-your-own-ending book. I hate to bang on about it again, but this is exactly what GoiD provides. It allows the AGI implementations you write to have the element of action. And if you don’t think that’s useful, well, thanks for reading this far.
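To put the difference in code (and to be clear, this is a generic sketch of the idea, not GoiD’s actual API), a dynamic environment boils down to a closed loop in which acting changes the next observation:

```python
from typing import Any, Protocol

class Environment(Protocol):
    """A dynamic environment: the agent's actions influence what it
    observes next. An illustrative interface only, not GoiD's API."""
    def observe(self) -> Any: ...
    def act(self, action: Any) -> None: ...

def run_episode(env: Environment, agent: Any, steps: int = 100) -> None:
    """The closed loop a static data set can't give you: each action
    changes what observe() will return on the next pass."""
    for _ in range(steps):
        obs = env.observe()
        action = agent.decide(obs)   # 'decide' is a hypothetical method name
        env.act(action)              # feeds back into the environment
```

With a static data set, env.act() would be a no-op – the book reads the same no matter what you mutter at it.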

But back to AGI: the bottom line, in my opinion, is that diversity is good. Don’t bother looking for silver bullets – they don’t exist. Create small algorithms that each do something useful well enough, and then find a way to arrange them such that an overall structure can develop confidence in the ones that work in particular contexts, and ignore them when they don’t.
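As a rough sketch of what i mean – with an entirely illustrative decayed-score weighting, nothing definitive – the arrangement just tracks, per context, which of the small algorithms have been earning their keep:

```python
from collections import defaultdict

class ContextualEnsemble:
    """Arrange small predictors and keep a per-context confidence score
    for each, trusting a predictor only where it has earned that trust."""

    def __init__(self, predictors):
        self.predictors = predictors
        # confidence[context][i] is a running score for predictor i
        self.confidence = defaultdict(lambda: defaultdict(lambda: 1.0))

    def predict(self, context, x):
        # weight each predictor's vote by its confidence in this context
        votes = defaultdict(float)
        for i, p in enumerate(self.predictors):
            votes[p(x)] += self.confidence[context][i]
        return max(votes, key=votes.get)

    def update(self, context, x, truth):
        # reward predictors that were right here; decay the ones that weren't
        for i, p in enumerate(self.predictors):
            hit = 1.0 if p(x) == truth else 0.0
            self.confidence[context][i] = 0.9 * self.confidence[context][i] + hit
```

A predictor that keeps failing in a given context decays toward irrelevance there while staying trusted in the contexts where it works – which is the whole point.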