2018-05-22

The LoRa hat for the Raspberry Pi showed up yesterday, so I guess I can start trialing LoRa for the sensor network. I still need to CAD out a board for the actual sensor work (something to fit the solar power kit, the Feather, and the sensors). Speaking of which, I forgot to mention that the Bluefruit battery experiment concluded, with the battery lasting about four days and four hours. The 500mAh battery is definitely overkill, but it'll be nice for those stretches where there's a lot of rain or it's cloudy several days in a row.

The natural computing stuff has me thinking about how to represent memory. At first (I'm only through chapter 1 of the book), I pulled together some ideas from the AI course. The thinking is that something like a first-order logic (FOL) query engine, with the facts stored in a neural network, could be used. It would be built around facts, which are expressed as Is(Jet1, Plane) or At(Jet1, OAK). A fact could be a memory stored in the neural network, or the result of some action. An action would be expressed similarly, with the addition of pre- and post-conditions:

Fly(Jet1, DEN,
    ;; Pre-conditions.
    [Is(Jet1, Flyable), Is(DEN, Airport),
     At(Jet1, OAK), Not(At(Jet1, DEN))],
    ;; Post-conditions.
    [Not(At(Jet1, OAK)), At(Jet1, DEN)])

I could come up with even more pre-conditions, but the general idea is there. One interesting point about the post-conditions that I remember from the AI course is that I have to be sure to assert Not(pre-condition) for whatever the action changes, in addition to asserting the mutable post-conditions. That is, flying doesn't (usually!) alter whether Jet1 is Flyable or whether DEN is an Airport. It does, however, update the At condition; to capture that an object can be at only one place at a time, the action needs to retract the previous assertion.
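
To make the retraction mechanics concrete, here's a minimal Python sketch; the set-of-tuples knowledge base and the holds/apply_action names are placeholders of my own, not anything from the book or the course:

KB = {("Is", "Jet1", "Flyable"), ("Is", "DEN", "Airport"),
      ("At", "Jet1", "OAK")}

def holds(kb, cond):
    """A condition is either a fact tuple or ("Not", fact)."""
    if cond[0] == "Not":
        return cond[1] not in kb
    return cond in kb

def apply_action(kb, pre, post):
    """Assert/retract the post-conditions once all pre-conditions hold."""
    if not all(holds(kb, c) for c in pre):
        raise ValueError("pre-conditions not satisfied")
    for cond in post:
        if cond[0] == "Not":
            kb.discard(cond[1])   # retract the stale assertion
        else:
            kb.add(cond)          # assert the new fact
    return kb

# Fly(Jet1, DEN) from above, as pre- and post-condition lists.
fly_pre = [("Is", "Jet1", "Flyable"), ("Is", "DEN", "Airport"),
           ("At", "Jet1", "OAK"), ("Not", ("At", "Jet1", "DEN"))]
fly_post = [("Not", ("At", "Jet1", "OAK")), ("At", "Jet1", "DEN")]

apply_action(KB, fly_pre, fly_post)
assert ("At", "Jet1", "DEN") in KB
assert ("At", "Jet1", "OAK") not in KB

Running it moves Jet1 from OAK to DEN and drops the stale At(Jet1, OAK) fact, which is exactly the retract-then-assert behaviour described above.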

So… how to model this with a neural network? And why model it as a neural network at all? The answer to the second question is to impose constraints similar to the human brain's: a fixed storage size that's much lower than what a standard query engine would have. Furthermore, a standard query engine only knows what it's been told, whereas a deep-learning network might be able to make inferences beyond that. I don't know, and honestly, I'm humouring the book's assertion that this is how memory should work in an attempt to broaden my understanding here. It could definitely be a dead end.

Another question is that of representation. The way I know of to interact with a neural network is with real numbers in the range [0, 1], so one approach is to represent each piece as a 64-bit integer (e.g. via hashing). The relationships above all have a relationship type (e.g. Is or At) and two subjects, so a fact could be encoded as a 3-tuple of 64-bit integers. Feeding one bit per neuron yields a 192-neuron input layer, which could be connected to subsets of the later layers to provide some focus on each of the three pieces. As for output, a single neuron could give the confidence in the "truthiness" of a given fact. I'm really just brainstorming here; I've barely made a dent in the book, but thinking about these problems early seems to give me better context.
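
As a rough sketch of that encoding (truncating SHA-256 to get a stable 64-bit hash is my own assumption, as is the untrained single-neuron readout):

import hashlib
import math
import random

def hash64(token):
    """Map a symbol to a stable 64-bit integer by truncating SHA-256
    (an assumption; Python's built-in hash() is randomized per run)."""
    return int.from_bytes(hashlib.sha256(token.encode()).digest()[:8], "big")

def encode_fact(rel, subj, obj):
    """Encode (relation, subject, object) as 192 inputs in {0.0, 1.0},
    one input neuron per bit of the three 64-bit hashes."""
    bits = []
    for token in (rel, subj, obj):
        h = hash64(token)
        bits.extend(float((h >> i) & 1) for i in range(64))
    return bits

def truthiness(x, weights, bias):
    """The single output neuron: sigmoid of a weighted sum, in (0, 1)."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

x = encode_fact("At", "Jet1", "OAK")
assert len(x) == 192
w = [random.uniform(-0.1, 0.1) for _ in range(192)]  # untrained weights
print(truthiness(x, w, 0.0))  # hovers around 0.5 before any training

A stable hash matters here because the same fact has to map to the same input vector every time it's queried; that's the only design constraint this sketch really commits to.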

Another thing: once again, math comes up as the limiting factor. This time it's calculus, and it shows up less than a dozen pages into the second chapter. So that's going to be a problem. Fortunately, it's a surmountable one.

This is my brain right now:

[image]