While I envision a number of different modular reservoir-based blocks, including specialized ones for handling visual and auditory input, the main type of reservoir block I have been working on is the basic temporal memory block. Its defining feature is that it learns temporal patterns, or sequences of inputs.
- Classification: Associative memory should create a unified representation of similar input patterns. It should take input patterns that are very similar and create a single activation pattern that represents this “class” of patterns. When presented with a pattern similar to the previous ones, it should be able to “classify” the pattern as a member of this class, and its activation pattern should transition to the pattern representing the class. Because this associative memory is temporal, these patterns are really sequences of activations that change over time, so the reservoir block must classify the sequence of activations. The unified representations of similar patterns, or classifications of patterns, have been described as attraction points in a chaotic system. Attraction points, by their very nature, are stable patterns that similar patterns transition towards. A similar analogy I envision is that repeated exposure to sequences of patterns erodes deeper paths in the reservoir. Future patterns will tend to follow these previously eroded paths, and each path represents a different classification of patterns.
- Dimensional Spreading: One common feature of good associative memory is that it can map simple inputs onto higher-dimensional representations, increasing the effective “distance” between different patterns. The idea is that a richer representation of the patterns highlights the differences between them, making them easier to classify. I can see single reservoirs doing this, and have read papers to this effect; I also think combinations of reservoir blocks will be able to do this in certain ways. This spreading out of the representations of different categories is in tension with classification, which forces similar patterns together. One way to think of this is that a good associative memory should create a non-linear mapping between patterns and their representations: similar input patterns should be combined into a single representation in the reservoir, while dissimilar input patterns should be represented by even more dissimilar reservoir patterns.
- Unsupervised Learning: Philosophically, I dislike supervised learning. While it serves an important role in many applications, I want to create a system that can learn without supervision. Like any youth, it should be able to detect similar patterns of input and learn to classify them. It should also be able to observe its own output, compare that to other input patterns, and learn to create output that mimics the inputs it receives. I think this can all be done with unsupervised learning based on the standard Hebbian model.
- Self-balancing: To be functional, I think a reservoir block must be able to adapt to different levels of input automatically. I break this down into two components:
- Sedation: The reservoir block should automatically detect when its activations reach too high a level and use inhibition to bring the activations down. This can occur when a reservoir receives multiple inputs from other blocks or when a loop of blocks creates a feedback loop. Currently I am implementing this as a separate set of sedation nodes in each reservoir block. The sedation nodes are activated by the nodes in the main reservoir section. As the sedation nodes become active, they inhibit the nodes in the reservoir, reducing the activation.
- Stimulation: The reservoir should “amplify” patterns with small activations. This serves a few critical roles. First, when learning patterns, a block’s weights might not be strong enough to activate nodes in the reservoir. By stimulating the nodes in the reservoir, some of the nodes receiving the strongest inputs will become activated. Slowly, over time, the weights will strengthen and stimulation will no longer be needed to activate the nodes. Second, if a block receives inputs from two other blocks during learning, its weights will grow just strong enough to activate the reservoir nodes when both inputs are present. Later, if one of the inputs is missing, the reservoir will need stimulation to activate its nodes. Like sedation, stimulation is currently implemented as a separate set of nodes. These nodes have a positive bias and will become activated without any input. They are connected to nodes in the reservoir, providing positive activations to the reservoir nodes for stimulation. All the nodes in the reservoir inhibit the stimulation nodes, so once some nodes in the reservoir become active, their inhibition will prevent the stimulation nodes from activating.
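The attraction-point idea under Classification can be illustrated with a one-dimensional toy map. This is only an analogy for the kind of stable patterns described above, not a model of the reservoir block itself:

```python
import math

# One-dimensional toy map with two attraction points. Repeated iteration
# pulls nearby states onto the same stable pattern, the way similar input
# patterns are pulled onto one class representation.
def settle(x, steps=50):
    """Iterate the map until the state settles onto an attractor."""
    for _ in range(steps):
        x = math.tanh(2.0 * x)
    return x

# Any positive start settles onto the same stable value (~0.9576), and any
# negative start onto its mirror image; the two basins of attraction act
# like two "classes" of input.
print(settle(0.5), settle(0.6))   # both land on the same attractor
print(settle(-0.5))               # the other attractor
```

The eroded-path analogy would correspond to learning reshaping which attractors exist; here the map is fixed and only the attraction behavior itself is shown.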
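The dimensional-spreading point can be sketched with the classic XOR problem: four 2-D inputs whose two classes no linear readout can separate become separable after a random non-linear expansion into many dimensions. The expansion below is a generic stand-in for a reservoir's higher-dimensional state, with invented sizes and weight scales:

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR: four 2-D inputs whose classes are not linearly separable.
U = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

# Hypothetical expansion into 100 tanh units with random weights and
# biases -- a stand-in for a reservoir's richer representation.
W_in = rng.normal(0.0, 2.0, (100, 2))
b = rng.normal(0.0, 1.0, 100)
X = np.tanh(U @ W_in.T + b)

# A plain linear least-squares readout on the expanded states now
# separates the two classes.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = (X @ w > 0.5).astype(float)
print(pred)  # matches y
```

The spread-out representation makes the readout's job trivial, which is exactly the "increased effective distance" the bullet describes.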
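As a minimal sketch of the unsupervised, Hebbian-style learning mentioned above, here is Oja's rule, a standard normalised variant of the plain Hebbian update ("fire together, wire together" plus a decay that keeps the weights bounded). The block's actual learning rule is assumed to differ in detail:

```python
import numpy as np

rng = np.random.default_rng(2)

# Random initial weight vector for a single postsynaptic node.
w = rng.normal(size=2)
w /= np.linalg.norm(w)

for t in range(3000):
    # Unlabeled inputs drawn mostly along the (1, 1) direction, plus noise.
    u = rng.normal() * np.array([1.0, 1.0]) + 0.1 * rng.normal(size=2)
    v = w @ u                        # postsynaptic activation
    eta = 1.0 / (t + 20)             # decaying learning rate
    w += eta * v * (u - v * w)       # Hebbian growth with Oja's decay term

w /= np.linalg.norm(w)
print(w)  # close to +/-(0.707, 0.707): the dominant input direction
```

Without any labels, the weight vector drifts toward the dominant direction in the input stream, which is the kind of self-organised pattern detection the bullet asks for.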
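The interplay of sedation and stimulation can be sketched as a toy scalar model of mean reservoir activation. All gains and thresholds below are invented for illustration; only the wiring follows the description above (reservoir activity excites the sedation nodes and inhibits the stimulation nodes, while stimulation carries a tonic positive bias):

```python
def step(r, drive, alpha=0.3):
    """One leaky update of mean reservoir activation r under external drive."""
    clip = lambda v: min(max(v, 0.0), 1.0)
    stim = clip(0.5 - 2.0 * r)    # tonic positive bias, inhibited by r
    sed = clip(1.5 * r - 0.3)     # excited by reservoir activity
    return clip((1.0 - alpha) * r + alpha * (drive + stim - sed))

def settle(drive, steps=100):
    """Run the dynamics to their steady state for a constant drive."""
    r = 0.0
    for _ in range(steps):
        r = step(r, drive)
    return r

# Weak drive is lifted by stimulation; strong drive is damped by sedation.
print(round(settle(0.1), 3))  # 0.2  (bare drive of 0.1 is amplified)
print(round(settle(0.8), 3))  # 0.44 (bare drive of 0.8 is suppressed)
```

The damping term (`alpha`) is an assumption added to keep the toy dynamics from oscillating; the real block's node dynamics would play that role.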