TIGoRS in Complex Systems: An Encoding Mechanism for Situated Cognition


William Sulis

McMaster University, Depts. of Psychiatry, Psychology & Computer Science
21 September 1997


Abstract

There has been considerable interest during the past decade in neurophysiological models of mental representations. These models have frequently suffered from problems of stability, a lack of robustness under external inputs and noise, and a lack of real-time implementation. A more general phenomenon, transient induced global response synchronization (TIGoRS), has been demonstrated in a variety of complex systems models, including cellular automata, cocktail party automata, tempered neural networks and coupled map lattices. In TIGoRS, an external transient stimulus induces a clustering of the resulting output patterns within a small region of the pattern space. In the case of cocktail party automata, such TIGoRS occurs maximally under conditions of asynchronous operation and noisy sampling of the input stimulus, conditions prevalent in natural complex systems. Unlike traditional models, the activation of a TIGoRS-based neural code requires a dynamic interaction between the system and its environment and thus could subserve information processing in situated cognition models. Unlike in cognitivist models, information is implicit rather than explicit.

Introduction

One of the deep issues in psychology and the neurosciences concerns the elucidation of the mechanisms through which a meaningful correspondence is established between the environment and a behaving organism. The simplest understanding was based upon the concept of the reflex arc, that is, the existence of a more or less direct connection between a stimulus and the behavior which it elicits. This simple model quickly gave way to behaviorism, in which the direct reflex connection was replaced with a more complicated chain of intermediary connections. Complex behavioral responses could be generated by varying the number and efficacy of these connections. As behaviorist explanations became increasingly baroque, this model eventually gave way to cognitivism, in which the intermediary was no longer a physical link but instead an abstract mental object termed a representation. While providing a more parsimonious explanation than behaviorism, the idea of representation lacks neurophysiological grounding. It is not at all clear what exactly constitutes a mental representation, nor how this representation is implemented by the brain. Moreover, while the concept of mental representation has proven to be fruitful in the study of those aspects of human behavior which involve the use of language, it has proven to be woefully inadequate for dealing with those aspects of behavior governed predominantly by affect and physical skill, as well as many aspects of perception. Cognitivism posits that behavior is generated through the internal manipulation of these mental representations. An external stimulus activates a collection of these representations, which are then subject to internal transformations, the end result of which is the generation of some behavioral act. These transformations can be studied abstractly, independent of the organism which supports them, and thus cognitivism has given rise to the field of artificial intelligence (M. Posner, 1989).

In recent years attention has turned to the study of nonhuman intelligence, and skill based behavior. This has produced a new paradigm, that of embodied and situated cognition. It has now become apparent that many cognitive processes exhibited by nonhuman animals depend critically upon the physical characteristics of the animal's body, and upon its environment (H. Roitblatt & J. Arcady-Meyer, 1995). It is no longer feasible to view cognition as occurring solely within some abstract realm, internal to the organism, independent of the body which supports it, and independent of the environment. The environment is now seen as an integral component of an animal's cognition. Although strongly suggested by experimental evidence, such a view of cognition raises deep questions about the nature of mind. While cognitivism gives rise to the concept of cognition as an algorithm acting upon mental representations, it is an open problem how to conceptualize the situated cognition view of mind as a nonrepresentational, emergent phenomenon arising out of the interaction between the brain and its environment.

The goal of this paper is to shed some light upon this problem by focusing upon one particular issue in the theory of representations, namely the problem of stability.


Neural Codes

Early theories concerning the neurophysiological basis of mental representations assumed a one-one correspondence between individual cells and environmental objects or events. This `grandmother cell' hypothesis was quickly discarded and replaced by the idea of a cell assembly, and generalizations of this idea persist to the present (A. Aertsen and M. Arndt, 1993; R. Eckhorn et al., 1988; C.M. Gray et al., 1992; R. Pfeiffer, 1966; E. Vaadia et al., 1995). Cell assemblies are presumed to reflect information in their patterns of activity, and individual cells are free to participate in many different cell assemblies. Cell assemblies are dynamic entities which pass information to other cell assemblies by virtue of their neural connectivities. Since neurons appear to be markedly restricted in their behavioral repertoire, limited mostly to the generation of spike trains, and since they possess multiple efferent and afferent connections, it is presumed that the informational content of a neuron's activity is expressed in its pattern of spike trains. Different spike trains correspond to different messages. Support for this notion came from the discovery of neurons having firing rates which appeared to be selectively tuned to specific features of stimuli such as auditory frequency (R. Pfeiffer, 1966) or, in the case of vision, line orientation or direction of motion (D.H. Hubel and T.N. Wiesel, 1968). The neurophysiological basis for mental representations is thus thought to be due to the encoding of information in neural spike trains.

Encoding by means of spike trains is not without problems. Shadlen and Newsome (M.N. Shadlen and W.T. Newsome, 1994) obtained a series of 210 spike train recordings from a single cortical neuron in extrastriate visual cortex in a macaque monkey in response to repeated presentations of an identical pattern of dynamic random dots. The resulting raster diagram reveals a clear pattern of systematic variation of response manifest as an impression of vertical contours throughout the raster. Although the instantaneous firing rates varied reliably over a range of 80-100 Hz, the actual pattern of spike trains generated in any given run provided only a very noisy estimate of the rate pattern. Moreover the timing of individual spikes appeared to be random, having the characteristics of a non-homogeneous Poisson point process (G. Gerstein and B. Mandelbrot, 1964). The only reliable correlate to the stimulus was the pattern of instantaneous rate.

Rate encoding suffers from serious problems related to insufficient bandwidth. A temporal code, in which the precise timings of individual spikes would encode information, would be preferable, since the information carrying capacity would be significantly enhanced. Such an interpretation meets serious opposition unless the apparent noise in the spike trains of individual neurons is reinterpreted as part of the information being carried. This has led to an alternative proposal in which each individual spike train is presumed to arise, not as a result of random variation, but instead as a result of complex interactions between the individual neuron sampled and a host of other neural systems. The underlying dynamics is presumed to exhibit deterministic chaos, so that the variation in spike trains is a result of subtle variations in initial conditions as well as concurrent processes during the presentation of the stimulus. An in vitro experiment has demonstrated the feasibility of this proposal using Poisson spike trains (Z.F. Mainen and T.J. Sejnowski, 1995). Unfortunately, in order for this to hold, it is necessary to presume that information processing takes place at the dendritic level, requiring properties of the dendritic membrane which are not supported by experimental evidence (W. Softky and C. Koch, 1993; W. Softky, 1994).


The Stability Problem

A neural code, regardless of its nature, must meet several other conditions in order to be plausible as an encoding mechanism to be utilized by biological neural systems. Such an encoding must be robust against synaptic noise and random neuronal dropout. Synaptic transmission is a noisy process. The reliability of synaptic transmission appears to be variable and mediated by such factors as LTP and LTD and quantal neurotransmitter release (C.F. Stevens and Y. Wang, 1994). The postsynaptic response induced by a single quantum of neurotransmitter can vary as much as 40-fold (A. Mason et al., 1991). Neurons die at a steady rate throughout life, yet little impact is noted at the behavioral level until such neuronal loss is well advanced, whether through advanced age or illness.

It is reasonable to assume that there should be a stable correspondence between the encoding of a stimulus and the stimulus itself. This appears to be the case with rate encoding, but in the case of temporal encoding one must generalize the `stimulus' to include the totality of the external and internal environments of the organism, and not just the stimulus applied by the experimenter. Consideration must be paid to how neural systems further downstream can read or interpret the encodings of neurons in primary sensory areas. Consideration must also be paid to explaining how prior encodings are to be preserved in the face of subsequent learning. This problem is further compounded by the fact that functional rewiring can take place even in the absence of learning. Studies of the stomatogastric ganglion of the lobster have demonstrated that the dynamic properties of the ganglion change as a result of changes in the neurophysical and neurohumoral environment. This results in different neurophysiological properties for the component neurons and radically different activation patterns for the assembly as a whole. Thus the response of a neuron may depend not only upon the external environment but also upon its internal environment.

Even if the internal environment appears to be relatively stable, neuronal responses show considerable metastability. Place cells (J. O'Keefe and J. Dostrovsky, 1971) are neurons located in the CA1 region of the rat hippocampus which show tuning of their firing rates in response to the animal being located in specific locations in the physical environment which it inhabits. Using permanently implanted electrodes, McNaughton et al. (B.L. McNaughton et al., 1996) have been able to track the neural response patterns of a behaving animal over several days. In a radial maze, Jung and McNaughton (M.W. Jung and B.L. McNaughton, 1993) observed that the place fields would sometimes appear rotated relative to a previously stable orientation. Quirk (G.J. Quirk et al., 1990) noted that rats introduced into an environment in darkness sometimes exhibited a new distribution of place fields, completely different from that observed during previous sessions in the same environment under illumination. These new fields often persisted after the lights were turned on. Bostock (E. Bostock et al., 1991) studied rats trained in a cylindrical environment in which a cue card was used to break the rotational symmetry. When the cue card was changed from white to black, the place fields underwent a random rotation on the first trial, and then were completely different on subsequent trials. An even more striking example (B.L. McNaughton et al., 1996) involved animals trained on a particular track for more than one week, who were then exposed to the familiar environment, then exposed for 1 hour to an unfamiliar environment, and then returned to the original environment. In many cases the place fields upon return were completely rearranged.

Thus even if the spike trains of individual neurons can be understood as an encoding of information which is being manipulated by this system, whether through rate or some other representation, it is still not possible to understand what information is being represented by that code without a knowledge of the immediate past and present context in which the animal is situated. This provides a deep problem for the animal, since it is not at all clear how downstream systems are to know what meaning to attach to the output of such a neuron. Although neurophysiological evidence is lacking, it is conceivable that the metastability of the hippocampal place cell system extends to other cortical systems as well, including memory. In that case, memory itself would be a context dependent, metastable process. Memories would not be stored in an invariant form but would be created on the spot given a specific context. That this is likely has been suggested by research into the nature of autobiographical memory (U. Neisser, 1987). This in turn poses a serious challenge for those models of memory which posit a representation of a memory in terms of some static property of the system. Thus a grounding of mental representations at the neurophysiological level appears to be a rather dubious proposition. Instead, the evidence suggests that we should look to a higher ontological level on which to ground the concept of representation.

One possibility is that the proper setting in which to study neural representation and coding is, paradoxically, that of the whole animal-environment context. It is only at this level that meaning can be attached to neural events. It is not a priori necessary that the metaphor of representation at the animal-environment (mental) level be extended to the neuronal level. Any neural process which facilitates an adaptive relationship between the animal and its environment warrants consideration, regardless of its mode of implementation, be it computational or otherwise. Considerations of this sort lead to the concept of computational competence.

In several papers (W. Sulis, 1993, 1995) I have argued that in order to understand the nature of computation in complex adaptive dynamical systems such as the brain, it is first necessary to make a distinction between computational competence and computational performance. Computational competence refers to the ability or capability of the system under scrutiny to carry out the computation under consideration. In order to address this question it is first necessary to specify, in advance: the environmental context in which the computation is to take place; the specific features of the environment which will be manipulated by the observer and which comprise the initial data, as it were; the behaviours to be exhibited by the animal which indicate that the required computation did indeed take place; and the duration within which this task is to be performed. Since it is never possible to fully constrain an environment, and no animal is ever so constrained that it can exactly reproduce a given behavioral sequence, it is necessary to specify the behavioral constraints on the environment and the animal separately, as sets of behaviours, each behaviour having a predetermined duration. The experiment then consists of specifying some set of initial conditions and the environmental constraints to be induced by the experimenter, allowing the animal-environment system to evolve over the allotted time, and finally observing whether or not the subsequent behaviours of both the environment and the animal lie within the previously assigned sets. If they do, then it is possible to assert that the requisite computation took place. If not, not. Computational competence is a form of local stability of response. The set of initial conditions under which the response to a specific stimulus is stable corresponds to a particular cognitive set. A search for computational competence is thus founded upon the presence of at least local stability in the responses to a given stimulus.
This question leads naturally to a consideration of the stability properties of complex systems under non-autonomous conditions. Moreover, this question is naturally framed in nondeterministic and probabilistic terms.


History of Investigations into TIGoRS

Concerns about the stability of spike trains in models of artificial neural networks led to the discovery of transient induced global response synchronization (TIGoRS) (W. Sulis, 1992, 1993). TIGoRS occurs when a transient stimulus applied to a dynamical system produces a set of responses which cluster closely in pattern space. Assume that a suitable metric r has been imposed on the transient language. In order to avoid an erroneous attribution of clustering to TIGoRS when it is actually due to mere statistical coincidence, we say that a stimulus h induces TIGoRS if, given any two initial histories y, y', it follows that:

r(y_h, y'_h) < (1/2) r(rand(y_h), rand(y'_h))

where y_h and y'_h denote the responses to h following the histories y and y', and rand(y_h) and rand(y'_h) are randomly generated patterns of the same norm as y_h and y'_h respectively.
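Since the metric and pattern representation are not pinned down in the text, the criterion above can be sketched under some explicit assumptions: binary spatiotemporal patterns flattened into bit lists, Hamming distance for the metric r, and "norm" taken as the number of active cells. All function names here are hypothetical, not the original simulation code.

```python
import random

def hamming(a, b):
    """Hamming distance between two equal-length binary patterns."""
    return sum(x != y for x, y in zip(a, b))

def random_same_norm(pattern):
    """Random binary pattern with the same norm (number of 1s) as `pattern`."""
    ones = sum(pattern)
    p = [1] * ones + [0] * (len(pattern) - ones)
    random.shuffle(p)
    return p

def induces_tigors(response_a, response_b, trials=100):
    """TIGoRS test: the two stimulus-induced responses must lie closer
    together than half the mean distance between random patterns of the
    same norms (the baseline is estimated by Monte Carlo sampling)."""
    observed = hamming(response_a, response_b)
    baseline = sum(
        hamming(random_same_norm(response_a), random_same_norm(response_b))
        for _ in range(trials)
    ) / trials
    return observed < 0.5 * baseline
```

Two nearly identical responses pass the test, while two independent random patterns of the same norm fail it, which is the intended guard against statistical coincidence.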

TIGoRS has been demonstrated in tempered neural networks, coupled map lattices and in cellular automata. Simulations carried out using homogeneous, local, 2 state, 3 neighbour cellular automata demonstrated that fixed random patterns induced TIGoRS as a function of the input rate and the symmetry class of the receiving automaton. The symmetry class reflected the dominant symmetry present in the autonomous patterns produced by the automaton. The classes are uniform, linear, complex and chaotic (W.Sulis, 1995) and were shown to be distinct from Wolfram's now classical classification scheme.
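The homogeneous, local, 2 state, 3 neighbour automata referred to above are elementary cellular automata. As an illustrative sketch (assuming periodic boundaries and Wolfram's standard rule numbering, where bit k of the rule number gives the output for neighbourhood value k), one synchronous update step is:

```python
def eca_step(state, rule):
    """One synchronous update of an elementary (2-state, 3-neighbour)
    cellular automaton with periodic boundaries. `rule` is the Wolfram
    rule number; bit k gives the new state for neighbourhood value k,
    where the neighbourhood value is left*4 + centre*2 + right."""
    n = len(state)
    return [
        (rule >> (state[(i - 1) % n] * 4 + state[i] * 2 + state[(i + 1) % n])) & 1
        for i in range(n)
    ]
```

For example, rule 90 applied to a single active cell produces the familiar alternating pair, the first step of the Sierpinski-like pattern.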

In order to address issues of synchronization, adaptation and stochasticity, a more detailed study was conducted using the cocktail party automaton (W.Sulis, 1995). This is an adaptive cellular automaton which can be controlled for the degree of adaptive response, inhomogeneity and asynchrony. Each cell is provided with both a state and a rule. Updating can be done either synchronously, or via a fixed asynchronous scheme, or via a stochastic asynchronous scheme. The state of the cell is first updated, then any input is applied to the cell according to the particular input mode. The rule of the cell is then updated. The rule can remain fixed or be updated according to the following adaptive scheme. Whenever the state of a cell is updated, a comparison is made between the response of the cell and that of all other cells possessing the same local neighborhood state configuration. The difference in the number of cells disagreeing and agreeing is calculated and the cell modifies its transition table entry to the opposite value if this difference exceeds a predetermined, individualized, fixed threshold. The cycle is then repeated.
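The adaptive rule-update scheme described above can be sketched as follows. This is an illustrative reading only, assuming a one-dimensional ring of cells, neighbourhoods encoded as 3-bit indices, and per-cell transition tables stored as dicts; the original implementation details are not specified in the text.

```python
def adapt_rules(states, prev_states, rules, thresholds):
    """Sketch of the cocktail party automaton's adaptive scheme: each cell
    compares its response with every other cell that saw the same local
    neighbourhood configuration, and flips its own transition-table entry
    for that neighbourhood if disagreements exceed agreements by more than
    its individual, fixed threshold. `rules[i]` maps neighbourhood -> bit."""
    n = len(states)
    nbhd = [
        prev_states[(i - 1) % n] * 4 + prev_states[i] * 2 + prev_states[(i + 1) % n]
        for i in range(n)
    ]
    for i in range(n):
        agree = disagree = 0
        for j in range(n):
            if j != i and nbhd[j] == nbhd[i]:
                if states[j] == states[i]:
                    agree += 1
                else:
                    disagree += 1
        if disagree - agree > thresholds[i]:
            rules[i][nbhd[i]] ^= 1  # flip this cell's table entry
```

A lone dissenting cell thus gets pulled toward the majority response for its neighbourhood class, while cells already in the majority leave their tables unchanged.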

Each input to the host automaton was a complex spatiotemporal pattern derived from the output of a second, input automaton having an identical lattice structure as the host. A fixed correspondence was established between the input and host cells. This provided a mapping between the output pattern of the input automaton and the cells of the host. The input pattern thus consisted of an array of state values, indexed by cell and by time. At time n for the host automaton, the row of the pattern corresponding to time n was sampled cell by cell and applied, according to the input mode, to the corresponding cell of the host automaton. The input automaton was chosen at random using varying combinations: homogeneous/inhomogeneous, linear/complex/chaotic rules, synchronous/asynchronous, fixed/adaptive.

Two input modes have been studied in detail. In the inferential mode, a single complete output from the input automaton was stored as the pattern. This was then applied to the host automaton as follows: the pattern was sampled but this time the cells of the pattern which were to be input to the host automaton were chosen randomly, at a fixed rate. Given a chosen pattern cell, the corresponding cell in the host had its state changed to match the pattern cell only if the pattern value was 1. Each presentation of the stimulus thus varied between trials, constituting a distinct random sampling of the original input pattern. A matching of responses under these conditions reflected the ability of the automaton to respond to the overall structure of the pattern which must be inferred from the random samples presented.

In the recognition input mode, a single complete output from the input automaton was stored as the pattern. This was applied to the host as follows: the pattern was sampled at a fixed rate. This time, given a chosen pattern cell, the corresponding host cell had its state changed to match that of the pattern cell, regardless of value. Each presentation of the stimulus again varied between trials, constituting a random sampling of the original input. The automaton was studied for its ability to match its response to the pattern. Thus its capacity for pattern completion was studied. This provides one means of pattern recognition, hence the choice of the descriptor: recognition mode.
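The two input modes can be contrasted in a single hypothetical routine (the names and interface are assumptions for illustration, not the original code). Recognition mode overwrites the sampled host cell with the pattern value regardless of value; inferential mode copies only sampled 1s, so the stimulus acts purely as activation.

```python
import random

def apply_input(host_state, pattern_row, rate, mode, rng=random):
    """Apply one row of the stored stimulus pattern to the host automaton,
    sampling pattern cells randomly at the given rate.
    'recognition': host cell is set to the sampled pattern value.
    'inferential': host cell is set only when the sampled value is 1."""
    for i, v in enumerate(pattern_row):
        if rng.random() < rate:
            if mode == "recognition":
                host_state[i] = v
            elif mode == "inferential" and v == 1:
                host_state[i] = 1
```

A sampling rate of 1.0 makes the behaviour deterministic: recognition mode reproduces the pattern row, while inferential mode can only add activations to the existing host state.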

In (W. Sulis, 1995) it was demonstrated that for inputs presented under recognition mode, TIGoRS occurred best when the automaton was fully adaptive and stochastically asynchronous. Thus TIGoRS occurred maximally under the same conditions which are observed in living neural systems. The patterns generating the greatest degree of TIGoRS were either uniform or fixed point patterns, quite distinct from the autonomous patterns, which generally consisted either of domains of complex patterns separated by narrow walls of fixed point behavior, or of homogeneous regions which appeared to wander randomly over the field. The introduction of fixed elements had little effect unless there was 100% synchronization in updating. In that case as little as 25% seeding allowed the automaton to recognize patterns from the same symmetry class as the fixed elements. Such recognition was rare in the absence of such seeding. In addition it is important to note that the configuration of rules in the lattice changed at each time step, and differed for each run, in spite of the fact that the resulting spatiotemporal state patterns remained close in Hamming distance. Thus there was a disconnection between the rule and state levels, with stable behavior occurring at the state level in spite of nonstationary behavior occurring at the rule level. Thus we had found a mechanism for pattern retrieval which occurred at a global, pattern, or "mental" level, instead of a local, state, or "neurophysiological" level.


TIGoRS as Code

Although the previous study demonstrated the existence of globally stable patterns of response between certain stimulus patterns and the cocktail party automaton, the use of the recognition mode for inputs limited its validity significantly. In natural neural systems, the sensory apparatus responds entirely to stimulation, so that the inputs to the system should correspond solely to activations, and not inhibitions. Thus the inferential mode provides a much more stringent test condition. Moreover it was not clear from the recognition studies whether it was possible to produce differentiated TIGoRS responses to an input, since all of the responses would cluster around the input pattern. It is clear that a neural code or representation system based solely upon veridical images of external objects does not exist within the brain. Thus although the recognition studies demonstrated the existence of a nonrepresentational, generative, context dependent associative memory, they did not provide convincing evidence that TIGoRS could be used to support a neural code.

Thus a new set of studies was carried out using the cocktail party automaton with inputs presented under inferential mode.

Eight patterns forming a continuum of decreasing symmetry were considered. These were obtained from the 2 state, 3 neighbour rules 96, 140, 123, 24, 26, 106, 22, and 45, and run the gamut from uniform through fixed point to pseudorandom. These same rules were used to provide a seeding of nonadaptive cells in an attempt to introduce diversity into the responses.

The cocktail party automaton was simulated with seedings of 0%, 50% and 100%, an input rate of 10%, and synchronization rates of 0%, 50%, and 100%. For each combination of seeding class and pattern class, 10 runs were simulated and the mean Hamming distances between responses and between the responses and the input patterns were determined. Figure 1 shows the percentages of runs having a mean Hamming distance less than 15 (response-response/ response-pattern) indicating the presence of TIGoRS.
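The clustering measure used here, the mean pairwise Hamming distance over repeated runs compared against the threshold of 15, can be sketched as follows (function names are hypothetical):

```python
from itertools import combinations

def hamming(a, b):
    """Hamming distance between two equal-length binary patterns."""
    return sum(x != y for x, y in zip(a, b))

def mean_pairwise_hamming(responses):
    """Mean Hamming distance over all pairs of responses from repeated runs
    with the same stimulus."""
    pairs = list(combinations(responses, 2))
    return sum(hamming(a, b) for a, b in pairs) / len(pairs)

def tigors_fraction(run_sets, threshold=15):
    """Fraction of run sets whose mean response-response Hamming distance
    falls below the threshold, the clustering taken here to indicate TIGoRS."""
    hits = sum(1 for rs in run_sets if mean_pairwise_hamming(rs) < threshold)
    return hits / len(run_sets)
```

The same machinery applies to the response-pattern column of the table by measuring each response against the stored input pattern instead of against the other responses.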



 

                         Pattern Class
Rule      0       1      2      3      4      5      6      7
0       60/60    0/10   10/0   40/70  10/10   0/10   0/10   0/0
1       66/0    10/20   10/0    0/10   0/0   20/0    0/0    0/0
2       40/0    10/0    30/0   30/0    0/0   30/0   10/0    0/0
3       07/0    10/0    10/0    0/0   10/0    0/0   10/0    0/0
4       30/0    20/0    20/0   10/0    0/0   20/0   20/0   20/0
5       40/0    20/0    10/0   20/0   20/0   30/0   30/0   20/0
6       60/0    10/0    10/0    0/0   10/0   20/0   10/0    0/0
7       40/0    10/0    30/0   10/0   20/0   20/0   30/0   30/0

Figure 1: Inferential TIGoRS (percentage of runs with mean Hamming distance
less than 15; entries are response-response / response-pattern)

Although less common than in recognition mode, the cocktail party automaton does demonstrate TIGoRS and does so to a much wider range of patterns. The greatest variety of responses occurs under asynchronous updating. Surprisingly, the introduction of fixed elements reduced the occurrence of TIGoRS.

If TIGoRS is to be useful as an encoding mechanism, it seems reasonable that it be possible to introduce some variability into the responses generated by the system. The cocktail party automaton was seeded with a fixed rule and then the above simulations were repeated. Runs were carried out and comparisons were made between cocktail party automata having different seeding rules and synchronization conditions. Seeding was 50%. Table 2 shows the number of runs (maximum 64) in which the Hamming distance between the two responses was greater than 15, indicating an absence of TIGoRS.
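The tabulated comparison, counting run pairs whose responses differ by more than the threshold Hamming distance, can be sketched as follows (a hypothetical reading of the procedure, not the original code):

```python
def mismatch_count(responses_a, responses_b, threshold=15):
    """Number of paired runs whose responses, produced by cocktail party
    automata with different seeding rules or synchronization conditions,
    differ by more than the threshold Hamming distance, i.e. run pairs
    showing an absence of cross-system TIGoRS."""
    return sum(
        1
        for a, b in zip(responses_a, responses_b)
        if sum(x != y for x, y in zip(a, b)) > threshold
    )
```

An entry near the maximum of 64 in the table below thus means that almost every paired run of the two differently seeded systems produced distinguishable responses.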



 
                      Pattern Class
Rule     0     1     2     3     4     5     6     7
0       54    61    56    61    61    63    62    62
1       64    64    50    63    60    56    61    57
2       64    64    56    64    64    60    63    63
3       60    64    59    63    62    60    64    62
4       64    64    60    64    63    62    63    63
5       64    64    58    63    64    63    63    63
6       64    64    60    64    62    61    64    62
7       64    64    60    64    62    64    64    64

Table 2: Comparison Runs (number of runs, maximum 64, with Hamming distance
between responses greater than 15)

The table demonstrates that virtually all runs mismatch, which indicates that seeding a cocktail party automaton with a set of fixed elements induces a differential effect upon the responses induced by an external stimulus under inferential mode. This is exactly what we would like to see in order for TIGoRS to serve as a basis for a neural encoding process. Unfortunately, the occurrence of TIGoRS with a seeding of 50% was relatively infrequent.

Conclusion

TIGoRS provides a mechanism through which information is encoded, not as an invariant property of the agents which comprise the system, but as a dynamical response, triggered by an appropriate context and determined by a dynamic interaction between that context and the system. There is no engram, nor a fixed encoding. Instead the response is dynamically constructed through the interaction with the environment, being made up on the spot as it were. In effect, the system can be viewed as extracting salient information out of the environment in order to construct its response rather than storing the necessary information within its own structure. Indeed no stable structure exists within these automata. At best, any storage of information can be conceptualized in terms of a probability distribution of responses. The response is not a single precise spatiotemporal pattern as would be expected in a temporal coding scheme. Nor is it a coarse rate coding. Instead it is a finely detailed probability distribution within the larger space of response patterns. In a sense one achieves a stable pattern of response in the absence of a neural code, and in the absence of a system of representation.

The cocktail party automaton is still too limited to serve as a model for cognition in living brains. Indeed, the cocktail party automaton possesses homogeneous, local connections, no timing delays in the transmission of internal signals, and nonphysiological rules. A major limitation is the relative lack of diversity in system responses. To some extent, the cocktail party automaton behaves like a single system. In order to introduce diversity into the responses it appears necessary to break much of the dynamical symmetry inherent in the model. One way to do this is to break the homogeneity of the local connections. Another possibility is to introduce timing delays in the transmission of information between elements. Just such characteristics are typical of another class of complex system, the tempered neural network. As mentioned previously, TIGoRS was first demonstrated in tempered neural networks, which are simple cellular-automaton-based neural networks possessing binary threshold neurons, both excitatory and inhibitory, and a random interconnection structure with built-in transmission delays, subjected to random input signals applied to a single neuron. These tempered neural networks produced complex patterns of bursts during external, purely excitatory stimulation. In response to identical stimuli, the networks produced responses which differed solely by a brief initial transient. Following the transient, the responses were identical. Thus the tempered neural networks produced a form of TIGoRS which was even more stringent than that exhibited by our cocktail party automaton. Although further work is clearly needed, I believe that the available evidence from both the cocktail party automata and tempered neural network studies strongly suggests that TIGoRS provides a robust, dynamical foundation for a nonrepresentational, embodied, situated cognition.

Regardless of one's opinion on this point, the major goal of this paper has been achieved, namely, to demonstrate that in complex adaptive systems it is possible to demonstrate the existence of invariant relationships between a system and its environment in the absence of such invariance at the level of the individual agents which comprise the system. Computation at the macro-level need not require computation at the micro-level. Moreover, micro-level processes may be dependent upon context at the macro-level and may not be fully understandable in the absence of such contextual knowledge. Future theories of neural computation will need to take such high level contextual factors into account. Invariance of function cannot be assumed but will require demonstration across a host of differing contexts. As suggested by Cohen and Stewart in The Collapse of Chaos, future theories in science will most likely need to be contextual theories. It is hoped that the present paper will have provided some evidence in favour of such a viewpoint.


References

A. Aertsen and M. Arndt, Curr. Opin. Neurobiol. 3, 586 (1993).
E. Bostock et al., Hippocampus 1, 193 (1991).
J. Cohen and I. Stewart, The Collapse of Chaos (Viking, New York, 1994).
R. Eckhorn et al., Biol. Cybern. 60, 121 (1988).
G. Gerstein and B. Mandelbrot, Biophys. J. 4, 41 (1964).
C.M. Gray et al., Vis. Neurosci. 8, 337 (1992).
D.H. Hubel and T.N. Wiesel, J. Physiol. (London) 195, 215 (1968).
M.W. Jung and B.L. McNaughton, Hippocampus 3, 165 (1993).
Z.F. Mainen and T.J. Sejnowski, Science 268, 1503 (1995).
A. Mason et al., J. Neurosci. 11, 72 (1991).
B.L. McNaughton et al., J. Exp. Biol., in press (1996).
U. Neisser, ed., Remembering Reconsidered (Cambridge University Press, Cambridge, 1987).
J. O'Keefe and J. Dostrovsky, Brain Res. 34, 171 (1971).
R. Pfeiffer, Exp. Brain Res. 1, 220 (1966).
M. Posner, ed., Foundations of Cognitive Science (MIT Press, Cambridge, MA, 1989).
G.J. Quirk et al., J. Neurosci. 10, 2008 (1990).
H. Roitblatt and J. Arcady-Meyer, eds., Comparative Approaches to Cognitive Science (MIT Press, Cambridge, MA, 1995).
M.N. Shadlen and W.T. Newsome, Curr. Opin. Neurobiol. 4, 569 (1994).
W. Softky, Neuroscience 58, 13 (1994).
W. Softky and C. Koch, J. Neurosci. 13, 334 (1993).
C.F. Stevens and Y. Wang, Nature 371, 704 (1994).
W. Sulis, in Proceedings of the International Joint Conference on Neural Networks '92, Vol. III (IEEE Press, Baltimore, 1992).
W. Sulis, in Proceedings of the World Congress on Neural Networks '93, Vol. IV, 452 (Lawrence Erlbaum, New York, 1993).
W. Sulis, World Futures 39, 225 (1994).
W. Sulis, in Advances in Artificial Life, Lecture Notes in Artificial Intelligence 92, eds. F. Moran et al. (Springer-Verlag, New York, 1995).
W. Sulis, in Lectures in Complex Systems, eds. D. Stein and L. Nadel (Addison-Wesley, New York, 1995).
E. Vaadia et al., Nature 373, 515 (1995).

