Thematic Context

Artificial Neural Networks (ANN) are a powerful mathematical method for modeling the parallel and distributed processing (PDP) of information (see e.g. Rumelhart, McClelland et al. 1987). PDP is ubiquitous in biological contexts, particularly in the brains and nervous systems of animals, but is also used in sophisticated technical applications such as fingerprint detection. The architecture of ANN resembles the cellular architecture of brains, in which neurons are highly interconnected: ANN are likewise built from elementary, highly interconnected computational units, which for brevity are called 'neurons' after their biological template. A basic assumption about information processing in the single neurons of such networks is that they receive input signals and transform them into output signals, which are transmitted to the next neuron in the direction of processing. In biological neurons, the processes underlying this input-output behaviour are highly complex. In the artificial neurons of ANN, the input-output behaviour is more abstract: the exact spatiotemporal processing is neglected, and the neuron is usually treated as a single point in space and time. Despite this simplification, the computations performed by an artificial neuron can be manifold and are not always trivial to understand. The functions that determine the input-output behaviour are called 'transfer functions', i.e. activation functions and output functions.
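The input-output behaviour described above can be sketched in a few lines of code. The following is a minimal illustration, not the simulation itself: it assumes a weighted-sum net input and offers a handful of common activation functions (logistic, tanh, binary step, identity) so that the effect of swapping the transfer function can be compared for the same input. All function and parameter names are chosen for this sketch.

```python
import math

def neuron_output(inputs, weights, bias=0.0, activation="logistic"):
    """Compute the output of a single artificial neuron.

    The net input is the weighted sum of the inputs plus a bias;
    the chosen activation function then transforms it into the
    neuron's output (here the output function is the identity).
    """
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    if activation == "logistic":
        return 1.0 / (1.0 + math.exp(-net))   # smooth, bounded in (0, 1)
    if activation == "tanh":
        return math.tanh(net)                 # smooth, bounded in (-1, 1)
    if activation == "step":
        return 1.0 if net >= 0 else 0.0       # binary threshold
    if activation == "identity":
        return net                            # linear, unbounded
    raise ValueError(f"unknown activation: {activation}")

# Same input, different transfer functions, different outputs:
for f in ("identity", "logistic", "tanh", "step"):
    print(f, neuron_output([1.0, -0.5], [0.8, 0.4], bias=0.1, activation=f))
```

The comparison loop at the end mirrors what the simulation lets learners do interactively: holding the input fixed and observing how the choice of transfer function reshapes the output.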

Application Context

The introductory simulation on transfer functions allows learners to test different activation and output functions, with immediate visualization of the effects of each change. An introductory topical text, instructions for use and exercises are also provided. The package is designed for about 90 minutes of use, both for novices in the field of neuroinformatics and for neuroscientists with a basic mathematical background. An additional theoretical introduction to artificial neural networks or to system theory/biological cybernetics is recommended. The recommended previous knowledge, which is not provided in the package, comprises: the history of ANN, neurobiological basics, the general structure of ANN, and the mathematical background (e.g. linear algebra). Such background can be acquired from introductory books on neuroinformatics (Anderson 1995, Arbib 2003, Cruse 1996, Haykin 1999, Dayan and Abbott 2001, Fausett 1994, Rumelhart et al. 1987), from review articles (e.g. Duch and Jankowski 1999), or in a university course on neuroinformatics at the bachelor, master or graduate level, depending on the orientation of studies. The package can be used in different ways:

  • Single-user: Individual deepening of knowledge on transfer functions in artificial neural networks.

  • Talk: Demonstration of transfer functions in scientific talks or lectures.

  • Course: Exercise during a computer-based practical or as homework.

Experiences

The simulation on transfer functions was applied in an introductory lecture on neuroinformatics for engineers to demonstrate the effect of different transfer functions on the input-output behaviour of artificial neurons. In the same context, the simulation was used as an exercise that learners worked through individually, using the simulation provided on an institute server. The solutions were sent to the tutor as free text via email. The exercise replaced a paper worksheet in three repetitions of the same course, one per year, with well over 100 students each year. The experiences indicate that this application scenario works: the majority of the solutions were correct but not uniform. The evaluation of individual solutions remains, however, a labor-intensive venture that could be optimized by automatic routines.

References

Anderson, J.A., 1995. An Introduction to Neural Networks. A Bradford Book, MIT Press.

Arbib, M.A., 2003. Handbook of Brain Theory and Neural Networks. A Bradford Book, MIT Press, 2nd Edition.

Cruse, H., 1996. Neural Networks as Cybernetic Systems. Thieme, Stuttgart.

Dayan, P. and Abbott, L.F., 2001. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. MIT Press.

Duch, W. and Jankowski, N., 1999. Survey of Neural Transfer Functions. Neural Computing Surveys, Vol. 2, pp. 163-212.

Fausett, L., 1994. Fundamentals of Neural Networks. Prentice Hall International.

Haykin, S., 1999. Neural Networks: A Comprehensive Foundation. Prentice Hall, 2nd Edition.

Rumelhart, D., McClelland, J. and The PDP Research Group, 1987. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, (2 Vols.), MIT Press.

Requirements

The supplementary material contains a tutorial in HTML with an embedded simulation (Java applet). The tutorial consists of a theoretical introduction, a short manual, an exercise worksheet and a summarizing supplementary datasheet. The material can be used with a standard internet browser with Java Plug-in 2.0 or higher. The tutorial is provided persistently at the URL named below and can be used directly. Alternatively, it can be downloaded as a content package (ZIP archive) and unpacked for offline use (open the file bmm-debes-suppl-050704.html).

License

Any party may pass on this Work by electronic means and make it available for download under the terms and conditions of the Digital Peer Publishing License. The text of the license may be accessed and retrieved via the Internet at http://www.dipp.nrw.de/lizenzen/dppl/dppl/DPPL_v2_en_06-2004.html