COMP305

Biocomputation

Aims

  1. To introduce students to some of the established work in the field of neural computation.
  2. To highlight some contemporary issues within the domain of neural computation with regard to biologically-motivated computing, particularly in relation to multidisciplinary research.
  3. To equip students with a broad overview of the field of evolutionary computation, placing it in a historical and scientific context.
  4. To emphasise the need to keep up to date in developing areas of science and technology, and to provide some of the skills necessary to achieve this.
  5. To enable students to make reasoned decisions about the engineering of evolutionary ("selectionist") systems.

Syllabus

1 BIOLOGICAL BASICS AND HISTORICAL CONTEXT OF NEURAL COMPUTATION

  • neurones, synapses, action potentials, circuits, the brain, neural computation and computational neuroscience
  • associationism, instructivism, Hebb's rule, the McCulloch-Pitts neuron, the rise of cybernetics and general systems theory (GST), the Macy Conferences, the Perceptron and non-linear separability, dynamical systems, emergent computation, etc. (3 Lectures)
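The McCulloch-Pitts neuron named above can be captured in a few lines. This is a minimal illustrative sketch (the function names and parameter values are ours, not from the lecture notes): the unit fires when the weighted sum of its binary inputs reaches a threshold, which is enough to realise simple logic gates.

```python
# Illustrative McCulloch-Pitts threshold unit (a sketch, not course code):
# output 1 iff the weighted sum of binary inputs reaches the threshold.

def mcp_neuron(inputs, weights, threshold):
    """Fire (return 1) when sum(w_i * x_i) >= threshold, else 0."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Logical AND and OR each fall out of a single unit with suitable parameters.
AND = lambda x1, x2: mcp_neuron((x1, x2), (1, 1), threshold=2)
OR = lambda x1, x2: mcp_neuron((x1, x2), (1, 1), threshold=1)
```

As the Perceptron material goes on to show, no single such unit can compute a non-linearly-separable function such as XOR, whatever the weights and threshold.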

2 THE MULTILAYERED PERCEPTRON

  • contrast with Perceptron. Representation. Feedforward and feedback phases. Sigmoidal functions, activation, generalised delta rule, adaptation and learning, convergence, gradient descent, recent developments (3 Lectures)
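The feedforward phase, sigmoidal activation, and generalised delta rule listed above can be sketched for a tiny 2-2-1 network. This is an illustrative sketch under our own choice of names, initial weights, and learning rate, not the lecture implementation; it performs one gradient-descent step on the squared error for a single training pattern.

```python
import math

# Sketch of a 2-2-1 sigmoidal network and one generalised delta rule
# (backpropagation) step. All weights and names here are illustrative.

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def forward(x, W1, b1, W2, b2):
    """Feedforward phase: hidden activations h, then scalar output y."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
    return h, y

def delta_rule_step(x, t, W1, b1, W2, b2, lr=0.5):
    """One gradient-descent step on E = 0.5 * (t - y)**2."""
    h, y = forward(x, W1, b1, W2, b2)
    dy = (t - y) * y * (1 - y)                    # output delta
    dh = [dy * W2[j] * h[j] * (1 - h[j])          # hidden deltas
          for j in range(len(h))]
    W2 = [W2[j] + lr * dy * h[j] for j in range(len(h))]
    b2 = b2 + lr * dy
    W1 = [[W1[j][i] + lr * dh[j] * x[i] for i in range(len(x))]
          for j in range(len(W1))]
    b1 = [b1[j] + lr * dh[j] for j in range(len(b1))]
    return W1, b1, W2, b2
```

Repeating such steps over a training set is gradient descent on the error surface; convergence and its failure modes (local minima, learning-rate choice) are covered in the lectures.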

4 KOHONEN SELF-ORGANISING MAPS

  • nature of unsupervised learning, clustering and comparisons with statistical methods such as k-means and PCA, Iris data set, competitive learning, the learning algorithm (3 Lectures)
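The competitive learning step of a Kohonen map can be sketched as follows. This is an illustrative one-dimensional map with a rectangular neighbourhood (the function names and learning-rate value are our own assumptions): each input is presented, the best-matching unit is found, and that unit and its neighbours are nudged toward the input.

```python
# Illustrative sketch of one Kohonen SOM update on a 1-D map of units,
# each holding a weight vector. Parameter values are assumptions.

def bmu(weights, x):
    """Index of the best-matching unit: closest weight vector to x."""
    dists = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in weights]
    return dists.index(min(dists))

def som_step(weights, x, lr=0.3, radius=1):
    """Move the winner and its neighbours (within radius) toward x."""
    win = bmu(weights, x)
    for j, w in enumerate(weights):
        if abs(j - win) <= radius:  # rectangular neighbourhood function
            weights[j] = [wi + lr * (xi - wi) for wi, xi in zip(w, x)]
    return weights
```

In practice both the learning rate and the neighbourhood radius are decayed over time, which is what lets the map settle into a topology-preserving clustering comparable to k-means.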

4 THE INTERPRETATION PROBLEM

  • the nature of, and issues arising from, the use of ANNs, including symbol grounding, bootstrapping, and representation; issues in practice (3 Lectures)

5 BIOLOGICALLY-INSPIRED DESIGNS AND COMPUTATIONAL NEUROSCIENCES

  • overview based on Shepherd, Koch et al. (3 Lectures)

6 INTRODUCTION TO EVOLUTIONARY COMPUTATION

  • historical review, describing the selectionist paradigm (3 Lectures)

7 BIOLOGICAL MOTIVATION

  • basic genetics, population dynamics and "fitness" (3 Lectures)

8 THE BASIC STRUCTURE OF THE GENETIC ALGORITHM (3 Lectures)
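The basic structure of a genetic algorithm can be sketched on the classic "OneMax" toy problem (maximise the number of 1-bits). This is an illustrative sketch, not the lecture code: the operator choices (tournament selection, one-point crossover, bit-flip mutation, elitism) and all parameter values are our own assumptions.

```python
import random

# Sketch of a basic genetic algorithm on OneMax. Elitism (carrying the
# best individual forward) means the best fitness never decreases.

def fitness(genome):
    return sum(genome)  # OneMax: count of 1-bits

def tournament(pop, rng, k=3):
    """Pick k individuals at random; the fittest wins."""
    return max(rng.sample(pop, k), key=fitness)

def crossover(a, b, rng):
    """One-point crossover at a random cut."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rng, p=0.02):
    """Flip each bit independently with probability p."""
    return [1 - g if rng.random() < p else g for g in genome]

def evolve(rng, n=30, length=20, generations=40):
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(n)]
    for _ in range(generations):
        elite = max(pop, key=fitness)
        children = [mutate(crossover(tournament(pop, rng),
                                     tournament(pop, rng), rng), rng)
                    for _ in range(n - 1)]
        pop = [elite] + children
    return max(pop, key=fitness)
```

The same loop structure (evaluate, select, recombine, mutate) carries over unchanged to the application case studies; only the genome encoding and fitness function differ.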

9 CASE STUDIES OF APPLICATIONS OF GENETIC ALGORITHMS (3 Lectures)

10 WHY DO GENETIC ALGORITHMS WORK?

  • The Schema Theorem ("Building Block Hypothesis") (2 Lectures)
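In Holland's standard formulation (the notation here follows Goldberg's textbook listed below), the Schema Theorem bounds the expected number of instances of a schema H in the next generation:

```latex
E\big[m(H, t+1)\big] \;\ge\;
m(H, t)\,\frac{f(H, t)}{\bar{f}(t)}
\left[\, 1 \;-\; p_c\,\frac{\delta(H)}{\ell - 1} \;-\; o(H)\,p_m \,\right]
```

where m(H, t) is the number of instances of H at generation t, f(H, t) their mean fitness, f̄(t) the population mean fitness, p_c and p_m the crossover and mutation probabilities, δ(H) the defining length, o(H) the order of the schema, and ℓ the string length. Short, low-order schemata of above-average fitness therefore receive exponentially increasing trials, which is the basis of the Building Block Hypothesis.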

11 OTHER EVOLUTIONARY METHODS

  • Genetic Programming, Classifier Systems, Evolutionary Strategies (1 Lecture)



Recommended Texts

Part I: Artificial Neural Networks.
  1. William W. Lytton. From Computer to Brain: Foundations of Computational Neuroscience. 2002. Springer-Verlag, New York.
  2. Judith Dayhoff. Neural Network Architectures. 1990. Van Nostrand Reinhold, New York.
  3. Handbook of Neural Computation. Ed. E. Fiesler and R. Beale. 1996. Oxford University Press.
  4. Raul Rojas. Neural Networks: A Systematic Introduction. 1996. Springer-Verlag, Berlin Heidelberg.
  5. Simon Haykin. Neural Networks: A Comprehensive Foundation. Second Edition. 1999. Prentice-Hall, Inc.
  6. Jacek M. Zurada. Introduction to Artificial Neural Systems. 1992. West Publishing Company.
  7. K. Mehrotra, C. K. Mohan, and S. Ranka. Elements of Artificial Neural Networks. 1997. The MIT Press.
  8. Peter Dayan and L. F. Abbott. Theoretical Neuroscience: Computational and Mathematical Modelling of Neural Systems. 2001. The MIT Press.
Part II: Genetic Algorithms.
  1. Handbook of Evolutionary Computation. Ed. T. Bäck, D. B. Fogel, and Z. Michalewicz. 1997. Oxford University Press.
  2. David E. Goldberg. Genetic Algorithms in Search, Optimisation, and Machine Learning. 1989. Addison Wesley Longman, Inc.
  3. Melanie Mitchell. An Introduction to Genetic Algorithms. Fifth Printing, 1999. A Bradford Book, The MIT Press, Cambridge, Massachusetts.
  4. Handbook of Genetic Algorithms. Ed. Lawrence Davis. 1991. Van Nostrand Reinhold, New York.
  5. Thomas Bäck. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms. 1996. Oxford University Press, New York.

Learning Outcomes

By the end of the module students will be expected to:

  • Account for biological and historical developments in neural computation
  • Describe the nature and operation of MLP and SOM networks and when they are used
  • Assess the appropriate applications and limitations of ANNs
  • Apply their knowledge to some emerging research issues in the field
  • Understand how selectionist systems work in general terms and with respect to specific examples
  • Apply the general principles of selectionist systems to the solution of a number of real world problems
  • Understand the advantages and limitations of selectionist approaches and have a considered view on how such systems could be designed

Learning Strategy

  • Didactic component - the core of the teaching is lecture-based with Q&A and feedback
  • Self-learning component - students are encouraged to read around the subject materials
  • Comprehension/review exercise - two pieces of coursework, one on Artificial Neural Networks and one on Genetic Algorithms, following supervised discussion and Q&A sessions in the seminars
  • Case studies will be supplied to help students place the course material in context