Scheme Neural Network

Two months ago I wanted to experiment with artificial neural networks in Scheme. I've written one in C++ before, but I wanted a pure Scheme implementation. So I wrote one. Grab it here:

git clone git://github.com/skeeto/Scheme-Neural-Network.git

It fakes OOP with closures and a dumb message-passing scheme so that I could treat individual neurons like objects. The neurons are wired up to push requests along backwards, so from the implementation's point of view most of the work lands on the output neurons.
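Here's a minimal sketch of that closure-as-object style, assuming a logistic activation; the names are illustrative and don't match the repository's actual interface.

;; Sketch of a closure-based "neuron" object with message passing.
;; These names are illustrative, not the repository's real interface.
(define (make-neuron weights)
  (define (output inputs)                  ; weighted sum + logistic
    (let ((sum (apply + (map * weights inputs))))
      (/ 1 (+ 1 (exp (- sum))))))
  (lambda (msg . args)
    (case msg
      ((output)       (output (car args)))
      ((get-weights)  weights)
      ((set-weights!) (set! weights (car args)))
      (else (error "neuron: unknown message" msg)))))

(define n (make-neuron '(0.5 -0.25)))
(n 'output '(1 1))            ; fire the neuron on two inputs
(n 'set-weights! '(0.1 0.2))  ; mutate its state through a message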

If I found an application for it I thought it would be neat to save off the network weights, thus storing the "brain", as an s-expression that could be dropped directly into the source code (something Lisp does very nicely). Even better, if it were a programming competition I could obfuscate the neural network implementation a bit, so that no one knew it was a neural network, and have a mysterious, opaque lump of mixed code and data.
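A sketch of the idea, where ann-weights is a hypothetical accessor for the network's weight structure, not a real function from the repository:

;; The standard reader/printer does all the work: write dumps the
;; weights as an s-expression, and read (or pasting the output into
;; the source as a quoted literal) brings the "brain" back.
(with-output-to-file "brain.scm"
  (lambda () (write (ann-weights ann))))

(define brain (with-input-from-file "brain.scm" read))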

Using it is really simple: once the neural network functions are loaded, creating an arbitrary new network is as easy as the following. This creates a network with 4 input neurons, a hidden layer of 5 neurons, another hidden layer of 6 neurons, and an output layer of 3 neurons.

(new-ann '(4 5 6 3))  ; input 4 bits, output 3 bits

Then the functions train-ann and run-ann, which take lists of bits as arguments, are used to train and run the neural network. Included in the repository are two examples showing this in action, using the provided helper functions int->blist (integer to bit list) and blist->int (bit list to integer).
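A training loop might look something like this; the argument order of train-ann and the arity of int->blist shown here are my assumptions, so check the repository's examples for the real calling convention.

(define ann (new-ann '(4 5 6 3)))

;; Hammer one input/output pair into the network. Argument order
;; is assumed; see the repository's included examples.
(do ((i 0 (+ i 1)))
    ((= i 1000))
  (train-ann ann (int->blist 9 4) (int->blist 5 3)))

;; Run the trained network and decode the result.
(blist->int (run-ann ann (int->blist 9 4)))  ; ideally => 5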

Like most of my neural network experiments, this one is really disappointing. It doesn't seem to work well outside of trivial or contrived situations, and it quickly slows down after adding only a couple more layers. It might just have to do with the type of network I'm using; perhaps I should look into more advanced and complex neurons.
