Monday, 15 June 2009

The General Problem Solver

Paradigms of Artificial Intelligence Programming has been sitting on my desk for a few months since I worked through the Eliza chapter, so time for me to brush it off and (finally) work my way through it.

The General Problem Solver is an early example of computer AI. Herbert Simon gave a speech at the 12th National Meeting of the Operations Research Society in which he stated:

It is not my aim to surprise or shock you - if indeed that were possible in an age of nuclear fission and prospective interplanetary travel. But the simplest way I can summarize the situation is to say that there are now in the world machines that think, that learn, and that create. Moreover, their ability to do these things is going to increase rapidly until - in a visible future - the range of problems they can handle will be coextensive with the range to which the human mind has been applied. (Heuristic Problem Solving: The Next Advance in Operations Research [PDF])


That's a pretty big claim that proves (unsurprisingly) to be slightly wide of the mark!

GPS solves problems that are described in terms of the following information:


States and goals are represented using symbols. We represent the initial state of the world as a set of symbols, for example #{'bored 'at-home}, indicating that we are currently bored and at home. The goal condition might be #{'happy 'inebriated}. Given a series of operators, GPS will attempt to find a way from one to the other.

An operator alters the current state. An operator consists of an action (a friendly name for what is taking place), a set of preconditions, and add/delete lists which affect the current state of the world. For example, the operator drink might be defined as {:action 'drink :preconditions #{'have-money} :add-list #{'drunk} :delete-list #{'have-money}}.
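As a sketch of what applying an operator means (apply-operator is a hypothetical helper; the real apply-op below also checks preconditions and wraps the update in a transaction), the add/delete lists are just set arithmetic:

```clojure
(use 'clojure.set)

;; Hypothetical helper: the new state is (state - delete-list) + add-list
(defn apply-operator
  [state op]
  (union (difference state (:delete-list op)) (:add-list op)))

(apply-operator '#{bored at-home have-money}
                {:action 'drink
                 :preconditions '#{have-money}
                 :add-list '#{drunk}
                 :delete-list '#{have-money}})
;; => #{bored at-home drunk} (set order is arbitrary)
```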

The complete code is shown below. Note that the program has mutable state, and that refs are used to hold it, with updates happening inside a transaction. Clojure forces you to be this explicit about mutation, and that is one of the key differences between Lisp and Clojure.

One thing that caught me out was contains?, which checks for the presence of a KEY (or index), not a value. Hence the contains-value? definition.
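For example, at the REPL:

```clojure
(contains? [:a :b :c] 1)          ;; => true  (there is an element at index 1)
(contains? [:a :b :c] :a)         ;; => false (:a is not a valid index)
(some (partial = :a) [:a :b :c])  ;; => :a, which is truthy
```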


;;; Implementation of a simplified General Problem Solver
;;; http://en.wikipedia.org/wiki/General_Problem_Solver
;;; Based on code in "Paradigms of Artificial Intelligence Programming"
(ns uk.co.fatvat.gps
  (:use [clojure.set]))

(defstruct operation :action :preconditions :add-list :delete-list)

(defn make-op
  [action preconditions add-list delete-list]
  (struct operation action preconditions add-list delete-list))

(def *state* (ref nil))

(def *ops* (ref nil))

(declare achieve) ;; apply-op and achieve are mutually recursive

(defn contains-value?
  [coll val]
  (not (nil? (some (partial = val) coll))))

(defn appropriate?
  "An op is appropriate to a goal if the goal is in its add list"
  [goal operation]
  (contains-value? (:add-list operation) goal))

(defn apply-op
  "Print a message and update state if op is applicable"
  [op]
  (when (every? achieve (:preconditions op))
    (println "Executing: " (:action op))
    (dosync
     (alter *state*
            (fn [s]
              (union
               (difference s (:delete-list op))
               (:add-list op)))))))

(defn achieve
  "A goal is achieved if it already holds, or if there is an appropriate
   operation for it that is applicable"
  [goal]
  (or (contains-value? @*state* goal)
      (some apply-op (filter (fn [x] (appropriate? goal x)) @*ops*))))

(defn gps
  "General Problem Solver: achieve all goals using the operations available."
  [state goals ops]
  (when (every? achieve goals) 'solved))


So having written all this code, how do we use it? All we have to do is define the problem in terms that GPS can understand. The example below shows a common problem: I'm at home with money and Facebook, but I'd much rather be at a club dancing the night away. However, I need to get to the club, and before I can dance I really need a drink.



(def example-operations
  [(make-op 'message-friends
            #{'have-facebook}
            #{'friends-emailed}
            #{'at-home})

   (make-op 'arrange-party
            #{'friends-emailed}
            #{'party-arranged}
            #{})

   (make-op 'get-on-bus
            #{'party-arranged}
            #{'at-club}
            #{})

   (make-op 'drink
            #{'have-drink}
            #{'had-drink}
            #{'have-drink})

   (make-op 'dance
            #{'had-drink}
            #{'dancing}
            #{})

   (make-op 'give-bar-money
            #{'have-money 'at-club}
            #{'bar-has-money 'have-drink}
            #{'have-money})])

(defn example []
  (dosync
   (ref-set *state* #{'at-home 'have-money 'have-facebook})
   (ref-set *ops* example-operations))
  (gps @*state*
       #{'dancing 'at-club}
       example-operations))


Running this at the REPL shows me the way to go:


uk.co.fatvat.gps> (example)
Executing: message-friends
Executing: arrange-party
Executing: get-on-bus
Executing: give-bar-money
Executing: drink
Executing: dance
solved


This isn't very general, but we'll save that till next time!



Sunday, 14 June 2009

The Shunting Yard Algorithm (2)

In my previous post, I looked at the Shunting Yard algorithm and implemented a *really* bad version which required you to escape parentheses. Not very Lispy!

The code below makes a very small change which converts nested expressions (via lists) into RPN.



(defn shunting-yard
  ([expr]
     (shunting-yard expr []))
  ([expr stack]
     (if (empty? expr)
       stack
       (let [token (first expr)
             remainder (rest expr)]
         (cond
           (number? token) (lazy-seq
                            (cons token (shunting-yard remainder stack)))

           (operator? token) (if (operator? (first stack))
                               (let [stackfn (partial op-compare token)]
                                 (lazy-seq
                                  (concat (take-while stackfn stack)
                                          (shunting-yard remainder
                                                         (cons token
                                                               (drop-while stackfn stack))))))
                               (shunting-yard remainder (cons token stack)))

           (sequential? token) (lazy-seq
                                (concat (shunting-yard token)
                                        (shunting-yard remainder stack)))

           :else (assert false))))))



This is much nicer: as the tests below show, you can now write expressions in a more natural way.



(deftest test-shuntingyard

  ;; Basic operators defined
  (is (= (pretty-print (shunting-yard [100 + 200])) '(100 200 +)))
  (is (= (pretty-print (shunting-yard [100 * 200])) '(100 200 *)))
  (is (= (pretty-print (shunting-yard [100 / 200])) '(100 200 /)))
  (is (= (pretty-print (shunting-yard [100 - 200])) '(100 200 -)))

  ;; Test some precedence rules
  (is (= (pretty-print (shunting-yard [100 * 200 + 300])) '(100 200 * 300 +)))
  (is (= (pretty-print (shunting-yard [100 + 200 * 300])) '(100 200 300 * +)))

  ;; Redundant parentheses
  (is (= (pretty-print (shunting-yard [[[[1 + 1]]]])) '(1 1 +)))
  (is (= (pretty-print (shunting-yard [1 + [1]])) '(1 1 +)))
  (is (= (pretty-print (shunting-yard [[1] + [1]])) '(1 1 +)))

  ;; Bracketing of expressions
  (is (= (pretty-print (shunting-yard [[1 + 2 + 3] + 4])) '(1 2 + 3 + 4 +)))
  (is (= (pretty-print (shunting-yard [[1 + 2 * 3] + 4])) '(1 2 3 * + 4 +)))
  (is (= (pretty-print (shunting-yard [[4 + 5] / [7 - 8]])) '(4 5 + 7 8 - /))))


I was thinking (as was Mark) that I could write a macro in order to simplify writing expressions. The idea would be to write something to convert infix to prefix (rather than postfix as here).
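A sketch of what such an infix-to-prefix macro could look like (hypothetical; it only handles binary expressions and punts on precedence, so grouping has to be explicit via nested vectors):

```clojure
;; Hypothetical infix->prefix macro: [a op b] becomes (op a b),
;; recursing into nested vectors. No precedence handling.
(defmacro infix*
  [[a op b]]
  `(~op ~(if (vector? a) `(infix* ~a) a)
        ~(if (vector? b) `(infix* ~b) b)))

(infix* [1 + [2 * 3]]) ;; => 7
```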

On a slight tangent, in other Lisps, you can write a reader macro which would allow you to annotate an expression with a symbol that your reader would use to convert into the appropriate form. Clojure doesn't allow reader macros (there's a discussion here [see around 19:00] as to the reasons why).

Before we write this as a macro, we'll first need an evaluator loop. This is dead simple and just involves (again) a stack. This is a very basic version that assumes all operators have arity 2.



(defn rpn-evaluator
  ([expr]
     (rpn-evaluator expr []))
  ([expr stack]
     (if (empty? expr)
       (if (= 1 (count stack))
         (first stack)
         (assert false)) ;; unbalanced expression
       (let [f (first expr)]
         (cond
           (number? f) (recur (rest expr) (cons f stack))

           ;; Should look up the arity of the operator!
           (operator? f) (recur (rest expr)
                                (cons
                                 (f (second stack) (first stack))
                                 (drop 2 stack))))))))
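The same stack discipline can be checked with a self-contained sketch (here operator? is approximated by a set of the four arithmetic functions, rather than the lookup used in the post):

```clojure
(def operator? #{+ - * /})

;; Minimal RPN evaluator sketch: numbers are pushed, operators pop
;; two values, apply, and push the result. Assumes arity 2 throughout.
(defn rpn-eval
  [expr]
  (first
   (reduce (fn [stack f]
             (if (operator? f)
               (cons (f (second stack) (first stack)) (drop 2 stack))
               (cons f stack)))
           [] expr)))

(rpn-eval [100 200 +]) ;; => 300
(rpn-eval [1 2 3 * +]) ;; => 7
```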


Having written this code, what does it buy me to write it as a macro? Not much! A macro is simply a translation of some source code into another lump of source code. In this case I'm better off writing a simple function, such as (def infix (comp rpn-evaluator shunting-yard)), to get the job done. The main point is that the macro doesn't have access to the values in the expression (unless they are constants). If the code was (infix [a + b]) then the macro cannot evaluate a and b because their values are not known at macro-expansion time.

The one use of the macro that I can see would be for constant folding which I'm sure already occurs as part of the compilation process!

Programming Clojure gives a taxonomy of legitimate uses for macros, and none of them seem appropriate here to me, so I *think* writing it as a function is the right way to do it. I need to read On Lisp again since I think my macro understanding is a little off.



Saturday, 13 June 2009

The Shunting Yard Algorithm

The Shunting Yard Algorithm converts expressions written in infix notation (i.e. with the operator between the operands) to Reverse Polish Notation (RPN). The algorithm was published by Dijkstra in November 1961 and the original paper can be found here.

RPN is often used in stack-based languages, such as Joy. In this respect, Clojure is to prefix notation as Joy is to stack-based notation.

The algorithm is simple and the description on Wikipedia is very complete, but it was still a great learning exercise for lazy sequences. The code implements a very simple expression parser over a sequence of elements. Currently, nested expressions aren't supported directly; you have to use \( and \) as explicit parenthesis tokens.


;;; Shunting Yard Algorithm
(ns uk.co.fatvat.algorithms.shuntingyard
  (:use clojure.contrib.test-is))

(defstruct operator :name :precedence :associativity)

(def operators
  #{(struct operator + 1 'left)
    (struct operator - 1 'left)
    (struct operator * 2 'left)
    (struct operator / 2 'left)
    (struct operator '^ 3 'right)})

(defn lookup-operator
  [symb]
  (first (filter (fn [x] (= (:name x) symb)) operators)))

(defn operator?
  [op]
  (not (nil? (lookup-operator op))))

(defn op-compare
  [op1 op2]
  (let [o1 (lookup-operator op1)
        o2 (lookup-operator op2)]
    (or
     (and (= 'left (:associativity o1)) (<= (:precedence o1) (:precedence o2)))
     (and (= 'right (:associativity o1)) (< (:precedence o1) (:precedence o2))))))

(defn- open-bracket? [op]
  (= op \())

(defn- close-bracket? [op]
  (= op \)))

(defn shunting-yard
  ([expr]
     (shunting-yard expr []))
  ([expr stack]
     (if (empty? expr)
       (if-not (some (partial = \() stack)
         stack
         (assert false)) ;; unbalanced parentheses
       (let [token (first expr)
             remainder (rest expr)]
         (cond
           (number? token) (lazy-seq
                            (cons token (shunting-yard remainder stack)))

           (operator? token) (if (operator? (first stack))
                               (lazy-seq
                                (concat (take-while (partial op-compare token) stack)
                                        (shunting-yard remainder
                                                       (cons token
                                                             (drop-while (partial op-compare token) stack)))))
                               (shunting-yard remainder (cons token stack)))

           (open-bracket? token) (shunting-yard remainder (cons token stack))

           (close-bracket? token) (let [ops (take-while (comp not open-bracket?) stack)
                                        ret (drop-while (comp not open-bracket?) stack)]
                                    (assert (= (first ret) \())
                                    (lazy-seq
                                     (concat ops (shunting-yard remainder (rest ret)))))

           :else (assert false))))))

(defn pretty-print
  [expr]
  (map (fn [x]
         (cond
           (= x +) '+
           (= x -) '-
           (= x /) '/
           (= x *) '*
           :else x)) expr))

(deftest test-shuntingyard
  (is (= (pretty-print (shunting-yard [100 + 200])) '(100 200 +)))
  (is (= (pretty-print (shunting-yard [100 * 200])) '(100 200 *)))
  (is (= (pretty-print (shunting-yard [100 / 200])) '(100 200 /)))
  (is (= (pretty-print (shunting-yard [100 - 200])) '(100 200 -)))
  (is (= (pretty-print (shunting-yard [4 + 5 + \( 6 * 7 \)])) '(4 5 + 6 7 * +)))
  (is (= (pretty-print (shunting-yard [\( \( \( 6 * 7 \) \) \)])) '(6 7 *)))
  (is (= (pretty-print (shunting-yard [3 + 4 * 5])) '(3 4 5 * +))))
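The pop decision driven by op-compare boils down to a precedence comparison. As a standalone sketch, with a plain precedence map standing in for the operator structs:

```clojure
;; Illustrative precedence table (symbols instead of the real fns)
(def precedence {'+ 1, '- 1, '* 2, '/ 2})

;; A left-associative incoming operator pops the stack while its
;; precedence is less than or equal to the operator on top.
(defn pop-top?
  [incoming top]
  (<= (precedence incoming) (precedence top)))

(pop-top? '+ '*) ;; => true  (finish 4 * 5 before handling the +)
(pop-top? '* '+) ;; => false (* binds tighter, so push it)
```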



Most of the entries on my blog could be considered Coding Dojo type tasks. I'm basically after problems I can solve in a few hours that I can learn something from!



Wednesday, 10 June 2009

Generating Images using Image Magick

In order to play with neural networks for OCR, I needed to create a series of test images.

ImageMagick is a set of command line tools that provide image manipulation functions. It's pretty simple to write a quick script to generate a bucket load of images.

The script below generates a load of images of the digits 0 to 9 using every font available within ImageMagick.


#!/usr/bin/perl
use strict;
use warnings;

# Gives me a list of fonts
my @fonts = split('\n', qx{identify -list font | grep Font: | cut -d: -f2});
my @numbers = (0..9);

foreach my $font (@fonts) {
    chomp($font);
    foreach my $number (@numbers) {
        chomp($number);
        my $filename = sprintf("%s_%s.gif", $font, $number);
        my $commandLine = "convert -fill black -pointsize 8 -size 8x8 -font $font label:$number $filename";
        qx{$commandLine};
    }
}


5 minutes later, and I've got 340 test images. Neat.

Generated images showing big list of numbers



Sunday, 7 June 2009

Learning some lessons from Programming Clojure

My copy of Programming Clojure by Stuart Halloway arrived over the weekend. It's a thoroughly enjoyable read so far and I've definitely learned a few things (for example, I hadn't realized that keywords can act as functions on maps, nor did I realize that a set is a function for determining membership). D'oh!


;; A keyword is a function
user> (:houses {:biscuits 4 :houses 7})
7

;; A map is a function
user> ({:biscuits 4 :houses 7} :houses)
7

;; A set is a function testing membership
user> (#{4 5 6} 7)
nil

;; On success it returns the element
user> (#{4 5 6} 4)
4


The section on Functional Programming was very interesting, especially with respect to laziness. Stuart presents "Six Rules of Clojure FP", which have a heavy emphasis on laziness. Item 5 is:

Know the sequence library. You can often write code without using recur or the lazy APIs at all.


Looking back at my neural network code made me realize I'd reinvented several wheels, and that the code could be significantly tidied by just using the standard libraries. Revisiting the sequence documentation reminded me of the various tools in the armoury, particularly iterate and reduce.
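As a quick reminder of those two:

```clojure
;; iterate produces a lazy, infinite sequence of repeated applications
(take 5 (iterate inc 0)) ;; => (0 1 2 3 4)

;; reduce folds a sequence down to a single value
(reduce + [1 2 3 4]) ;; => 10
```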

As a reminder, the two lumps of code in question are run-patterns and train-network.



(defn run-patterns
  [network samples expecteds]
  (if (empty? samples)
    network
    (let [expected (first expecteds)
          sample (first samples)
          [ah ao] (run-network sample network)
          updated-network (back-propagate expected sample [ah ao] network)]
      (recur updated-network (rest samples) (rest expecteds)))))

(defn train-network
  ([samples expected iterations]
     (train-network (create-network (count (first samples))
                                    num-hidden (count (first expected)))
                    samples expected iterations))
  ([network samples expected iterations]
     (if (zero? iterations)
       network
       (recur (run-patterns network samples expected) samples expected (dec iterations)))))


Starting with run-patterns: this takes a sequence of values (the samples and expected values) and reduces them to a single output (the network) using the supplied function. It can therefore be refactored into a simple reduce operation (sequence in, value out), which simplifies the code significantly.


(defn run-patterns
  [network samples expecteds]
  (reduce
   (fn [n expectations]
     (let [[sample expected] expectations
           [ah ao] (run-network sample n)]
       (back-propagate expected sample [ah ao] n)))
   network ;; initial value
   (map list samples expecteds)))
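The (map list samples expecteds) call pairs each sample with its expectation, so the reducing function sees one [sample expected] tuple at a time:

```clojure
(map list [:s1 :s2 :s3] [:e1 :e2 :e3])
;; => ((:s1 :e1) (:s2 :e2) (:s3 :e3))
```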


Next is train-network. At the moment the implementation is burdened by a hand-rolled looping construct for a specified number of iterations. What if, instead of calculating a fixed number, we just calculated an infinite sequence of trained neural networks (lazily, obviously!) and let the caller pick the one they'd like?

The new version of train-network drops the number of iterations and returns an infinite sequence of neural networks, each trained one more time than the last. The Clojure function iterate does the job here:

Returns a lazy sequence of x, (f x), (f (f x)) etc. f must be free of side-effects


Once we refactor to use iterate we get the following. The definition of train-network is considerably simpler, and the only change to the example function is specifying the number of iterations to use.



(defn train-network
  ([samples expected]
     (train-network (create-network (count (first samples))
                                    num-hidden (count (first expected)))
                    samples expected))
  ([network samples expected]
     (iterate (fn [n] (run-patterns n samples expected)) network)))

(defn example []
  (let [x (nth (apply train-network xor-test-data) 100)]
    (println (first (run-network [0 0] x)) "-->" 0)
    (println (first (run-network [0 1] x)) "-->" 1)
    (println (first (run-network [1 0] x)) "-->" 1)
    (println (first (run-network [1 1] x)) "-->" 0)))



I've still got to read the remaining chapters of the book, but I'm looking forward to learning more. Definitely recommended.



Wednesday, 3 June 2009

Back propagation algorithm in Clojure

In my previous post I looked at the most basic type of neural network, the single layer perceptron. Next, we'll look at the multi-layer perceptron network. This is a more powerful class of neural network than the single layer perceptron because it can handle data that is not linearly separable (such as the XOR test case which failed last time).

The main difference in a multi-layer perceptron is that each neuron's output is passed through an activation function. This makes use of the kernel trick, which maps a non-linear problem into a higher-dimensional problem that is linearly separable (see also support vector machines). The mathematics behind this is better explained here [PDF].

A typical activation function is tanh, shown below between -10 and 10 and generated using Wolfram Alpha
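One handy property of tanh is that its derivative can be written in terms of its own output: d/dx tanh(x) = 1 - tanh(x)^2. That is why activation-function-derivation below takes the already-activated value y rather than x:

```clojure
;; The derivative of tanh evaluated at x, computed from the output y
(let [x 0.5
      y (Math/tanh x)]
  (- 1.0 (* y y))) ;; => ~0.786
```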

Graph of tanh(x) from -10 to 10 from Wolfram Alpha

So how do we model this network in a functional programming language like Clojure? A typical solution using an imperative/OO language (see here for a good Python implementation) would use arrays representing the weights and modify things in situ, but that's not very functional.

We start by defining some constants (including the activation-function and its derivation). bp-nn represents the network itself.


(def activation-function (fn [x] (Math/tanh x)))
(def activation-function-derivation (fn [y] (- 1.0 (* y y))))
(def num-hidden 2)

(def learning-rate 0.5)
(def momentum 0.1)

(defstruct bp-nn :weight-input :weight-output :change-input :change-output)


Next we create some simple initialization functions to create an initial neural network, together with some helper functions for iterating over matrices (which we'll model as lists of lists). Usually you'd use random weights to initialize things, but allowing fixed values makes testing possible.



(defn make-matrix
  "Create a matrix (list of lists)"
  [width height]
  (repeat width (repeat height 0)))

(defn matrix-map
  "Apply a function to every element in a matrix"
  [m func]
  (map (fn [x] (map func x)) m))

(defn rand-range
  "Return a random real number within the given range"
  [l h]
  (+ (rand (- h l)) l))

(defn create-network
  "Create a network with the given number of input, hidden and output nodes"
  ([input hidden output]
     (create-network input hidden output true))
  ([input hidden output use-random-weights]
     (let [i (inc input) ;; bias term
           w-func (if use-random-weights (fn [_] (rand-range -0.2 0.2)) (fn [_] 0.2))
           o-func (if use-random-weights (fn [_] (rand-range -2.0 2.0)) (fn [_] 2.0))]
       (struct bp-nn
               (matrix-map (make-matrix i hidden) w-func)
               (matrix-map (make-matrix hidden output) o-func)
               (make-matrix i hidden)
               (make-matrix hidden output)))))



The first thing we should do is run a pattern through the network and see what comes out the other end. We're not just interested in the final output; we also want to know what happened at the hidden layer, so we return a vector of ao (activation output) and ah (activation hidden).

comp is functional composition: ((comp x y) 5) is the equivalent of (x (y 5)), so in the example below we add the numbers together and then apply the activation function. The nested map calls let us iterate over the elements in a matrix.
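A tiny example of comp at the REPL:

```clojure
((comp inc *) 2 3) ;; => 7, i.e. (inc (* 2 3))

;; Sum first, then apply the activation function
((comp #(Math/tanh %) +) 0.5 0.5) ;; same as (Math/tanh 1.0)
```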



(defn apply-activation-function
  "Calculate the activations for a layer"
  [w i]
  (apply map (comp activation-function +)
         (map (fn [col p] (map (partial * p) col)) w i)))

(defn run-network
  "Run the network with the given pattern and return the output and the hidden values"
  [pattern network]
  (assert (= (count pattern) (dec (count (network :weight-input)))))
  (let [p (cons 1 pattern)] ;; ensure bias term added
    (let [wi (network :weight-input)
          wo (network :weight-output)
          ah (apply-activation-function wi p)
          ao (apply-activation-function wo ah)]
      [ao ah])))


In order to perform backwards-propagation we need a couple of helper functions that work on matrices and vectors to calculate changes in the output that will be used to update the weights.

These helper functions are pretty sick (no one wants to read (map (partial reduce +) ...)). A better design would probably be to introduce a proper matrix abstraction. There's the beginnings of one here, but it's a bit too "Java" syntax heavy for my liking.



(defn calculate-hidden-deltas
  "Calculate the error terms for the hidden layer"
  [wo ah od]
  (let [errors (map (partial reduce +) (map (fn [x] (map * x od)) wo))] ;; Sick.
    (map (fn [h e] (* e (activation-function-derivation h))) ah errors)))

(defn update-weights
  [w deltas co ah]
  (let [x (map
           (fn [wcol ccol h]
             (map (fn [wrow crow od]
                    (let [change (* od h)]
                      [(+ wrow (* learning-rate change) (* momentum crow)) change]))
                  wcol ccol deltas))
           w co ah)]
    [(matrix-map x first) (matrix-map x second)]))


I did warn you...

The next thing we need to implement is the back-propagation algorithm itself. This takes in more parameters than it needs to so that it can be tested standalone (it could be implemented as a local function using a closure to capture some of them). It returns an updated version of the neural network.



(defn back-propagate
  "Back propagate the results to adjust the weights"
  [target p results network]
  (assert (= (count target) (count (first (network :weight-output)))))
  (let [pattern (cons 1 p) ;; ensure bias term added
        ao (first results)
        ah (second results)
        error (map - target ao)
        wi (network :weight-input)
        wo (network :weight-output)
        ci (network :change-input)
        co (network :change-output)
        output-deltas (map (fn [o e] (* e (activation-function-derivation o))) ao error)
        hidden-deltas (calculate-hidden-deltas wo ah output-deltas)
        updated-output-weights (update-weights wo output-deltas co ah)
        updated-input-weights (update-weights wi hidden-deltas ci pattern)]
    (struct bp-nn
            (first updated-input-weights)
            (first updated-output-weights)
            (second updated-input-weights)
            (second updated-output-weights))))


All that remains is to train the network. We need a set of samples with known results, together with a number of iterations to try. I've split these into run-patterns, which runs through the patterns once, and train-network, which creates the initial network and runs it through the patterns the specified number of times.



(defn run-patterns
  [network samples expecteds]
  (if (empty? samples)
    network
    (let [expected (first expecteds)
          sample (first samples)
          [ah ao] (run-network sample network)
          updated-network (back-propagate expected sample [ah ao] network)]
      (recur updated-network (rest samples) (rest expecteds)))))

(defn train-network
  ([samples expected iterations]
     (train-network (create-network (count (first samples))
                                    num-hidden (count (first expected)))
                    samples expected iterations))
  ([network samples expected iterations]
     (if (zero? iterations)
       network
       (recur (run-patterns network samples expected) samples expected (dec iterations)))))


So how well does it work in practice? Pretty damn good. It correctly converges after a few iterations (this time 100) and consistently gets the XOR test data set correct.


(defn example []
  (let [x (apply train-network (conj xor-test-data 100))]
    (println (first (run-network [0 0] x)) "-->" 0)
    (println (first (run-network [0 1] x)) "-->" 1)
    (println (first (run-network [1 0] x)) "-->" 1)
    (println (first (run-network [1 1] x)) "-->" 0)))

;;; Example run through the REPL
uk.co.fatvat.mlperceptron> (example)
(0.10717792758953508) --> 0
(0.993502708495955) --> 1
(0.9930515903590437) --> 1
(0.00883530181467182) --> 0
nil


The total code weighs in at ~100 or so lines of purely functional code, which made it a doddle to test. Coding this was an exercise in map masochism; never have I had to construct so many convoluted map expressions. At least it works in the end. It feels like it would be considerably simpler to implement this using mutability, so I might try that for comparison purposes. Any suggestions for improving the code are much appreciated! Full code is on GitHub.

(update - read some of Programming Clojure and applied some of the lessons learnt - see here.)

Neural networks are pretty interesting and in the next post I'll look at how to implement basic OCR using them.


