Not finished yet. Give me time .......
Since I have some background in real-time control systems, it was perhaps inevitable that I should think about ways of using machines for rehabilitation. The first step beyond thinking happened around the end of 1986, when a project emerged from talks with representatives of the Auckland Crippled Children's Society ( CCS ). That proposal concerned an application of a conventional robot, but other sorts of machine can also be useful for rehabilitation purposes; the most obvious example is the wheelchair, but other special-purpose machines are of value too.
( You're really supposed to call the CCS just "CCS", partly for reasons of political correctness and partly because the children had a way of growing up, and no one wanted to abandon them - but as nobody knows what CCS means, it's almost invariably written as "CCS ( formerly the Crippled Children's Society )", which rather spoils the point. )
The CCS wanted a machine with which an adolescent boy could feed himself, which would be valuable not only for the feeding itself but also for the greater independence it would give him, and so for his self-confidence. A simple robot was a possibility, and I undertook to give it some thought. The result was that I offered an MSc thesis project on the development of the software, and Shane Clerk took it up. At the time, the department did not possess a robot, but the CCS undertook to find one. The project unfortunately foundered; that wasn't Shane's fault - though the CCS found a robot, their negotiations to acquire it at a price they could afford fell through, so it never turned up. ( Shane did something else instead. ) The effort wasn't entirely wasted; it left me with some understanding of the issues involved, which turned up again later.
It also left me with an appreciation of the very high cost of even a simple robot, and in consequence I began to wonder whether it was realistic to work on robots as such. That led me to consider other machines which might be useful, and I was therefore receptive to a suggestion I received while on leave in 1988. It came from a woman who customarily used a wheelchair. On being asked whether any automatic device would be of assistance to her, she immediately suggested a vacuum cleaner which she could control from her wheelchair without having to push it. I gave some thought to such a system while on leave, but didn't have time to do any more; eventually, Mike Diack thought it would make an interesting topic for another MSc thesis. This idea foundered too, because just as Mike was about to start work a Korean firm produced a commercial model of just such a device, and we didn't think it worth trying to compete. ( Mike did something else instead. )
Meanwhile, I'd been thinking further along the lines of machines which were not robots. It was clear that the major problem with machines which were not robots was that they were not robots : a special-purpose machine is good for its special purpose, but, unless that purpose is very important or very frequently needed, building a special machine is likely to be too expensive. While robots were expensive, there was at least a possibility that they could be made useful enough for a wide variety of tasks that the cost could be justified. I wondered whether the task of a robot could be specialised enough to permit a cheaper sort of machine which could nevertheless accomplish a large proportion of the jobs for which one might want a robot. The result was the Helping Hand ( Roy's title, my documentation ), a design which exploits the fact that the great majority of objects you might wish to pick up with a robot are already standing on some reasonably firm surface, and - for a rehabilitation system - the intention is very commonly to move them to another. This idea was taken up by Roy Davies as another MSc thesis topic. Roy worked mainly on the design of the control system for such a machine, and successfully constructed a model. Unfortunately, it was a model of the control system, not of the Helping Hand itself; we never completed the construction of the hand, though Roy made a start.
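( As a rough illustration of where the saving might come from - this is my own sketch, not Roy's control system - the surface-to-surface constraint can be read as reducing the motion vocabulary to three primitives : lift, horizontal travel, lower. All the names in the little Python fragment below are invented for the purpose. )

```python
# Hypothetical sketch only: one reading of the Helping Hand's surface-to-surface
# constraint. All names are invented for illustration; this is not Roy's design.

from dataclasses import dataclass


@dataclass
class Point:
    x: float   # horizontal position, metres
    y: float   # horizontal position, metres
    z: float   # height of the supporting surface, metres


class SurfaceToSurfaceMover:
    """Moves an object between two firm surfaces using only three
    primitive motions: lift, horizontal travel, lower."""

    def __init__(self, clearance: float = 0.10):
        self.clearance = clearance   # safe height above the higher surface

    def plan(self, source: Point, target: Point) -> list[tuple[str, float]]:
        # The carrying height only needs to clear the higher of the two surfaces.
        travel_height = max(source.z, target.z) + self.clearance
        horizontal = ((target.x - source.x) ** 2 +
                      (target.y - source.y) ** 2) ** 0.5
        return [
            ("lift",   travel_height - source.z),   # straight up off the surface
            ("travel", horizontal),                 # straight across
            ("lower",  travel_height - target.z),   # straight down onto the surface
        ]


# Example: move a cup from a table (0.72 m) to a wheelchair tray (0.78 m).
if __name__ == "__main__":
    mover = SurfaceToSurfaceMover()
    for step, distance in mover.plan(Point(0.0, 0.0, 0.72), Point(0.6, 0.3, 0.78)):
        print(f"{step}: {distance:.2f} m")
```

A machine built around only these three motions needs far fewer coordinated degrees of freedom than a general-purpose arm, which is the point of the exercise.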
I would very much like to take this line of work further, but experience, as described above, suggests that working with mechanical systems is not a thing we do well in our department. We don't have the technical backup to do the job properly, and we don't have the money to spend on experimental equipment. For that reason, I've concentrated more on communication topics, which can be pursued almost entirely inside a computer. If you'd like to follow this topic up, here or elsewhere, and have useful practical ideas of how it could be done ( and, for preference, the required skills and access to the required machinery ), do get in touch.
WORKING NOTES :
This is a fairly direct descendant of the feeder idea. While other people have successfully built feeders, no one ( so far as I know ) has successfully solved the safety problem : how do you know when to stop moving the food towards the person ? Most designs have brought the food to a safe position some centimetres in front of the person's face, and required him to lean forward to take the food from the carrying implement. Not everyone can do that.
The solution ( if there is one ) lies in finding ways to give the robot enough precise information about where the person is for it to plan a course which is guaranteed safe. ( I take it as axiomatic that any such system would be covered with sensors which can be used to stop the machine if anything unexpected is detected. ) The only plausible way to gain this information without intolerably cluttering the environment with feelers or antennae is vision, and I am therefore interested in pushing intelligent computer vision to the lengths needed to determine very precisely the geometrical details of what it can "see".
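( A minimal sketch of that axiom in code - my own illustration; the sensor and motion interfaces are assumptions, not part of any existing feeder design : )

```python
# Minimal sketch of guarded motion: advance along a pre-planned path, but stop
# the moment any sensor reports something the plan did not predict.
# All interfaces here are hypothetical, introduced only for illustration.

from typing import Callable, Sequence


def guarded_move(path: Sequence[tuple[float, float, float]],
                 move_to: Callable[[tuple[float, float, float]], None],
                 sensors_clear: Callable[[], bool]) -> bool:
    """Follow `path` one waypoint at a time.

    `move_to`       commands the machine to the next waypoint (assumed blocking).
    `sensors_clear` returns False as soon as any proximity or contact sensor
                    detects something the plan did not expect.

    Returns True if the whole path was completed, False if the move was
    abandoned because a sensor tripped.
    """
    for waypoint in path:
        if not sensors_clear():      # check before every small step
            return False             # stop at once; never push on blindly
        move_to(waypoint)
    return sensors_clear()           # final check at the end position
```

The sensors are only a last line of defence, of course; the whole point is that the vision system should plan a path which never causes them to trip in the first place.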
Some students have worked on this topic, in one way or another. None of them has really addressed the central problem directly ( Tim Stucke was perhaps the closest ), because none of them was primarily interested in the rehabilitation application, but all have contributed something to the general area of intelligent vision. They are, in order of enrolment ( I think ), Tim Stucke, Mark Scaletti, Paul Qualtrough, Tim Natusch, and Igor Dekovich.
I now believe that the best chance of making progress is to use the distortion network technique, as explored by Mark Scaletti. This is an attempt to use general knowledge of faces ( for example ) to find the parts of a face in a computer image. The principle is to determine how the image must be distorted to give the best match between it and a standard pattern. If this can be done in a parallel machine with a strong resemblance to a neural network, there is a possibility of rather fast matching. I don't expect to start on faces; line drawings will do to begin with. Tim Natusch started work on it, but unfortunately had to give up. Nothing else has happened for quite some time.
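( To make the principle concrete, here is a deliberately naive, serial sketch of the matching step - my own toy illustration, not Mark's distortion network : each small block of the standard pattern is allowed to shift by a few pixels, and the field of best shifts describes how the pattern has to be distorted to fit the image. In the parallel version each block's search would be an independent unit, which is where the neural-network resemblance and the hoped-for speed come from. )

```python
# Toy serial sketch of distortion matching: let each block of a standard
# pattern shift independently within a small window, and record the shift
# that gives the best local match. My own illustration only; not the
# distortion network itself.

import numpy as np


def block_match(image: np.ndarray, pattern: np.ndarray,
                block: int = 8, radius: int = 3) -> dict:
    """Return, for each block of `pattern`, the (dy, dx) shift within
    `radius` pixels that minimises the squared difference against `image`.
    The resulting field of shifts is the 'distortion' that best maps the
    pattern onto the image; small, smooth shifts mean a good match."""
    h, w = pattern.shape
    shifts = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = pattern[by:by + block, bx:bx + block].astype(float)
            best, best_shift = None, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = by + dy, bx + dx
                    if (y < 0 or x < 0 or
                            y + block > image.shape[0] or
                            x + block > image.shape[1]):
                        continue   # this shift would fall off the image
                    cand = image[y:y + block, x:x + block].astype(float)
                    err = float(np.sum((cand - ref) ** 2))
                    if best is None or err < best:
                        best, best_shift = err, (dy, dx)
            shifts[(by, bx)] = best_shift
    return shifts
```

For line drawings the images would simply be binary, and the same block-by-block search applies.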
Alan Creak,
2003 October.