Thursday, September 30, 2010

"Programming massively parallel systems", AKA Steve Furber's 'Brains in Silico'

Prof Steve Furber gave a thought-provoking lecture on 'Brains in Silico' yesterday as part of the SICSA 'Seven Keys to the Digital Future' series.

Problem definition: Chip designers are hitting a wall in terms of how much processing power can be economically squeezed onto a single microprocessor. In response, multi-core systems are being promoted: multiple microprocessors whose combined raw processing power is equivalent to a single more powerful one that would be uneconomical to design, build and mass-produce. The problem with this approach is that we are not very good at engineering software to use multiple microprocessors because, ultimately, humans are not good at thinking about things that are highly parallelized and synchronized.

Solution: Steve Furber advocates that, to tackle the challenge of massively parallel computation (e.g. 100,000 CPUs), we should explore abandoning synchronization and shifting to asynchronous computation.

Rationale: Steve Furber is inspired by biological systems (e.g. neurons and brains). They consume very little power and can be more computationally powerful than current supercomputers (depending on how you measure computational power). He claims that, as an approximation, a human brain is the equivalent of an exascale supercomputer yet uses only around 40 watts of power; today's supercomputers use ~1.32 MW and are still roughly ten years away from exascale.
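A quick back-of-the-envelope check puts these figures in perspective. The 40 W and 1.32 MW numbers are as reported from the lecture; the assumption that a ~2010 supercomputer delivers roughly petascale (~1e15 ops/s) performance is mine, added purely for illustration:

```python
# Rough comparison of the power figures quoted above. The 40 W and
# 1.32 MW values are from the lecture; the petascale (~1e15 ops/s)
# rating for a ~2010 supercomputer is an assumption for illustration.

brain_power_w = 40.0             # approximate power draw of a human brain
supercomputer_power_w = 1.32e6   # ~1.32 MW, as quoted in the lecture

# How many brains could run on one supercomputer's power budget?
power_ratio = supercomputer_power_w / brain_power_w
print(f"{power_ratio:,.0f} brains per supercomputer power budget")  # 33,000

# If the brain is taken to be exascale (~1e18 ops/s) and the machine
# petascale (~1e15 ops/s), the gap in operations per joule is enormous.
brain_ops_per_joule = 1e18 / brain_power_w
machine_ops_per_joule = 1e15 / supercomputer_power_w
print(f"{brain_ops_per_joule / machine_ops_per_joule:,.0f}x ops per joule")
```

On these (very rough) assumptions the brain comes out around 33 million times more energy-efficient, which is the force of Furber's comparison.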

Challenging Implications:
  • By switching to massively asynchronous computation we lose determinism. Massively asynchronous computers cannot guarantee exact solutions to a computation, nor will they give exactly the same answer each time. In other words, the result of a computation will have a (hopefully very high) likelihood of being close enough to the exact answer to be useful for your purposes. (This is not as bad as it may initially sound: today's floating-point calculations are approximations, and people live with them in most situations.)
  • By switching to massively asynchronous computation, the way we 'program' computers will change. We need to shift from seeing programs as recipes (i.e. sets of instructions to be executed) to seeing programs as descriptions of neural networks (i.e. descriptions of the initial relationships between simple 'components' that form an adaptive, evolving network). This means we need to significantly develop our understanding of neural networks so that we can architect the equivalent of generic functions in programming languages, or integrated circuits in electronics, and connect them together to engineer systems that perform useful work. (See neural systems engineer.)
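The floating-point parenthetical in the first bullet is easy to demonstrate in any mainstream language; this small Python sketch (mine, not from the lecture) shows that conventional 'deterministic' arithmetic is already approximate and order-sensitive:

```python
# Floating-point arithmetic is already approximate: the nearest binary
# representations of 0.1 and 0.2 do not sum to the representation of 0.3.
print(0.1 + 0.2 == 0.3)   # False

# It is also order-sensitive: summing the same values in a different
# order gives a different answer, because 1e16 + 1.0 rounds away the 1.0.
print(sum([1e16, -1e16, 1.0]))  # 1.0
print(sum([1e16, 1.0, -1e16]))  # 0.0
```

We routinely accept these approximations, which is the point: 'close enough, reliably' is already the norm in numerical computing.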
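To make the 'programs as network descriptions' idea in the second bullet concrete, here is a toy sketch. It is my own illustration, not SpiNNaker or PYGMALION code, and the threshold, leak, weight and stimulus values are arbitrary: the 'program' is nothing more than a list of neurons and weighted connections, and behaviour emerges from running simple leaky integrate-and-fire units over that description.

```python
# Toy 'program as network description' (not SpiNNaker/PYGMALION code).
# The "program" is just a wiring diagram plus weights; behaviour emerges
# from running leaky integrate-and-fire neurons over that description.

THRESHOLD = 1.0   # membrane potential at which a neuron fires (arbitrary)
LEAK = 0.9        # per-step decay of membrane potential (arbitrary)

# The 'program': neuron ids and weighted connections (pre -> post, weight).
neurons = [0, 1, 2]
connections = [(0, 1, 0.8), (0, 2, 0.8), (1, 2, 0.8)]

def step(potentials, spikes):
    """Advance the network one time step; return (potentials, new_spikes)."""
    new_potentials = {n: potentials[n] * LEAK for n in neurons}
    for pre, post, weight in connections:
        if pre in spikes:                  # spikes from the previous step
            new_potentials[post] += weight
    new_spikes = {n for n in neurons if new_potentials[n] >= THRESHOLD}
    for n in new_spikes:
        new_potentials[n] = 0.0            # reset after firing
    return new_potentials, new_spikes

# Drive neuron 0 with external input each step and watch activity propagate.
potentials = {n: 0.0 for n in neurons}
spikes = set()
history = []
for t in range(8):
    potentials[0] += 0.5                   # external stimulus (arbitrary)
    potentials, spikes = step(potentials, spikes)
    history.append(sorted(spikes))
print(history)  # neuron 0 fires at t=2 and t=5; 1 and 2 fire together at t=6
```

Note that the 'program' never says when anything fires: neuron 0 ends up firing periodically under the external drive, and neurons 1 and 2 fire only once enough weighted spikes have accumulated. All of that emerges from the description.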
Is highly parallel asynchronous computation realistic in the short term?
  • In the short term, Steve is optimistic about its potential for niche applications such as visual processing. He has developed a bespoke software/hardware platform called SpiNNaker (scalable to 100,000 processors, i.e. on the order of 100 million neurons) that allows neural networks to be described in a 'programming language', PYGMALION.
  • In the short term, IBM has developed Blue Gene and sells the technology commercially as the 'Blue Gene Solution' for applications such as "hydrodynamics, quantum chemistry, molecular dynamics, climate modeling and financial modeling".
  • In the long term, highly parallel asynchronous computation may well be suitable for more general-purpose computing, as long as determinism is not required.