I respectfully disagree. The computations behind day-to-day tasks are trivial for a few PIIIs. Take, for example, connected-speech recognition. I once saw a computer demonstration where a person spoke a few sentences into a microphone, and then the computer printed a transcript with 100% correct vocabulary, syntax, grammar, spelling, capitalization, and punctuation. There was a 15-minute delay between the speaking and the printing, but this was in the late **60's**. It was the context analysis (red vs. read, reed vs. read, etc.) that took so long, and the IBM mainframe had only 1MB of memory to do it in.
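For the curious, the kind of "context analysis" that chewed up those 15 minutes can be sketched in a few lines. This is a toy, of course; the cue lists and window size below are invented for illustration, not anything IBM actually did:

```python
# Toy homophone disambiguation by local context -- a sketch of the kind of
# "context analysis" that ate all that mainframe time. The word lists and
# rules here are invented for illustration, not IBM's actual method.

# Cues that suggest past tense ("read" sounding like "red") vs. present.
PAST_CUES = {"yesterday", "had", "already", "once"}
PRESENT_CUES = {"will", "to", "can", "always"}

def disambiguate_read(words, i):
    """Guess whether words[i] == 'read' is past or present tense
    by scanning a small window of neighboring words."""
    window = words[max(0, i - 3):i] + words[i + 1:i + 4]
    past = sum(w in PAST_CUES for w in window)
    present = sum(w in PRESENT_CUES for w in window)
    return "past" if past >= present else "present"

words = "yesterday i read the whole report".split()
print(disambiguate_read(words, 2))   # -> past
```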
Memory has always been the driver. There is a huge gap between what *is* being done with neural net processing and what *can* be done. The fabrication technology necessary for a human-brain-equivalent "chipset" has been around for at least 10 years. That leaves memory and I/O bandwidth as the remaining hurdles. Next-gen serial I/O techniques will be more than enough to handle the bandwidth, so all that's left is really dense, really fast, really huge memory.
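To put numbers on "really huge," here's the back-of-envelope arithmetic. The neuron and synapse counts are rough textbook figures, and one byte per synaptic weight is my own assumption; the point is the order of magnitude, not the exact number:

```python
# Back-of-envelope memory arithmetic for a brain-equivalent net.
neurons = 1e11            # ~100 billion neurons (rough textbook figure)
synapses_per_neuron = 1e3 # ~1,000 connections each (conservative)
bytes_per_weight = 1      # one byte per synaptic strength (my assumption)

total_bytes = neurons * synapses_per_neuron * bytes_per_weight
print(f"{total_bytes / 1e12:.0f} TB of weight storage")  # -> 100 TB
```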
The areas of the brain which handle each of the five senses are physically separate and function semi-independently, so a distributed computing model works very well as an analog. Actually, a hybrid computer with digital processing for logic and autonomic functions, and true-analog processing for judgment and some sensory stuff, would be terrific, if we poured a few hundred billion dollars into shrinking analog processing the way we have shrunk digital over the last 30 years.
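As a sketch of what I mean by semi-independent channels feeding a central integrator (the channel names and fake workloads here are purely illustrative):

```python
# Each sensory channel runs on its own, and a central loop merges results.
import queue, threading, time

def sense_channel(name, bus):
    """Each channel does its own local processing and only posts
    digested results to the shared bus."""
    for i in range(3):
        time.sleep(0.01)                 # stand-in for local processing
        bus.put((name, f"event-{i}"))

bus = queue.Queue()
channels = [threading.Thread(target=sense_channel, args=(n, bus))
            for n in ("vision", "hearing", "touch")]
for t in channels:
    t.start()
for t in channels:
    t.join()

while not bus.empty():                   # central "integrator" drains the bus
    print(bus.get())
```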
Back in another life, I spent 6 years as the resident EE for a group of Experimental Psychologists. The amount of computing "horsepower" (and hardware) it takes to emulate a specific function or action, such as tracking a moving target, is surprisingly small. In the same way the 500,000 words of the English language break down into combinations of only 65 elements, the number of basic functions the brain performs to get through the day is under 1000. Beyond that, it's just lots of processing channels and lots of speed.
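To show how little arithmetic "track a moving target" actually takes: a classic alpha-beta tracker is two multiplies and a few adds per update. The gains below are hypothetical tuning values, not from any real rig:

```python
def alpha_beta_track(measurements, dt=1.0, alpha=0.85, beta=0.005):
    """Classic alpha-beta tracker: predict, measure, correct."""
    x, v = measurements[0], 0.0          # initial position and velocity
    estimates = []
    for z in measurements[1:]:
        x += v * dt                      # predict where the target went
        residual = z - x                 # how far off the prediction was
        x += alpha * residual            # correct position estimate
        v += beta * residual / dt        # correct velocity estimate
        estimates.append(x)
    return estimates

# A target drifting right at ~1 unit/step, with noisy readings:
print(alpha_beta_track([0.0, 1.2, 1.9, 3.1, 4.0, 5.2]))
```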