See these streaming RealMedia files of the videos:
- http://webcast.berkeley.edu/stream.php? ... stid=14424
- http://webcast.berkeley.edu/stream.php? ... stid=14425
The idea is that humans have different input channels for their observations of the world. If you can combine these channels, you have a working user interface. The videos also mention how humans learn in different stages of child development, and that adults still have these abilities, which can be put to use.
See how the mouse, mouse pointer and display work together:
Code: Select all
hand moving          eyes seeing             brain combining
mouse device   -->   mouse pointer move -->  mouse pointer movements
     ^                                              |
     +----------------------------------------------+

motor sense          visual sense            abstraction by brain
Your hands give motor sensory feedback to the brain, so your hands determine where the mouse pointer is on the screen. Your eyes see the mouse pointer and give visual sensory feedback to the brain about how the mouse pointer moves, relative to other objects on the screen. The brain combines both input channels and can move the mouse pointer from one object on the screen to another.
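The loop described above can be sketched as a toy closed-loop controller. This is only an illustration of the idea (the function, the gain value and the target coordinates are my own assumptions, not something from the videos): the "eyes" report the error between pointer and target, and the "hand" issues a motor correction that covers part of that error, until the pointer lands on the target.

```python
# Illustrative sketch of the hand/eye/brain feedback loop as a
# closed-loop controller. All names and numbers are hypothetical.

def move_pointer_to(target, pointer, gain=0.5, tolerance=1.0, max_steps=100):
    """Move the pointer toward the target using visual feedback.

    target, pointer: (x, y) positions on the screen.
    gain: fraction of the remaining error corrected per motor step.
    Returns the final pointer position and the number of steps taken.
    """
    x, y = pointer
    tx, ty = target
    for step in range(max_steps):
        # visual sense: the eyes report the remaining error
        dx, dy = tx - x, ty - y
        if (dx * dx + dy * dy) ** 0.5 <= tolerance:
            return (x, y), step  # close enough: target acquired
        # motor sense: the hand nudges the mouse a fraction of the error
        x += gain * dx
        y += gain * dy
    return (x, y), max_steps

final, steps = move_pointer_to(target=(100.0, 50.0), pointer=(0.0, 0.0))
print(final, steps)
```

The point of the sketch is that neither channel works alone: without the visual error report the hand would move blindly, and without the motor correction the eyes would only watch a stationary pointer.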
Mouse buttons are simply a way to avoid moving your hands back and forth between the mouse and the keyboard. In that sense, mouse buttons are a simplified version of the keyboard.
Now I'm curious whether it is possible to combine other input channels to create new input devices. I'm sure a lot of research is taking place to give us the input devices of the future, because the keyboard-mouse-display interface is not very useful for creating animation, and we really need a better interface between the human and the computer for more productivity.