Home Page for Stephen R. Deiss

Welcome to Applied Neurodynamics

“The 20-20 MINDSITE”
(Celebrating 10 years !!)


Web page for Stephen Deiss dba Applied Neurodynamics, 2057 Village Park Way, #201, Encinitas CA 92024
760-944-8859 voice, 760-944-8880 fax, deiss@cerfnet.com
Comments to Steve: deiss@cerfnet.com

? Neurodynamics ?
? Applied Neurodynamics ?
Services:
Accomplishments:
Pubs, TRs, and References:
Related Web Sites:
On a Personal Note:
Essays on Mind (under construction):

Neurodynamics
…is a term used here for a level of abstraction in the study of information processing in neural network activity, a perspective intended to bridge from neuroscience to conscious experience and behavior.  Conventional neural network architectures are often simplistic feed-forward or recurrent models in which the timing of events is unimportant to the processing being done.  Dynamics studies causal systems in which timing is a key consideration [1, 19].
Other inspiration came from Hebb’s visionary 1949 notion of reverberating cell assemblies and the many modern reinterpretations now being used to understand perception [18].  Most important is the potential to extend this idea to context-dependent sequential activation of neural ensembles to account for serial order in thought and action [7].  There is mounting evidence that brains do use population codes that are sensitive to temporal relationships on many time scales [14, 16, 30].
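To make the timing point concrete, here is a minimal sketch (in Python, purely for illustration) of a leaky integrate-and-fire neuron; the function, its parameter values, and the example spike trains are assumptions made up for this page, not part of any Applied Neurodynamics design.  The point it shows is that the same number of input spikes can either drive an output spike or leak away unnoticed, depending only on their relative timing.

# Illustrative only: a minimal leaky integrate-and-fire neuron, meant to show
# why spike timing (not just spike count or rate) matters once dynamics are
# taken seriously.  All parameter values here are arbitrary assumptions.

def lif_spike_times(input_times, weight=0.6, tau=20.0, threshold=1.0,
                    dt=0.1, t_max=200.0):
    """Return output spike times of a leaky integrator driven by input spikes."""
    v = 0.0                                  # membrane potential
    pending = sorted(input_times)            # input spike times, earliest first
    out = []
    t = 0.0
    while t < t_max:
        v += (-v / tau) * dt                 # leak: memory of past input decays
        while pending and pending[0] <= t:   # deliver any input spikes due now
            v += weight
            pending.pop(0)
        if v >= threshold:                   # threshold crossing emits a spike
            out.append(round(t, 1))
            v = 0.0                          # reset after the spike
        t += dt
    return out

# Same three input spikes, different timing: a synchronous volley sums before it
# can leak away, while the same spikes spread out in time never reach threshold.
print(lif_spike_times([10.0, 11.0, 12.0]))   # clustered inputs -> an output spike
print(lif_spike_times([10.0, 60.0, 110.0]))  # same count, spread out -> no output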
The decision to take this approach arose from a personal inquiry into the mind-body problem and from excursions into related areas of Psychology, Neuroscience, and Computer Science (see the section called On a Personal Note).
Applied Neurodynamics is a small business that has specialized since 1988 in the design of electronic embodiments of this kind of information processing architecture and of related neurocomputing systems [see Accomplishments].  As the Owner-Founder and Chief Engineer (janitor too) I take great pride in just having lasted this long in a rather research-oriented industry.  I am in for the long haul and mostly interested in the hard problems.
Menu…

Applied Neurodynamics
…is developing platforms that will support the detailed investigation of the behavior of neural assemblies with biologically plausible dynamics.  Such tools will make it possible to explore on a modest scale how the temporal patterns of activity among neurons form codes and sequences that can represent percepts, concepts, and action-oriented decisions.  One recent example is the Silicon Cortex Board [28, 29].
Experience gained is helping to point the way toward scaling such a system architecture to handle difficult problems related to motor behavior, speech, language, vision, audition, and reasoning.  A key problem turns out to be how to engineer systems that can carry communication among neurons and represent how they collectively encode, recode, or chunk information [10, 12-15].  Similar to the address-event representation (AER) and virtual wires, developed at Caltech and U Delaware, respectively, Applied Neurodynamics in 1989 independently developed a communication scheme for neural event messages called the space-time attribute code (STA) [15].  The latter is actually a generalization of the former with some scalability advantages.  Current work is focused on the design of efficient electronic embodiments of the key elements of biological neural codes and of how they are processed.
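To make the event-communication idea concrete, here is a small sketch (in Python, purely for illustration) of address-event style message passing; the Event fields, the connection table, and the route function are assumptions invented for this example, not the published AER format or the STA code.  The point is that once a spike is a small message carrying its source address, an explicit time stamp, and an attribute, fan-out becomes a table lookup rather than a physical wire, which is what makes such schemes attractive for scaling.

# Illustrative sketch of address-event style communication: each spike travels
# as a small message instead of over a dedicated wire.  The explicit time stamp
# and attribute fields gesture at the space-time-attribute idea; the exact field
# layout and table below are assumptions for the example, not the STA spec.

from collections import namedtuple

# source neuron address, explicit time stamp, and a free-form attribute (e.g. strength)
Event = namedtuple("Event", ["source", "time", "attribute"])

# Hypothetical connection table: source neuron -> list of (target, weight).
# In a hard-wired network this fan-out would need physical connections; with
# events it is just a lookup at a router or at the receiving chip.
CONNECTIONS = {
    0: [(10, 0.5), (11, 0.25)],
    1: [(10, -0.3)],
}

def route(events):
    """Expand a stream of source events into per-target deliveries, in time order."""
    deliveries = []
    for ev in sorted(events, key=lambda e: e.time):   # preserve temporal order
        for target, weight in CONNECTIONS.get(ev.source, []):
            deliveries.append((target, ev.time, weight * ev.attribute))
    return deliveries

spikes = [Event(0, 12.5, 1.0), Event(1, 12.7, 1.0), Event(0, 15.0, 1.0)]
for target, t, amount in route(spikes):
    print(f"deliver {amount:+.2f} to neuron {target} at t={t}")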
Looking at it intuitively from the top down, we can ‘see’ that pattern recognition and formation are the common denominator in all these different abilities [7, 8].  The syllogism of Western logic (“All men are mortal…”) is a case study in verbalized sequential pattern recognition at multiple levels of encoding.  (The metaphysics of “patterns” has not yet been fully appreciated in contemporary scientific ontology.)  Perhaps, if the basic mechanisms of neural activity pattern formation are unraveled, then many aspects of perception, cognition, and action can be ‘bound’ in a ‘coherent’ brain theory (for starters).

Menu…

Services
For neurocomputing, neuromorphic engineering, and general electronic design:
System specification, architectural studies, and design reviews
Logic design (high speed, CMOS, ECL, PECL)
Board-level design (PCI, ISA, VME, SBUS, custom)
Schematic entry (ORCAD, ViewLogic, Mentor DA)
FPGA/CPLD design and simulation using a variety of HDLs (including VHDL) and many device families
Printed circuit board placement and routing (PADS)
PWB fabrication management (multilayer quick turn)
Component selection, purchasing, and kitting for assembly (DUNS 62-525-9155)
Assembly management including SMD techniques
Test and production of completed designs
Application and software development
Neural Communications R&D
In summary, we (a one-man gang) can take your design from vague idea to production on a budget.
Menu…

Completed Projects!
The following highlights neural network (NN) designs, prototypes, and production runs done by Applied Neurodynamics:
Designed and prototyped a TMS320C50 DSP system (80 MHz) for multichip & multiboard analog VLSI neural network systems (Silicon Cortex Chips) based upon broadcast channels embedded in a VME chassis.  Included two custom-designed high-speed communication channels and eight Cypress VHDL-based CPLDs.  Also did all board layout, which was double-sided surface mount with fine-pitch SMDs all over.  (Partial funding from Dr. Rodney Douglas at MRC, Oxford, UK and from Dr. Carver Mead at CALTECH, Pasadena) [see Related Web Sites]
Designed and prototyped a microcoded 20 MHz sequencer based upon 4 Xilinx 4005-series FPGAs to drive multiple AT&T analog neural network ALU ICs (ANNA) for imaging/OCR applications.  This architecture was placed on two boards: one was a VME standalone, and the other was a mezzanine card for an MVME197 (MC88000) RISC CPU board.  (Ref. Dr. Charles Stenard at Bell Labs, Holmdel) [23-24]
Designed and produced a few dozen 50 MHz AT&T DSP32C boards for ISA/AT bus neural network character recognition applications.  Personally built a few dozen before volume production started at AT&T/NCR.  (Ref. Ivan Strom at Bell Labs, Holmdel)
Designed and prototyped the ANNA/VME image processing and character recognition neurocomputer board with on-board DSP32C. Also ported this design to the ISA/AT bus and prototyped it.  (Ref. Dr. Bernhard Boser formerly at Bell Labs, Holmdel) [21-22]
Consulted on troubleshooting and prototyped a multiwire VME neurocomputer board using the NET32K image processing and recognition chip with the DSP32C.  (Ref. Dr. Hans Peter Graf at Bell Labs, Holmdel) [17]
Designed and prototyped a VME neurocomputer board based upon a new Stochastic Pulse Train network chip.  This board had 22 neural network ICs on a single-wide VME card.  (Ref. Dr. Stan Tomlinson formerly at Orincon Corp., San Diego) [27]
Designed and produced the EMB Prototyping board that holds up to 8 Intel ETANNs (Electronically Trainable Analog Neural Network).  Over 70 were hand-built.  (Ref. Mark Holler at Intel Corp., Santa Clara) [25]
Designed and prototyped a VME/VSB i860 processor board and parallel processing system.  (Ref. Dr. Cedric Armstrong at Science Applications International Corp., San Diego)
Designed two different stand-alone boards for Neural Semiconductor based upon the DNNA (stochastic nets) architecture.  (Ref. Stan Tomlinson formerly at Neural Semiconductor, Carlsbad) [26]
In addition to the above neural network architectures, Applied Neurodynamics has done other general digital and analog designs, such as:
Design of a high speed serial board for T1/E1 lines for use in a custom backplane environment (1995)…
Another product design with 10Base-T & AUI capability (1995)…
A 16 channel high speed serial SBUS card (1996)…
A Pentium 200 MMX with 512 Meg & 512K Cache, SCSI-2, SVGA and the TRITON II chip set according to the PICMG standard (1996-7)…
A PCI target card for DMA data acquisition in a stereo CT scanner application (1997)…
A microcontroller with an FPGA microsequencer for disk drive test applications (1997)…
A microcontroller running fuzzy algorithms for monitoring and control of a 20-channel analog system (1997-8)…
Design of a 450 MHz, 1 Gig Pentium II for embedded control (1998)…
I2C Masters & Slaves (1998)…
100 MHz data stream deinterleaver (1998-9)
OC12 FO data stream control for an ATM switching environment and related projects (1999)…
Research on Neural Communication Systems, including custom bus design, protocols, routers, filters, neural coding schemes, and related issues, is ongoing.  [10, 12-15]
Experienced with a variety of CAD/CAE tools, including ORCAD Schematic Capture and HDL for PLD design, ViewLogic Pro Series Schematic Capture, Xilinx FPGA design tools (just moved up to the Foundation Series), Cypress WARP VHDL CPLD tools, Altera FPGA design tools (Magnum), and PADS PowerPCB for PCB layout.  Experienced with several bus standards, including working group activities [2-6, 11, 20]: VME, ISA (AT), SCI, Fastbus, SBus, PCI, PICMG, and Multibus.  Knowledgeable about high-speed techniques, including signal termination and EMI precautions.
27 years of professional experience (only the last 15 summarized here) plus many more as a working student.  See the resume in another section for further details.
Menu…

Publications, TRs, and References (annotated):
1. Abraham & Shaw, Dynamics: The Geometry of Behavior (Vol. I-IV ), Aerial Press, (Santa Cruz), 1984.  [A highly visual introduction to the notions of Dynamical Systems.]
2. Deiss, S., Downing, R.W., Gustavson, D.B., Larsen, R.S., Logg, C.A., Paffrath, L., “Applicability of the Fastbus Standard to Distributed Control,” presented at the Particle Accelerator Conference in Washington, D.C., 1981.  Also SLAC-PUB-2703, Stanford Linear Accelerator Center, Menlo Park.  [Overview of Fastbus.]
3. Deiss, S., “A Fastbus Controller Using a Multibus MPU,” Proceedings of the Nuclear Science Symposium in Washington, D.C., IEEE, (New York), 1982.  Also SLAC-PUB-2994, Stanford Linear Accelerator Center, Menlo Park.  [Described the SLAC Fastbus controller which adapted any Multibus master board to the 10K ECL Fastbus.]
4. Deiss, S., Gustavson, D.B., “Software for Managing Multicrate FASTBUS Systems,” Proceedings of the Nuclear Science Symposium in Washington, D.C., IEEE, (New York), 1982.  Also SLAC-PUB-2995, Stanford Linear Accelerator Center, Menlo Park.  [Detailed description of a microprocessor algorithm that used VM simulation to manage a very large data acquisition system data base for HEP experimental galleries.]
5. (Deiss, S. as one Working Group Member), IEEE Standard FASTBUS Modular High-Speed Data Acquisition and Control System (ANSI/IEEE 960-1986), IEEE, (New York), 1986.  [Contributed software for initializing and managing multi-segment data acquisition systems including a solution to the broadcast tree initialization problem.]
6. (Deiss, S. as one Working Group Member), IEEE Standard for a Versatile Backplane Bus: VMEbus (ANSI/IEEE 1014-1987), IEEE, (New York), 1987.  [Reviewed for final ballot and helped set up VITA Trade Association User Groups.]
7. Deiss, S., “Artificial Neural Systems Engineering and Analysis,” abstract of poster in Proc. 1st Annual Meeting of the International Neural Network Society in Boston, Pergamon, (Boston), 1988.  Available as TR on request.  [Discussed a range of issues including need for understanding sequential cognitive processes via dynamics.]
8. Deiss S., & Works, G., “Application of SAIC’s Neurocomputer and Neural Networks Software,” invited paper in Proc. 4th Annual Artificial Intelligence and Advanced Computing Technologies Conference in Long Beach (Murray Teitell, ed.), Tower Conference Management, Glen Ellyn, Ill., 1988.  [Emphasized pattern recognition based on sub-symbolic computation (as in neural nets rather than rule-based systems) as a better approach for AI.]
9. Deiss, S., Hicks, W., Kasbo, R., Morse, K., Muenchau, E., Works, G., “The SAIC Delta Neurocomputer Architecture,” abstract of invited talk in Proc. 1st Annual Meeting of the International Neural Network Society in Boston, Pergamon, (Boston), 1988.  See also US Patent No. 4,974,146 entitled “Array Processor,” by G. Works et al., Nov. 27, 1990.  [Talked mainly about neurocomputer ‘specsmanship’ which continues to be a glass bead game for braggarts.]
10. (Deiss, S. as Study Group Chairman), “Neural Systems Interface and IEEE Standards,” a report to the IEEE Microprocessor Standards Committee, 1989. Available as TR on request.  [Suggested directions and constraints on future use of Futurebus+ and Scalable Coherent Interface in neurocomputing applications.]
11. (Deiss, S. as one Working Group Member), IEEE Standard for Scalable Coherent Interface (SCI) (ANSI/IEEE 1596-1992), IEEE, (New York), 1992.  [Contributed to definition of a broadcast capability with NN applications in mind.]
12. Deiss, S., “Communication Architectures for Large Neural Network Implementations,” abstract of poster in Program for Snowbird Neural Networks for Computing Conference, AT&T, 1993.  [Showed how to simulate a large network of spiking neurons with a raster version of the space-time-attribute code. No paper.]
13. Deiss, S., “Event Broadcast Speed, Latency, and Variability in VLSI Neuromorphs,” abstract of poster in Program for Snowbird Neural Networks for Computing Conference, AT&T, 1994.  [Explored problems of TDMA bus use. No paper.]
14. Deiss, S., “Temporal Binding in Analog VLSI,” poster in Proc. World Conference on Neural Networks in San Diego, INNS, (Boston), 1994.  [Looks at time representation issues from multiple levels.]
15. Deiss, S., “Connectionism without the Connections,” invited paper in Proc. World Congress on Computational Intelligence in Orlando, IEEE, (New York), 1994.  [Explains space-time-attribute code and timing considerations in scaling large neural networks of spiking neurons.]
16. Domany, E., van Hemmen, J.L., Schulten, K. (eds.), Models of Neural Networks II, Springer-Verlag, (New York), 1994.  [Subtitled: Temporal Aspects of Coding and Information in Biological Systems; good coverage of the coherent firing view of networks with excellent contributors.]
17. Graf, H.P., Janow, R., Nohl, C.R., Ben, J., “A Neural Network Board System for Machine Vision Applications,” in Proc. International Joint Conference on Neural Networks in Seattle, IEEE, (New York), 1991.  [Describes wire-wrap proto later improved and redone with multiwire technology.]
18. Hebb, D.O., The Organization of Behavior , Wiley, (New York), 1949.  [A brilliant neural theory for its time sometimes characterized by the tough-minded as a lucky guess.]
19. Padulo, L., Arbib, M.A., System Theory , Saunders, (Philadelphia), 1974.  [Well balanced basic text.]
20. Paffrath, L. et al., “FASTBUS Demonstration Systems,” invited paper in Proceedings of the Nuclear Science Symposium in San Francisco, IEEE (New York), 1981. Also SLAC-PUB-2835, Stanford Linear Accelerator Center, Menlo Park.  [Describes FASTBUS demos given at NSS Meeting including software detailed in 4 above.]
21. Sackinger, E., Boser, B.E., Bromley, J., LeCun, Y., Jackel, L.D., “Application of the ANNA Neural Network Chip to High-Speed Character Recognition,” in IEEE Trans. on Neural Networks , Vol. 3, No. 3, May 1992.  [Shows the VME version of the 1st ANNA design.]
22. Sackinger, E., Boser, B.E., Jackel, L.D., “A Neurocomputer Board Based on the ANNA Neural Network Chip,” in Advances in Neural Information Processing Systems 4, (Moody et al., eds.), Morgan Kaufmann, San Mateo, 1992.  [More on the VME and the ISA/AT ANNA boards.]
23. Sackinger, E., Graf, H.P., “A System for High-Speed Pattern Recognition and Image Analysis,” in Proc. of the 4th International Conference on Microelectronics for Neural Networks and Fuzzy Systems in Turin, IEEE, (New York), 1994.  [2nd generation ANNA board described.]
24. Sackinger, E., Graf, H.P., “A Board for High-Speed Image Analysis and Neural Networks,” in IEEE Trans. on Neural Networks , Vol 7, No. 1, Jan 1996.  [More on high speed sequencer ANNA board applications.]
25. Tam, S., Holler, M., Brauch, J., Pine, A., Petterson, A., Anderson, S., Deiss, S., “A Reconfigurable Multi-Chip Analog Neural Network; Recognition and Back-Propagation Training,” in Proc. International Joint Conference on Neural Networks in Baltimore, IEEE, (New York), 1992.  [Describes ETANN Multichip Board and its use.]
26. Tomlinson, M.S., Walker, D.J., Sivilotti, M.A., “A Digital Neural Network Architecture for VLSI,” in Proc. International Joint Conference on Neural Networks in San Diego, Lawrence Erlbaum, (Hillsdale, NJ), 1990.  [Two boards were done for the DNNA chip architecture described here.]
27. Tomlinson, M.S., “ORINCON’s VLSI Chip Executes Neural Nets Faster,” in “The Wave” by Orincon Corp., (San Diego), Nov./Dec. 1991.  [Board was done for Orincon’s DARPA contract. Similar to DNNA chip.]
28. Sheu, B., Choi, J., Chang, R., Neural Information Processing and VLSI, Kluwer, (New York), 1995.  [See the section on the Silicon Cortex (SCX) board.]
29. Deiss, S., Douglas, R., Whatley, A., “A Pulse-Coded Communications Infrastructure for Neuromorphic Systems,” in Pulsed Neural Networks, W. Maass, ed., MIT Press, 1999.  [Good overview of SCX with emphasis on the biological motivation behind it.]
30. Fujii, H., Ito, H., Aihara, K., Ichinose, N., Tsukuda, M., “Dynamical Cell Assembly Hypothesis – Theoretical Possibility of Spatio-temporal Coding in the Cortex,” in Neural Networks, Vol. 9, No. 8, p. 1303, Pergamon, 1996.  [A good contemporary overview of the viewpoint taken here.]
Menu…

Related Web Sites:
0. You are at http://www.cerfnet.com/~deiss
1.   Koch’s Lab at CALTECH
2.   Mead’s Lab at CALTECH
3.   Douglas’ Lab in Zurich
4.   Sejnowski’s Lab at the SALK : Computational Neurobiology
5.   IEEE Neural Networks Council
6.   International Neural Network Society
7.   CNS Program at CALTECH
8.   USC Brain Project
9.   Assoc. for the Scientific Study of Consciousness
10.   Silicon Cortex Project
11.   Address Event Protocol
12.   Virtual Wires
13.  HRP on the Zero Point Fields and Inertia
14.  Shipov on the “Physical Vacuum”
.
.
99. Suggestions welcome…
Menu…

A Personal Note
My original passions were particle physics, cosmology, and math.  However, I had questions that went to the core of how we can know anything for sure, and this went beyond traditional physics or science.

After a year spent studying philosophy, religion, theology, and apologetics at a small midwestern seminary/college, I transferred to major in Philosophy at Michigan (mind-body, free will, consciousness) with a second major going in Psychology (cognitive, sensory, neural).  The philosophy and psych continued right up to the last minute, when I switched tacks to prepare for graduate work in Computer Science at Purdue (AI, analogical reasoning, neural models).  Someone had told me that the ARMY used Philosophy majors for cannon fodder, and I was 1-A with a lottery number of 33 during the Viet Nam era.  CS seemed like an expeditious compromise at the time.  With Induction Orders looming on the horizon, I tutored my way through advanced math, took advanced undergrad CS classes, and then raced through an MS in CS in 12 months.  At both the undergrad and graduate level I took courses in Human Information Processing (now known as Cognitive Science) and in Neural Modeling of Psychological Processes (now known as Cognitive or Computational Neuroscience).  The draft orders were finally issued, but they were cancelled due to a troop pull-back.

To frame the times, note that this was right after Minsky and Papert published Perceptrons and neural networks went underground. AI was alive but still considered a little on the loony fringe of “Computer Science,” a field still struggling today to justify calling itself a science.  There were no related jobs outside academia.

Frustrated by the rigid disciplinary boundaries I had always been up against, I decided to try to impact education in my first job out, as an Assistant Professor working in the area of Computer Aided Instruction with special interest in individualized instruction via what later became known as ‘PC’ platforms.  (There is nothing more frustrating than being told that you cannot study something you are eager to learn about because you lack the prerequisites.  One should have the opportunity to study anything, with the option to prove competence for credit on one’s own schedule.)  These were the days of big mainframe computing, but also of the new 8080 and the LSI-11.  The Dynabook prototypes were running at PARC.  Univ. of Illinois networked the world with PLATO and TUTOR.  People talked of “hypertexts,” “computer liberation,” and “dream machines.”  These technologies are now taken for granted.  With the Web, perhaps one day we will only need schools for certification and proof of mastery, encouragement, mentoring, collaboration, and research – not so much for classroom instruction and lock-step curricula.

After leaving the academic environment, I knocked around and bootstrapped myself into electronic hardware design (originally so I could design my own CAI ‘PC’ hardware).  I’ve been doing hardware for the last 17 years, after 10 years of mostly just software.  After Hopfield’s influential papers and then the PDP books came out (1985), I thought there might yet be a niche for me.  I did a quick career-warp right into Neurocomputing.

Other long-standing interests of mine are in Physics (Strings, Time, Gravitation & Inertia, Quantum Mechanics, and Cosmology) and in scientific interpretation of age-old views of the mind-body problem (Taoism, Hinduism, Zen).  I like reading about Theories of Everything (and I am working on a Theory of Everything Else).  Most recently I have been focused on learning about torsion field theory, zero point fields, and what all that says about unification of classical and quantum phenomena and maybe more.  Since this taxes my math background, I am reviewing advanced mathematical methods of physics.

To balance out these grandiose interests I hasten to add that I AM NOT a New Age groupie, I have never been abducted by a UFO, I never saw Bigfoot, and I’m not much interested in astral planes or parapsychology.  I do have a life and a family with children too.  We have a Ford and a Chevy.  If I ever ordered personalized license plates, one would read D4SBWDU and the other would read LSN2D4S.

My pet peeve, for obvious reasons, is the way some people prejudge any engineer as being narrow, linear, and generally unenlightened.  This seems to be a common ego defense mechanism (as are most forms of prejudice) whose origin would make an interesting PhD thesis topic.  Obviously, this hostility is misdirected and would be more appropriately aimed at Doctors, Lawyers and Politicians.    😉

Books that have shaped or warped me are:

Einstein: Creator and Rebel by B. Hoffman
Revised Coherence Methodology by Victor Mathews
The Theory of Cognitive Dissonance by Leon Festinger
The Transcendence of the Ego by Jean-Paul Sartre
The Organization of Behavior by D. O. Hebb
On Being Free by Frithjof Bergmann
What Computers Can’t Do by Hubert Dreyfus
Introduction to Systems Philosophy by Ervin Laszlo
Memory and Attention by Don Norman
The Way of Zen by Alan Watts
Zen and the Art of Motorcycle Maintenance by Robert Pirsig
Understanding Zen by Radcliff & Radcliff
The Whispering Pond by Ervin Laszlo
The Elegant Universe by Brian Greene
Philosophy in the Flesh by George Lakoff

(HTML Resume available for clients and headhunters)

Menu…