Sunday, 21 June 2015

New paper “Neural foundations for the classification of AGi and Superintelligent systems”


Abstract

A proposal is made to justify the use of simplified models of self-awareness as a classification scheme for Artificial General Intelligence (AGi) and Superintelligent (Si) systems. These models are derived from entire neural topologies and their respective neural markers, such as cognitive processes and biophysical signals. Self-awareness is defined generally and then in network terms. Current proofs for AGi-Si development are reviewed, and these cast doubt on the predictive power of current algorithmic methods to guide the control and understanding of AGi-Si development. The benefits of computational neuroscience methods are expanded upon in terms of the detail and depth with which they represent likely actual AGi-Si development. It is concluded that evidence exists to justify exploring the use of general guiding frameworks for AGi-Si classification derived from computational neuroscience.

This paper was inspired by the press hype around dangers from artificial intelligence and summarizes some of the ideas I have on whether brain structures can tell us anything deterministic about the nature of general intelligence. For that we need to look at various proof systems. It proposes that a classification system for all general intelligence systems may be deterministic, and it was submitted as part of a research grant application to FHI Oxford. The central concept disagrees with the FHI press position to some degree, primarily because my work suggests that simplifications of brain structures tell us something pivotal about the dual-process nature of general intelligence. And not only that: if general intelligence has an optimal physical form, then AGi has certain types of topology. The work is still in rough shape; I will upload it to arXiv when I iron out some of the conclusions and re-do the proofs. It has some rough similarities to "The Universe of Minds" by Roman V. Yampolskiy in its references to mind classifications and computational equivalence. I wasn't aware of his paper until later; in any case, my proposal differs in that I insist we impose a physical grounding, especially for self-improving systems, which will face greater issues when dealing with physical limitations.

Digging out more from MHD brain theory for general computation

Although the first general framework was sketched in causal-logic terms in the 2013 paper, that framework is a general physics grounding, and it was always stated that the MHD theory would have to raise its game and explain more complicated implementations at multiple scales of brain structure and function. This is a big project in progress right now, but it is overdue to make some general statements on where it is going, without getting into which neural coding schemes are being evaluated.

If we look at the most prominent neural processing features, they scale across three primary neural levels, in order in the diagram above (Häusser's dendritic computations): 1. logical NAND functions and filters within neurons; 2. these can then form entire libraries of logical arrays and analogue-style resonant filter banks at the population level (some of the classifications are from Izhikevich); 3. across the entire brain at the macroscopic level, general filtering is thought to occur in multiplexed action selection, where the hemispheres increase the speed of switching sides to deal with more difficult problem tasks. A toy sketch of level 1 follows below.
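To make level 1 concrete, here is a minimal sketch (my own illustration, not from the paper): a single thresholded unit with inhibitory weights behaves as a NAND gate, the kind of logical primitive attributed here to dendritic processing. The weights and bias are arbitrary illustrative choices, not biophysical values.

```python
# Toy sketch: a thresholded unit acting as a NAND gate.
# Weights and bias are illustrative choices, not measured values.
def nand_unit(x1, x2, w=(-2.0, -2.0), bias=3.0):
    """Fires (returns 1) unless both inputs are active."""
    activation = w[0] * x1 + w[1] * x2 + bias
    return 1 if activation > 0 else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand_unit(a, b))  # only (1, 1) -> 0
```

Since NAND is logically universal, banks of such units can in principle compose any logical array, which is one way level 2's "libraries of logical arrays" can be read.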

None of this reveals any general coding scheme in detail, and I propose we will require physics models for that. For now, what we can comment on is that the highest-level general multiplexing in action selection completely belies the massive number of underlying neuron banks capable of doing something similar. There are analogies between these filter banks and the logical physical components used in deep learning, which is not surprising considering the neural roots of deep learning. What is going on in the cortico-limbic "information engine" at the overall level is still something many of us are working on. We have some rough ideas, though, and suspect large-scale architecture is the key to our general abilities on deep problems.

There is a body of literature suggesting that human performance on NP problems is good: although not optimal, approximate results do occur (see "Human performance on NP problems" in the references). Knapsack, the travelling salesman problem, and graph colouring are things we evolved for in order to travel, hunt, and deal with finite resources. It also appears that, at the same difficulty level (i.e. number of nodes), we may be better at NP problems than at P-space problems, which is the inverse of classical von Neumann architecture (with the same working-memory-to-node ratio). So we may be using a generalized NP engine for all problems, including P-space ones. However, this is a complicated area and contentious to propose right now, as the landscape of computational complexity has many overlapping facets.
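To make the "good but not optimal" point concrete, here is a minimal sketch (mine, not taken from the cited studies) of the nearest-neighbour heuristic, a common baseline model for human-like approximate tours: fast and local, and typically within a modest factor of optimal rather than exact.

```python
import math
import random

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def tour_length(points, order):
    return sum(dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbour(points):
    # Greedy, local strategy: always hop to the closest unvisited city.
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist(points[tour[-1]], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(12)]
tour = nearest_neighbour(cities)
print(tour, tour_length(cities, tour))  # approximate, not guaranteed optimal
```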


What we do know is that brain architecture is very different due to its parallel topologies. The class of NP problems is also very amenable to parallel matrix computations, and quantum computers likewise leverage parallelism; i.e. hyperconnected quantum states provide traction on these dense graph-type problems, which is not surprising (left and middle in the image above). However, classical computer architecture is also evolving towards similar hyperconnected states (supercomputer toroidal setups, right in the image above), so it could be that there is no mystery about quantum computing: it basically facilitates hyperconnected states, and we then leverage this at some given resolution. The gap between the two broad classes of serial and parallel hardware could currently be closing. What does this mean for the brain? As I have stated repeatedly in my papers and on this site, for too many reasons to go into, it is not a quantum computer. But the fact that it has a magnetic structure (which has quantum structure) arising in neurodevelopment has endowed it with hyper-connected parallelism, as part of a hybrid entropy-action system (derived from white/grey matter respectively).
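As a concrete illustration of the Ising mapping mentioned in the references (my sketch, under my own assumptions): an NP problem such as max-cut can be encoded as an Ising energy and attacked with a classical annealing loop; quantum annealers exploit the same encoding, just with hyperconnected quantum states doing the search.

```python
import math
import random

# Toy graph: a 4-cycle plus one diagonal, all couplings J_ij = 1.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

def energy(spins):
    # Ising energy H = sum over edges of J_ij * s_i * s_j.
    # Minimising H maximises the number of cut (opposite-spin) edges.
    return sum(spins[i] * spins[j] for i, j in edges)

random.seed(0)
spins = [random.choice((-1, 1)) for _ in range(n)]
T = 2.0
for _ in range(2000):
    i = random.randrange(n)
    before = energy(spins)
    spins[i] *= -1                      # propose a single spin flip
    dE = energy(spins) - before
    if dE > 0 and random.random() >= math.exp(-dE / T):
        spins[i] *= -1                  # reject the uphill move
    T *= 0.999                          # slow cooling schedule

cut = sum(1 for i, j in edges if spins[i] != spins[j])
print("spins:", spins, "cut edges:", cut)  # best possible here is 4
```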

What we can see is that the corpus callosum has a toroidal structure, and the limbic system also has similar network properties (see this summary). If we look at the association areas of the brain, they are wide-ranging and use the largest white-matter loops. This probably facilitates wide, across-network breadth searches while also maintaining columns with local order. So even with the massive internal complexity that grows across species, this basic magnetic-structure-type physics, via billions of connections through axon solitons, allows overall computational coherence and fast, synchronized integration of signals. The coding scheme itself we are still figuring out; there are many candidates to be tested. The good news for this project is that MHD structure does reveal one of the most powerful natural coding schemes known. This will be highlighted in a future publication.
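A back-of-envelope way to see why toroidal wiring supports fast, synchronized integration (my construction, not a result from the paper): at equal node count, a 2-D torus has a far smaller diameter than a simple ring, so any two regions are only a few hops apart.

```python
from collections import deque

def torus_neighbours(i, n):
    # 4-connected n x n torus: wrap-around grid neighbours.
    r, c = divmod(i, n)
    return [((r + dr) % n) * n + (c + dc) % n
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]

def diameter(num_nodes, neighbours):
    # Longest shortest path, by BFS from every node (fine at toy sizes).
    worst = 0
    for src in range(num_nodes):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in neighbours(u):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        worst = max(worst, max(dist.values()))
    return worst

n = 8
print("torus:", diameter(n * n, lambda i: torus_neighbours(i, n)))             # 8
print("ring: ", diameter(n * n, lambda i: [(i - 1) % (n * n), (i + 1) % (n * n)]))  # 32
```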


REFERENCES

Discrete optimization using quantum annealing on sparse Ising models
AI-Complete, AI-Hard, or AI-Easy: Classification of Problems in Artificial Intelligence

Human performance on NP problems
Human Performance on Hard Non-Euclidean Graph Problems: Vertex Cover
Measuring Human Performance on Clustering Problems: Some Potential Objective Criteria and Experimental Research Opportunities
Human Performance on the Knapsack Problem
MacGregor, J. N. and Ormerod, T. (1996). "Human Performance on the Traveling Salesman Problem." Perception & Psychophysics, 58(4), 527-539.
MacGregor, James N. and Chu, Yun (2011). "Human Performance on the Traveling Salesman and Related Problems: A Review." The Journal of Problem Solving, 3(2), Article 2. http://dx.doi.org/10.7771/1932-6246.1090
Neuro images
Application of bio-inspired algorithm to the problem of integer factorisation