A response to HTM Minute: What is Intelligence? from Numenta by George Vernau

The philosophical school stemming from Aristotle says the proper object of human intellect is intelligible form existing in sensible data (broadly termed images hereafter). Our senses receive material input from the here and now, and from this temporospatial data (which always diverges from the ideal) we abstract laws, mathematical structures, and other intelligible essences.

Not only does the whole intellectual process begin from sensible data; it remains impossible apart from images, whether mental or in the external world. We move from images to words, and we move back from words to images in the course of understanding those words.

In the Middle Ages, Thomas Aquinas spoke of a conversio ad phantasmata. To understand concepts (expressed in words), we must convert these to images, as mentioned above. To understand the word ‘circle’, for instance, we must imagine (or draw) a circle--it can be red, slightly dented along the circumference, showing more or fewer radii (or none), etc., just as long as the matter is sufficiently informed to communicate the intelligible entity (in the case of the circle, a locus of coplanar points equidistant from a center). Such images are refined as our understanding deepens and concepts are refined.

A fuller mental process sketched from philosophy is thus: when someone says the word ‘circle’ to me, my agent intellect (active) summons a phantasm, or mental image, which impresses the ‘form’, or ‘intelligible species’ onto my possible intellect (passive). This last describes the phenomenon known as ‘insight’, and subsequent to this act of understanding there proceeds in the mind an ‘inner word’, to which outer, spoken and written words refer. Incidentally, linguists such as Noam Chomsky talk about complex, hierarchical inner language forced into linear sentences by the physical nature of our speech.

On this telling, natural-language algorithms that scan documents for word vectors do not understand as human beings do. Perhaps the brain's first reaction to words is indeed association with other words, but we quickly move to images to supply context.

It would be interesting to hear from brain scientists what this ‘agent intellect’ might be that raises an unlimited number of mental images in the course of coming to know a thing. Perhaps there is a biological algorithm that superimposes (with a degree of random aggregating) partial representations of the world (such as those proposed by Jeff Hawkins in the sensory-motor context) to produce a suitable image.

In the twentieth century, Bernard Lonergan said the history of philosophy is the “oversight of insight.” An example he often cited was Proposition 1 of Euclid’s Elements, which involves the construction of an equilateral triangle. The Proposition contains an unacknowledged insight that went mostly unnoticed for more than two thousand years: the two circles are said to intersect at a point ‘C’, yet nothing in Euclid’s definitions, axioms, or postulates allows one to conclude that the circles must intersect. Human intelligence necessarily sees this ‘must’ in the image, and for those many centuries each human agent reading Euclid had the insight and understood the proof, despite Euclid’s lack of rigour.
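The point C that Euclid assumes can at least be exhibited numerically. A minimal Ruby sketch (not from any of these posts; coordinates chosen for illustration): place A at the origin and B one unit away, draw circles of radius AB about each, and check that their upper intersection C makes all three sides equal.

```ruby
# Euclid, Elements I.1: on segment AB, draw circles centered at A and B,
# each with radius AB. Their upper intersection C yields equilateral ABC.
# With A = (0, 0) and B = (1, 0), that intersection is C = (0.5, sqrt(3)/2).

def distance(p, q)
  Math.sqrt((p[0] - q[0])**2 + (p[1] - q[1])**2)
end

a = [0.0, 0.0]
b = [1.0, 0.0]
c = [0.5, Math.sqrt(3) / 2]   # where the two circles meet

# All three sides have the same length, so the triangle is equilateral.
# (The circles must meet because each radius equals the distance between
# the centers -- the continuity fact Euclid's postulates never state.)
side_ab = distance(a, b)
side_ac = distance(a, c)
side_bc = distance(b, c)
```

Of course, computing C presupposes the very insight Lonergan points to: the computation only works because we already see, in the image, that the circles meet.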

The data supplied to machines by human beings in everyday situations will always be less than rigorous and contain manifold, unexpressed insights, partly because our speech regularly relies on common sense (i.e. shared insights held by a community of people), and partly because we are often not even consciously aware of the insights contained in our knowledge, and hence those only implicitly expressed in communicated instructions, desires, etc.

Until a machine not only remembers things it has seen (by encoding an input and comparing on-bits to those of previously stored representations) but freely imagines new things--and abstracts from those things--we are not dealing with general intelligence in machines (at least not an intelligence fully analogous to human intelligence, nor as autonomous).
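The remembering-by-comparison described above can be sketched as overlap counting between sparse binary codes, in the spirit of Numenta's sparse distributed representations. A toy Ruby illustration, with invented encodings (the on-bit indices below are made up, not the output of any real encoder):

```ruby
# Toy recognition by on-bit overlap: an input encoding is compared to
# previously stored sparse representations, and the best match wins.
# Each representation is the set of indices of its on-bits.

STORED = {
  "circle"   => [2, 7, 11, 19, 23],
  "triangle" => [3, 7, 12, 20, 31],
}

def overlap(a, b)
  (a & b).size   # number of shared on-bits
end

def best_match(input)
  STORED.max_by { |_, bits| overlap(input, bits) }.first
end

input = [2, 7, 11, 19, 40]   # a noisy version of "circle"
puts best_match(input)       # prints "circle"
```

Such a scheme matches new inputs against what it has already seen; what it does not do, as the paragraph argues, is freely generate new images and abstract from them.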

(Reference: HTM Minute: What is Intelligence? from Numenta)

Containerhouse.AI™ 1.0 Demo by George Vernau

Our extension for SketchUp is nearing release. What remains is mostly HTML formatting.

The video below shows how the control panel is now used to operate the Ruby script. The building's length, width, and height can be set, space blocks can be inserted, and options are given for fixed x and y coordinates and degrees of rotation. Most importantly, users can now select components and quantities from the import folder to arrange inside the desired space.
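The fixed-coordinate and rotation options amount to a plane transformation of each component's footprint. A standalone Ruby sketch of the idea (the function name, footprint, and dimensions are illustrative, not the extension's actual code, which drives the SketchUp API rather than raw coordinates):

```ruby
# Rotate a component footprint by `degrees` about the origin, then
# translate it to a fixed (x, y) position -- the two placement options
# offered in the control panel.
def place(corners, x, y, degrees)
  rad = degrees * Math::PI / 180.0
  cos, sin = Math.cos(rad), Math.sin(rad)
  corners.map do |(cx, cy)|
    [cx * cos - cy * sin + x, cx * sin + cy * cos + y]
  end
end

footprint = [[0, 0], [2, 0], [2, 1], [0, 1]]   # a 2 x 1 component
placed = place(footprint, 5, 3, 90)
# After the 90-degree rotation, the corner (2, 0) lands at (5, 5)
# (up to floating-point rounding).
```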

Stay tuned for news of formal release in the Trimble SketchUp Extension Warehouse. To see more demos, please visit booth 4225 at OTC 2016 next week, May 2-5.

Containerhouse.AI: Next Stage by George Vernau

We continue to develop the code that will allow a machine to determine the optimum layout for a desired space. At this stage, our program chooses coordinates and rotations that keep components completely inside the finished room and away from the middle. It also iterates through hundreds of combinations to make sure no two components intersect (overlap)--something generally impossible in physical space! Most of the work up to now has been learning the SketchUp API so as to manipulate the model via Ruby code. Our next step is to develop neural networks by which the computer will 'learn' to arrange components with no instructions other than the available methods and a variety of success functions.
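The overlap check mentioned above can be sketched in its simplest possible form: axis-aligned bounding boxes tested pairwise. This is only an illustration of the kind of test involved; the extension's actual geometry code (which works through the SketchUp API) is not shown in the post.

```ruby
# Reject a layout if any two components' bounding boxes intersect.
# Boxes are [xmin, ymin, xmax, ymax]; touching edges count as disjoint.
def boxes_overlap?(a, b)
  a[0] < b[2] && b[0] < a[2] && a[1] < b[3] && b[1] < a[3]
end

def layout_valid?(boxes)
  boxes.combination(2).none? { |a, b| boxes_overlap?(a, b) }
end

good = [[0, 0, 2, 1], [3, 0, 5, 1]]   # separated components
bad  = [[0, 0, 2, 1], [1, 0, 3, 1]]   # overlapping in x
puts layout_valid?(good)   # prints "true"
puts layout_valid?(bad)    # prints "false"
```

Iterating candidate placements and discarding any that fail such a test is one straightforward way to realize the "hundreds of combinations" the post describes.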

Insights into neuroscience provided by computer programmers by George Vernau

Jeff Hawkins, the inventor of the PalmPilot, now works on artificial intelligence. We're fascinated to observe the way in which his team's attempts to model a machine on the human brain provide insights into the workings of the brain itself.

The link below is to their most recent paper, archived at Cornell University:

Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex by Jeff Hawkins and Subutai Ahmad