Now that Containerhouse.AI v1.0 is live, we can focus on v2.0, which incorporates machine learning algorithms. Here's a screenshot of some of our testing. Oh, and we're running inside Docker containers (fitting for a container company, we think)!
The code for Containerhouse.AI™ has been approved by the SketchUp team for listing in the Trimble SketchUp Extension Warehouse. After some business paperwork, it will appear in our store here: http://extensions.sketchup.com/en/user/10099891/store
The philosophical school stemming from Aristotle says the proper object of human intellect is intelligible form existing in sensible data (broadly termed images hereafter). Our senses receive material input from the here and now, and the intellect subsequently abstracts laws, mathematical structures, and other intelligible essences from this temporospatial data (data that always diverges from the ideal).
Not only does the whole intellectual process begin from sensible data; it also cannot proceed apart from images, whether mental or in the external world. We move from images to words, and back from words to images in the course of understanding those words.
In the Middle Ages, Thomas Aquinas spoke of a conversio ad phantasmata. To understand concepts (expressed in words), we must convert them to images, as mentioned above. To understand the word ‘circle’, for instance, we must imagine (or draw) a circle. It can be red, slightly dented along the circumference, showing more or fewer radii (or none), etc., just as long as the matter is sufficiently informed to communicate the intelligible entity (in the case of the circle, a locus of coplanar points equidistant from a center). Such images are refined as our understanding of the concepts deepens.
A fuller mental process sketched from philosophy is thus: when someone says the word ‘circle’ to me, my agent intellect (active) summons a phantasm, or mental image, which impresses the ‘form’, or ‘intelligible species’ onto my possible intellect (passive). This last describes the phenomenon known as ‘insight’, and subsequent to this act of understanding there proceeds in the mind an ‘inner word’, to which outer, spoken and written words refer. Incidentally, linguists such as Noam Chomsky talk about complex, hierarchical inner language forced into linear sentences by the physical nature of our speech.
On this telling, natural language algorithms that scan documents for vectors of words do not understand the way human beings do. Perhaps the brain's first reaction to words is indeed association with other words, but we quickly move to images to supply context.
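To make the contrast concrete, here is a toy sketch of the word-vector approach in its simplest form (a bag-of-words model with cosine similarity; the example sentences are our own). Two sentences score as "similar" purely because they share words, with no image or insight anywhere in the process.

```python
from collections import Counter
import math

def bow_vector(text):
    """Bag-of-words encoding: a document becomes a vector of word counts."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u if w in v)
    norm = lambda c: math.sqrt(sum(n * n for n in c.values()))
    return dot / (norm(u) * norm(v))

a = bow_vector("a circle is a locus of points equidistant from a center")
b = bow_vector("a circle is a round plane figure with a center")
c = bow_vector("ships sail across the open sea")

# Word overlap, not understanding: a is "closer" to b than to c
# only because they happen to share vocabulary.
print(cosine(a, b), cosine(a, c))
```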
It would be interesting to hear from brain scientists what might be this ‘agent intellect’ that raises an unlimited number of mental images in the course of coming to know a thing. Perhaps there is a biological algorithm to superimpose (with a degree of random aggregating) partial representations of the world (such as those proposed by Jeff Hawkins in the sensory-motor context) to produce a suitable image.
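As a purely illustrative toy (not Hawkins's actual model, and not a claim about biology), superimposing random partial representations can be sketched like this: each "glimpse" retains only a random subset of a pattern's on-bits, yet the union of enough glimpses recovers a progressively fuller image.

```python
import random

random.seed(0)  # deterministic for the demonstration

# A hypothetical pattern to be known: a set of on-bits in an 8x8 grid.
pattern = {(x, y) for x in range(8) for y in range(8) if (x + y) % 3 == 0}

def glimpse(full, keep=0.4):
    """A partial representation: each on-bit survives with probability `keep`."""
    return {p for p in full if random.random() < keep}

image = set()
for _ in range(10):
    image |= glimpse(pattern)  # superimpose (aggregate) the partial views

print(len(image & pattern) / len(pattern))  # fraction of the pattern recovered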
In the twentieth century, Bernard Lonergan said the history of philosophy is the “oversight of insight.” An example he often cited was Proposition 1 of Euclid’s Elements, which involves the construction of an equilateral triangle. The Proposition contains an unacknowledged insight that went mostly unnoticed for more than two thousand years: the two circles drawn in the construction are said to intersect at a point ‘C’, yet nothing in Euclid’s definitions, axioms, or postulates allows one to conclude that the circles must intersect. Human intelligence necessarily sees this ‘must’ in the image, and for all those centuries each reader of Euclid had the insight and understood the proof, despite Euclid’s lack of rigour.
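The insight Euclid leaves implicit can be exhibited numerically. In Proposition I.1, circles of radius |AB| are drawn about A and about B; the point C where they meet (which the diagram shows but the axioms never guarantee) completes the equilateral triangle. A short check, using a concrete horizontal segment A = (0, 0), B = (1, 0):

```python
import math

A, B = (0.0, 0.0), (1.0, 0.0)
r = math.dist(A, B)  # both circles have radius |AB|

# For this horizontal AB, the upper intersection C lies on the
# perpendicular bisector of AB, at height sqrt(r^2 - (r/2)^2).
mx, my = (A[0] + B[0]) / 2, (A[1] + B[1]) / 2
h = math.sqrt(r**2 - (r / 2) ** 2)
C = (mx, my + h)

sides = (math.dist(A, B), math.dist(B, C), math.dist(C, A))
print(sides)  # all three sides equal: the triangle ABC is equilateral
```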
The data supplied to machines by human beings in everyday situations will always be less than rigorous and will contain manifold, unexpressed insights: partly because our speech regularly relies on common sense (i.e. shared insights held by a community of people), and partly because we are often not even consciously aware of the insights contained in our knowledge, and hence of those only implicitly expressed in communicated instructions, desires, etc.
Until a machine not only remembers things it has seen (by encoding an input and comparing on-bits to those of previously stored representations) but freely imagines new things--and abstracts from those things--we are not dealing with general intelligence in machines (at least not an intelligence fully analogous to human intelligence, nor as autonomous).
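The "remembering" mode described above can be sketched in a few lines (the labels and bit assignments are invented for illustration): encode an input as a set of on-bits and recall the stored representation whose on-bits overlap it most. Nothing in this procedure imagines a new image or abstracts anything from one.

```python
# Hypothetical stored representations: each label maps to a set of on-bits.
memory = {
    "circle":   frozenset({1, 4, 7, 9, 12}),
    "square":   frozenset({2, 3, 7, 8, 14}),
    "triangle": frozenset({0, 4, 5, 9, 11}),
}

def recall(code):
    """Return the stored label whose on-bits best overlap the input's."""
    return max(memory, key=lambda label: len(memory[label] & code))

print(recall(frozenset({1, 4, 9, 13})))  # -> circle
```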
Our extension for SketchUp is nearing release. What remains is mostly HTML formatting.
The video below shows how the control panel is now used to operate the Ruby script. Length, width, and height of the building can be set, space blocks can be inserted, and options are given for fixed x and y coordinates and degrees of rotation. Most importantly, users can now select components and quantities from the import folder to arrange inside the desired space.
Stay tuned for news of formal release in the Trimble SketchUp Extension Warehouse. To see more demos, please visit booth 4225 at OTC 2016 next week, May 2-5.