Space, time and human complexity

DIFFERENT TYPES OF COMPLEXITY CAN EACH HAVE A DRAMATIC IMPACT ON OUR ABILITY TO SUCCESSFULLY REALISE A TRUE GENERAL INTELLIGENCE

Any successful bid to develop an algorithm capable of true cross-domain learning must overcome three major complexity challenges, each of which has the potential to dilute, contaminate or otherwise jeopardise any resultant output. Failing to overcome any one of them suggests that the model is, in fact, overly complex or ineffective – and, in line with what we’ve learned so far about learning algorithms, there are almost certainly opportunities for any such model to be refined.

The first challenge to be overcome – that of space complexity – concerns the number of bits of information the computer must store in its memory in order to run the algorithm. Even the most powerful computers have memory limitations, and if the storage the algorithm demands exceeds these, then the model is too complex and should be discarded.
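
Space complexity can be made concrete with a small back-of-the-envelope sketch. The snippet below is purely illustrative – the function names, byte counts and parameter figures are assumptions chosen for the example – but it contrasts a learner that stores every example it encounters with one whose memory footprint is fixed by its parameter count. Only the latter stays within a machine’s memory limits as the data keeps growing.

```python
# A minimal, purely illustrative sketch (all names and figures here are
# hypothetical): comparing the memory footprint of a learner that memorises
# every example it has ever seen with one that compresses its experience
# into a fixed set of parameters.

def lookup_table_bytes(num_examples: int, bytes_per_example: int = 64) -> int:
    """Storage grows without bound as the learner keeps every example."""
    return num_examples * bytes_per_example

def parametric_model_bytes(num_parameters: int, bytes_per_parameter: int = 4) -> int:
    """Storage is fixed by the model's size, however much data it sees."""
    return num_parameters * bytes_per_parameter

for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} examples: "
          f"lookup table ~{lookup_table_bytes(n) / 1e9:.3f} GB, "
          f"1M-parameter model ~{parametric_model_bytes(1_000_000) / 1e9:.3f} GB")
```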

Space-related complexity can have a significant impact – not just on the likelihood of a generalised intelligence being developed, but also on the effect that any resulting intelligence may have on its environment. Where related complexities allow errors to creep into the model, some researchers have warned that our entire universe could become compromised in a phenomenon known as perverse instantiation. In this scenario, the resulting intelligence could end up transforming all matter in the universe into additional processing power, simply to extend its available memory for the purpose of addressing the task at hand. Though perverse instantiation and dangers such as this will be the focus of future discussions, preemptively limiting the demands on a computer’s memory may help avoid catastrophe on a universal scale…!

Bearing similarities to space complexity, time complexity considers the time it takes for a machine to run an algorithm. For a machine to process instructions it must use and reuse combinations of billions of tiny transistors. As the number of steps involved in transforming data and other inputs into a desired output increases, so too will the time the algorithm takes to run. If this is longer than we are prepared to wait, then the affected algorithm should also be discarded.
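
To see why step count matters, consider the rough arithmetic below – a purely illustrative sketch in which the assumed machine speed and the two step-count functions are inventions for the example, not measurements of any real system. An algorithm whose steps grow with the square of its input size quickly moves from fractions of a second to waits measured in months.

```python
# A minimal, purely illustrative sketch (the step counts and the assumed
# machine speed are hypothetical): estimating how long two algorithms
# would take to run as the size of their input grows.

OPS_PER_SECOND = 1_000_000_000  # assume a machine doing a billion simple steps per second

def linear_steps(n: int) -> int:
    """An algorithm that touches each input once needs roughly n steps."""
    return n

def quadratic_steps(n: int) -> int:
    """An algorithm that compares every pair of inputs needs roughly n * n steps."""
    return n * n

for n in (1_000, 1_000_000, 100_000_000):
    print(f"n = {n:>11,}: "
          f"linear ~{linear_steps(n) / OPS_PER_SECOND:.6f} s, "
          f"quadratic ~{quadratic_steps(n) / OPS_PER_SECOND:,.3f} s")
```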

Beyond the effect which lengthy run times have on necessary development tasks such as debugging, algorithms which take months or even years to run will likely become irrelevant as less complicated models make iterative improvements over shorter generations. A fairly simple learner capable of two-week cycles of iterative self-improvement is far more likely to result in success than a more complex, human-programmed algorithm which takes several years to run. This holds even if a single run of the latter produces output several stages ahead of the former’s early iterations, simply because responsibility for addressing the question at the heart of machine learning – whether there is a ‘better’ way – has been handed off to the machine much earlier. Once again, the benefit of pursuing simplicity is evident.
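
The compounding advantage of short cycles is easy to put into numbers. The toy calculation below rests entirely on assumed figures – a two-week cycle for the simple learner and a three-year run for the complex one, both invented for illustration – but it shows how quickly the gap in iteration count opens up.

```python
# A toy, purely illustrative comparison (all figures are assumptions made
# for the example): how many self-improvement cycles each approach fits
# into the same five-year window.

WEEKS_PER_YEAR = 52
HORIZON_YEARS = 5

simple_cycle_weeks = 2    # the simple learner iterates every two weeks
complex_cycle_years = 3   # the complex model needs a multi-year run per iteration

simple_cycles = (HORIZON_YEARS * WEEKS_PER_YEAR) // simple_cycle_weeks
complex_cycles = HORIZON_YEARS // complex_cycle_years

print(f"Simple learner:  {simple_cycles} improvement cycles in {HORIZON_YEARS} years")
print(f"Complex learner: {complex_cycles} improvement cycle(s) in {HORIZON_YEARS} years")
```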

The last of the great complexity challenges – human complexity – has the potential to be the most damaging. If an algorithm’s structure becomes so complicated that even its own developers are unable to understand how it came to be, then the level of control they’re able to exert over it falls away rapidly. Errors in the model’s programming may be hidden or far from obvious, and the developers’ ability to identify and troubleshoot bugs is therefore limited at best. Any resulting algorithm will likely not do what its developers intended, and – as the learner makes iterative ‘improvements’ to its own programming – control diminishes further as opacity increases. If it does somehow end up working, then the replicability of the model’s results – and how reliable they may be – must inevitably come into question. Though it remains a distant possibility, it seems unlikely that the realisation of a stable general intelligence will result from a happy accident.

Both individually and collectively, these challenges have so far prevented researchers from developing a true cross-purpose learner. Fortunately for us, however, the decades since the Dartmouth Summer Project have seen a handful of algorithms rise to the surface which – though not general-purpose learners in and of themselves – have overcome these challenges, laying the foundations for many of the early successes emerging from the field.
