It’s worth pausing for a moment to look back at the challenges we’ve encountered thus far in our pursuit of a true general intelligence.
The first challenge we encounter is that we need to define a theory of learning rigorous enough to actually express as an algorithm. To date, even this has proven elusive, suggesting that we don’t yet understand learning well enough to effectively ‘teach’ it to a machine. Once we have adequately defined our theory of learning, we then need to express it in the form of an algorithm precise enough to leave no ambiguity in its processing.
Finally, we need to ensure that the algorithm developed is not susceptible to space, time or human complexity – effects which could each impact the learner’s potential for success, regardless of how rigorous our central thesis may be. An additional challenge – one of controlling and containing any resulting intelligence – is a subject for a later discussion. For now, overcoming each of these initial challenges would be the first steps towards the development of a successful machine intelligence.
Both individually and collectively, these challenges have meant that researchers have so far failed to develop a true cross-purpose learner. Fortunately, the decades since the Dartmouth Summer Project have seen a handful of algorithms rise to the surface, which – though not general-purpose learners in and of themselves – have laid the foundations for many recent successes in the field.
Relatively simple programs such as Naive Bayes, nearest neighbour and decision tree algorithms have been used in everything from spam filters to product recommendations, often replacing hand-written code running to millions of lines. Rather than being built for a specific task, these algorithms achieve their versatility by approximating functions arbitrarily closely given access to data. Though the volume of data required for algorithms such as these to learn a function may, in principle, be unbounded, the fact that most functionality can indeed be replicated in this way suggests that a single algorithm could, in theory, go on to learn everything which can be learned.
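To make this concrete, the sketch below shows how one of these algorithms, Naive Bayes, can act as a tiny spam filter: rather than encoding hand-written rules, it learns word frequencies from labelled examples and classifies new text by comparing probabilities. The training data and whitespace tokenisation here are purely illustrative assumptions, not a production design.

```python
import math
from collections import Counter

def train(examples):
    """Count word occurrences per label and how many examples carry each label."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label maximising log P(label) + sum of log P(word|label),
    using add-one smoothing so unseen words don't zero out a class."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    best_label, best_score = None, float("-inf")
    for label in counts:
        score = math.log(totals[label] / sum(totals.values()))  # prior
        denom = sum(counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Illustrative training set: four labelled messages.
examples = [
    ("win money now", "spam"),
    ("cheap prize claim now", "spam"),
    ("meeting at noon", "ham"),
    ("lunch with the team", "ham"),
]
counts, totals = train(examples)
print(classify("claim your prize money", counts, totals))  # prints: spam
```

The entire "program" is a few dozen lines because the behaviour lives in the data: swap the training examples for product ratings instead of emails and the same code becomes a crude recommender, which is precisely the versatility described above.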