What is Artificial Intelligence?

Despite the technology's increasing ubiquity, researchers still struggle to provide a single definition of what artificial intelligence entails.


In 1956, one of the field’s founding fathers – the computer scientist John McCarthy – assembled a cross-disciplinary team of researchers with the ambition of unifying the emerging – but often diverging – theories surrounding ‘thinking machines.’ Convened to pursue the following conjecture, ‘the Dartmouth Summer Research Project on Artificial Intelligence’ went on to become the defining moment in the field’s early history:

‘The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.’

More than 60 years later, agreement on a single definition of AI remains elusive; most, however, agree that it involves describing human-like intelligence in a manner that computers can interpret and emulate. Amazon, for instance, defines AI as ‘the field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, problem solving, and pattern recognition.’ AI, in other words, attempts to define and describe intelligence in a form a computer can understand.


If this is the goal of AI researchers, then what exactly will success look like? Replicating human-like cognitive functions in a machine will require an algorithm which defines intelligence in a format a computer is able to understand and interpret. Comprising billions of tiny switches known as transistors, computers perform logic operations by turning these switches on and off in sequence, as directed by the precise set of instructions contained within the algorithm. The state of each transistor represents a single bit of information – one if the transistor is on and zero if it is off. Algorithms vary enormously in complexity, from the very simple – flipping a single transistor from on to off – to the highly complex – changing the states of billions of transistors each second, each in response to the states of its neighbours. In both these instances – and for everything in between – the algorithm details how the computer should act.
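To make the idea concrete, the short Python sketch below treats the bits of a single integer as a row of eight ‘transistors’ and shows two algorithms of very different complexity acting on them. The eight-bit representation and the variable names are illustrative assumptions for this article, not a model of real hardware.

```python
# Eight "transistors", modelled as the bits of a single integer.
# A bit value of 1 means that transistor is on; 0 means it is off.
state = 0b00000000

# The simplest possible algorithm: flip one transistor (bit 0) from off to on.
state ^= 0b00000001    # XOR with a mask toggles exactly the selected bit
print(f"{state:08b}")  # prints 00000001

# A more elaborate algorithm: flip four transistors in a single instruction.
state ^= 0b10101010
print(f"{state:08b}")  # prints 10101011
```

A real processor does nothing qualitatively different; it simply performs operations like these billions of times per second.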

Though hundreds of new algorithms are developed annually, all are built from the same set of ideas. In each instance, their basic functionality can be boiled down to just three operations – AND, OR and NOT. It is this process of transistors changing their state in response to one another which the ‘father of information theory’, Claude Shannon, described as reasoning. If computers can reason in this sense, then it naturally follows that they should also be able to learn.
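To see how far those three operations stretch, here is a minimal sketch in Python – the function names (NOT, AND, OR, XOR, half_adder) are chosen for this illustration, not drawn from any library – composing the primitives into a half adder, the circuit that underlies binary addition:

```python
# The three primitive operations, expressed over single bits (0 or 1).
def NOT(a):    return 1 - a
def AND(a, b): return a & b
def OR(a, b):  return a | b

# XOR built purely from the primitives: (a OR b) AND NOT (a AND b).
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

# A half adder: adds two bits, yielding a sum bit and a carry bit.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} = sum {s}, carry {c}")
```

Chain enough circuits like this together and you can perform any arithmetic a processor is capable of – all of it reducible, at bottom, to AND, OR and NOT.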
