Abstract

To better understand AI, I will be exploring its
history and development, and how it has come to be what it is today, as without
this rather turbulent and challenging history there would be no present
or future of Artificial Intelligence. I will begin with the post-WWII decade,
in which AI found its feet as a theoretical and scientific concept, and then
discuss the subsequent decades and what they meant for AI and Machine
Learning, referring specifically to the key events and discoveries within each
period.


Introduction

In this research paper I will be exploring the
history of Artificial Intelligence (AI) and Machine Learning and the key events
which shaped the field. Beginning with the post-WWII era and moving through the
eras that followed, I will link notable events to what they did for AI, and show
how the great minds of the past helped shape the present and the future.

 

Section 1.  Post WWII (1945-1955)

As futuristic as it may seem, the field of AI can
trace its roots to the post-WWII era, although its foundations were laid slightly
earlier: in 1943, Warren S. McCulloch, a neuroscientist, and Walter
Pitts, a logician, published “A logical calculus of the ideas immanent in
nervous activity” in the Bulletin of Mathematical Biophysics 5:115-133. In
this paper McCulloch and Pitts tried to understand how the brain could produce
highly complex patterns using many basic connected cells (a neural
network). These basic brain cells are called neurons, and McCulloch and Pitts
gave a highly simplified model of a neuron in their paper (Russell and Norvig,
2010, p.16).
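
To make this concrete, below is a minimal sketch (in Python, purely as a modern illustration) of the kind of threshold unit McCulloch and Pitts described; the names and weights are my own, and the 1943 paper's formulation, which treats excitatory and inhibitory inputs separately, differs in detail.

def mp_neuron(inputs, weights, threshold):
    # Fire (output 1) if the weighted sum of the binary inputs reaches the threshold.
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Wiring such units with fixed weights reproduces simple logic gates, which hints
# at how networks of very basic connected cells can compute complex patterns.
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)

print([AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
print([OR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])   # [0, 1, 1, 1]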

The work of
McCulloch and Pitts was developed further in 1951, when Minsky and Edmonds, both
undergraduates at Harvard, built a computer based on the idea of McCulloch and
Pitts’ neural networks. The computer had only 40 artificial neurons and simulated
a rat trying to escape a maze, learning from previous mistakes and
dead-ends to find the exit. In today’s climate, Minsky and Edmonds’
computer simulation would be considered nothing more than intelligent
programming, but at the time it was a significant breakthrough into the
realm of AI, and more so machine learning,
and it is widely considered to be the first artificial neural network (fig. 1).

Fig. 1 – A Neural Network

 

In 1950, mathematician Alan Turing devised his now
famous ‘Turing test’ for machine intelligence. As Kourie, D.G. (1994) points
out, “the test requires that the computer system under test as well as a conspiring
human must interact anonymously with a testing human. If the tester is unable
to identify which interactions are due to the computer, then the computer
system deserves the label ‘intelligent’” (p. 277).

 

Section 2.  The Takeoff (1955-1973)       

A key moment in the history of AI is the Dartmouth
Conference, organized by McCarthy and Minsky in 1956. The conference lasted approximately six to eight weeks and was essentially an extended brainstorming
session. Eleven mathematicians and scientists were originally planned as
attendees, and while not all of them attended, more than ten others came for short periods. It
was here that the term ‘artificial intelligence’ first appeared in print. At
the conference, McCarthy proposed:

‘a
study of artificial intelligence … (which should) proceed based on the
conjecture that every aspect of learning or any other feature of intelligence can
be so precisely described that a machine can be made to simulate it.’
(Charniak E. and McDermott D. 1985).

It was this statement that set the course
for the exploration of Artificial Intelligence, as not only did it mark the
ability to ‘learn’ as a key and necessary aspect of intelligence, it also implied
that if human intelligence could be understood, then a computer could be programmed
to mimic that intelligence. While this may sound logical, it was quite a groundbreaking
thing to say in the 1950s, for if human intelligence could be truly understood
and precisely described, it would make us nothing more than machines following
a certain set of algorithms. It was this incredibly pragmatic and somewhat
reductionist theory of the human self, advanced by McCarthy, that served, and
continues to serve, as a divisive and polarizing undertone in the research of
Artificial Intelligence today.

Two years later, in 1958, computer scientist John McCarthy developed the LISP
language, a programming language designed for AI. The opening paragraph of
McCarthy’s paper ‘Recursive Functions of
Symbolic Expressions and Their Computation by Machine, Part I’, which was
published in 1960, states:

“A
programming system called LISP (for LISt Processor) has been
developed for the IBM 704 computer by the Artificial Intelligence group
at M.I.T. The system was designed to facilitate experiments with a proposed
system called the Advice Taker, whereby a machine [...] could exhibit “common
sense” in carrying out its instructions.” (McCarthy, J. 1960)

While the LISP language largely fell out of
mainstream use after the 1980s, McCarthy’s paper contains many programming
concepts, such as conditional expressions and recursive functions, which are now
the norm (Teller, S. 2012).
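
As a rough modern illustration of that recursive, list-based style of symbolic processing (written here in Python rather than LISP, with a hypothetical function of my own rather than anything taken from McCarthy's paper):

def flatten(expr):
    # Recursively collect the atoms of a nested symbolic expression.
    if not isinstance(expr, list):  # an atom
        return [expr]
    atoms = []
    for sub in expr:  # a list of sub-expressions
        atoms.extend(flatten(sub))
    return atoms

# A nested expression, loosely analogous to the S-expressions LISP operates on.
expression = ["plus", ["times", "x", 2], ["minus", "y", ["times", "x", "y"]]]
print(flatten(expression))  # ['plus', 'times', 'x', 2, 'minus', 'y', 'times', 'x', 'y']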
Another of McCarthy’s papers, ‘Programs with
Common Sense’, is believed by many to be the beginning of Artificial Intelligence
as the theoretical concept we know today. Once again, similarly to ‘Recursive Functions of Symbolic Expressions and
Their Computation by Machine, Part I’, this second paper
revolves around a hypothetical computer system called the ‘Advice Taker’, whose
performance could improve over time as a result of receiving advice, rather
than being reprogrammed. The main contribution of the paper was perhaps the (at
the time) revolutionary logical methodology it suggests, such as:

“We
base ourselves on the idea that in order for a program to be capable of learning
something, it must be capable of being told it” (McCarthy, 1959).

This statement essentially refocused the goals
of the exploration into AI, from trying to design a system which can process
thought to designing a system which can first ‘receive’ thought and then process
it accordingly. It thereby shifted the focus from creating a program which can
create, to creating a program which can first learn and then create. This was a
revolutionary shift in the thinking behind the approach to AI in the years that
followed.

During the following decade, there was a real sense of enthusiasm and excitement
among not only scholars but also investors. Technology firms such as IBM, along
with government agencies such as DARPA, began investing in research projects in
the hope of being the first to capitalize on the coming technological breakthrough.

 

Section 3.  The AI Winter(s) (1974-1993)

It wasn’t until 1974 that the outlook on
the feasibility of Artificial Intelligence and Machine Learning changed. AI was
no longer an exciting, lucrative taste of the future; instead it had become an
incredibly expensive frustration to both scholars and investors, yielding few
results.

Throughout recent history, the field of AI has experienced multiple hype cycles:
at first there is a sense of excitement and motivation among scholars at the
prospect of AI, which investors quickly notice and then fund. After a while,
though, the research fails to produce sufficient results due to technological
restrictions, the investors pull their funding, and the research grinds to a
halt. This cycle happened on a large scale on two occasions,
1974-1980 and 1987-1993. The main reasons behind these ‘AI Winters’ are as follows.

 

Section 3.1 The Lighthill Report

In 1973, Professor Sir James Lighthill was
asked by the UK Parliament to evaluate the state of AI research in
the United Kingdom. His report, now called the Lighthill report, criticized the
utter failure of AI to achieve its “grandiose objectives.” He
concluded that nothing being done in AI could not be done in other sciences. He
specifically mentioned the problem of “combinatorial explosion” or
“intractability”, which implied that many of AI’s most successful
algorithms would grind to a halt on real-world problems and were only suitable
for solving “toy” versions (Lighthill, J. 1973). The report led to
the complete dismantling of AI research in England, and created a wave whose
ripples were felt across all of Europe.
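
To give a sense of what ‘combinatorial explosion’ means in practice, the short sketch below (my own illustration in Python, not taken from the report) counts how many positions an exhaustive search must examine as it looks further ahead; with a branching factor in the region of a typical chess position’s, the count quickly becomes astronomical.

def positions_examined(branching_factor, depth):
    # Total nodes in a full search tree of the given branching factor and depth.
    return sum(branching_factor ** d for d in range(depth + 1))

# Roughly 35 legal moves per chess position is a commonly cited average.
for depth in (2, 4, 8, 16):
    print(depth, positions_examined(35, depth))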

 

Section 3.2 DARPA & Speech Understanding
Research

DARPA was deeply disappointed
with researchers working on the Speech Understanding Research program at
Carnegie Mellon University. DARPA had hoped for, and felt it had been promised,
a system that could respond to voice commands from a pilot. The SUR team had
developed a system which could recognize spoken English, but only if the words were spoken in a
particular order. DARPA felt it had been duped and, in 1974, they
cancelled a three million dollar a year grant.

Many years later, successful
commercial speech recognition systems would use the technology developed by the
Carnegie Mellon team (such as hidden Markov models), and the market for
speech recognition systems would reach $4 billion by 2001 (Juang, B.H. and Rabiner, L. 2004).
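
As a toy illustration of the hidden Markov model idea behind those later systems (my own minimal example with invented states and probabilities, not the Carnegie Mellon system itself), the forward algorithm below scores how likely a sequence of acoustic observations is under a small two-state model; a recognizer would compare such scores across candidate words.

def forward(observations, states, start_p, trans_p, emit_p):
    # Probability of the observation sequence under the hidden Markov model.
    alpha = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: sum(alpha[prev] * trans_p[prev][s] for prev in states) * emit_p[s][obs]
                 for s in states}
    return sum(alpha.values())

states = ("vowel", "consonant")  # hypothetical hidden states
start_p = {"vowel": 0.5, "consonant": 0.5}
trans_p = {"vowel": {"vowel": 0.3, "consonant": 0.7},
           "consonant": {"vowel": 0.6, "consonant": 0.4}}
emit_p = {"vowel": {"low": 0.7, "high": 0.3},  # hypothetical acoustic features
          "consonant": {"low": 0.2, "high": 0.8}}

print(forward(("low", "high", "low"), states, start_p, trans_p, emit_p))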

 

Section 3.3 The Fifth Generation Computer
Project

In 1982, Japan’s Ministry of International
Trade and Industry started the Fifth Generation Computer Project. The project
was initiated on the basis of the Japanese national policy of developing as a
technologically advanced nation while making an international contribution. Its
objectives were to write programs and build machines that could carry on
conversations, translate languages, interpret pictures, and reason like human
beings. By 1991, the impressive list of goals penned in 1981 had not been met.
Approximately $400 million was invested in the project over 11 years before it
came to an end in 1992 (Moto-oka, T. 1982).

 

It was these events (to name but a few)
which led to the AI Winters of 1974-1993. The climate at the time had grown sick
of failure and false promises. Side effects of the AI winters continued for many
years; the effects could still be felt until roughly 2010. Many researchers
in AI in the mid-2000s deliberately called their work by other names, such
as informatics, machine learning, analytics, knowledge-based systems, business
rules management, cognitive systems, intelligent systems, intelligent
agents or computational intelligence, to indicate that their work
emphasized particular tools or was directed at a particular sub-problem.
Although this may be partly because they considered their field to be
fundamentally different from AI, it is more than likely that the new names helped
to procure funding by avoiding the stigma of false promises attached to the
name “artificial intelligence” (Markoff, J. 2005).

 

Section 4.  Progress (1995-Present)

As the ice from the AI Winter began to thaw, and excited,
enthusiastic researchers began to receive funding once again, real progress started
being made.

On 11 May 1997, Deep Blue became the first
computer chess-playing system to beat a reigning world chess champion, Garry
Kasparov. The supercomputer was a specialized system produced by IBM, capable of
processing twice as many positions per second as it had during the first match
(which Deep Blue had lost), reportedly 200,000,000 positions per second. The
event was broadcast live over the internet and received over 74 million hits
(McCorduck, P. 2004).

Only 14 years later, in February 2011, in a ‘Jeopardy!’ quiz show exhibition
match, IBM’s question-answering system, Watson, defeated the two greatest
‘Jeopardy!’ champions, Brad Rutter and Ken Jennings, by a significant margin
(Markoff, J. 2011). The incredible thing about this is that, in only 14 years,
we went from having a program which could beat a world champion at chess to
having a program which could beat two world champions on a general-knowledge
quiz show.

These successes were not due to some revolutionary new paradigm; mostly they were
due to the tedious application of engineering skill and to the tremendous power
of today’s computers (Kurzweil, R. 2005).

 

Conclusion

Without going into too much detail: nowadays, in 2018,
AI has expanded and developed to the point that it is in use in nearly every
industry, and we use it in a multitude of ways to serve a multitude of purposes.
Netflix uses AI when choosing a list of films you may
like, Microsoft Word uses AI to correct your spelling, and Google uses AI to
translate foreign languages. While none of these examples is true, human-level
Artificial Intelligence, they are all forms of machine learning and developments
of McCarthy and Minsky’s original visions of AI, and they should therefore be
looked at with respect and admiration, as the road to what we have now has been
rocky and frustratingly cyclic. It is thanks to history that we have the present
and can look forward to the future.