Do you wonder about the intelligence demonstrated by robots and machines compared to natural human intelligence? Well then, let’s take a stroll through artificial intelligence's early history.
The intellectual roots of AI (artificial intelligence) reach back to ancient times, with myths, legends, and tales of master craftsmen endowing artificial beings with intelligence or consciousness. Classical philosophers tried to describe human thinking as the mechanical manipulation of symbols, sowing the seeds of modern AI.
Over time, humans have become increasingly reliant on automated technology. It can be found in almost every aspect of our lives, from automatic doors in retail to factory line robots to office business process automation.
Aside from automation, artificial intelligence is now a daily occurrence. The dreaded robot takeover appears to be getting closer!
And how did we arrive here?
Let's look at automation's history and the rise of robots and artificial intelligence.
Artificial Intelligence's Early History: Machines and Concepts
Indeed, stories about Atlantis and its supposedly advanced technology have enthralled even the most jaded non-believers, which shows that the concept of AI isn't as new as some might think. Despite what many people assume, the true history of automation begins far closer to the era of Aristotle and Socrates than to our own.
Those early thinkers planted the seeds of automation and AI. The first device to earn the name "computer" was the Analytical Engine, which Charles Babbage designed in 1837. Meanwhile, Ada Lovelace, Babbage's friend and collaborator, wrote the first-ever computer program, intended to run on that machine.
However, Babbage passed away before his prototype was completed. During the Industrial Revolution, between roughly 1790 and 1840, humans feared the impact of automation on their jobs, just as we do now. (But happily, it turned out fine in the end!) As for artificial intelligence, well, it needs a computer.
Creation of Robots
The development of the concept of robots, automation, and AI was vastly ignored by most of the world. Still, certain groups from diverse backgrounds, professions, and cultures passionately studied the field.
AI from 1900 to 1950
In 1921, Karel Čapek, a Czech writer, coined the term "robot" in his play Rossum's Universal Robots. The play was about factory-made artificial people, and it offered the first vision of robots taking over the human world. Much of the popularity of artificial intelligence's early history, then, is thanks to Karel Čapek.
The reel world didn't take long to follow suit. In 1927, the sci-fi movie Metropolis was released, giving us the first on-screen portrayal of a robot and inspiring robots, cyborgs, aliens, and many other non-human characters in future films.
1939 was a fantastic year for robot fans. Elektro was put on display at the World's Fair, a landmark moment in the history of robotics and automation. On human commands, Elektro walked, blew up balloons, and even smoked cigarettes!
In 1948, William Grey Walter invented the first autonomous robots in the history of automation. These took the form of two "tortoises," Elmer and Elsie. Guided by the light and using a bump sensor, the robots were able to navigate around obstacles without the assistance of humans.
Remarkably, the interplay of these two sensory inputs (light and touch) that enabled the robots to function also helped us better understand our own nervous system.
Academic Recognition: The 50’s
Research on artificial intelligence escalated during the 1950s, leading to a series of significant advances in the field.
And of course, one of the defining moments in the history of AI was Alan Turing's 1950 paper "Computing Machinery and Intelligence." In it, Turing devised a way to probe a machine's intelligence by testing its ability to imitate human conversation. Known as the "Turing Test," it remains an integral part of the field of AI to this day.
1952: Arthur Samuel created the first-ever game-playing program for checkers that could compete against human players.
1955: Logic Theorist was written by Allen Newell, Herbert Simon, and Cliff Shaw. The program ultimately proved 38 of the first 52 theorems in Whitehead and Russell's Principia Mathematica.
In 1955, John McCarthy, an American computer scientist, and his colleagues proposed holding a workshop on "artificial intelligence."
1956: At the resulting Dartmouth workshop, the term "artificial intelligence" was officially adopted.
AI Winter: The 60’s, 70’s, and 80’s
Among the notable programs of the 1960s was ELIZA, a 1966 chatbot that parodied a psychotherapist. Then there was the mobile robot nicknamed "Shakey."
1966: This was also the year the project to build the first mobile robot, "Shakey," got underway. The project ran from 1966 to 1972 and connected multiple AI fields, including navigation and computer vision. Shakey is now on display at the Computer History Museum.
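ELIZA's apparent intelligence came not from understanding but from simple pattern matching and substitution. Here is a minimal sketch of that idea in Python, using a few toy rules of our own invention (the real 1966 script was far more elaborate):

```python
import re

# Toy ELIZA-style rules: a regex pattern and a response template.
# These rules are illustrative only, not from the original program.
RULES = [
    (r"\bI need (.*)", "Why do you need {0}?"),
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\bmy (.*)", "Tell me more about your {0}."),
]

def respond(text):
    """Return the first matching rule's response, reflecting the user's words."""
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # default prompt when nothing matches

print(respond("I need a vacation"))  # Why do you need a vacation?
```

Echoing fragments of the user's own sentence back as a question was enough to convince many people they were talking to a real therapist.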
1968: Terry Winograd, later a computer science professor at Stanford University, created SHRDLU, an early natural language computer program. Following this burst of innovation, the history of automation entered a period of relative calm.
The 1970s and '80s became known as the "AI winter." Still, Waseda University in Japan began work on the first full-scale anthropomorphic robot, WABOT-1, in 1970. It featured movable limbs and could see and converse.
Indeed, the '70s saw advancements in automatons and robots. At the same time, the decade was marked by challenges such as governments cutting support for AI research.
People grew more pessimistic about AI's chances of success as interest in its development waned. AI research wasn't put on hold entirely, but it was certainly pushed to the sidelines.
However, there were some developments too. SCARA, the assembly line aid, was introduced in 1979. In 1984, there was RB5X, a robot that learned from experience. The term "virtual reality" was coined in the mid-1980s, along with the first sales of VR gloves and glasses. Tom Caudell's term "augmented reality" would soon follow.
And, of course, 1977 was the year the iconic Star Wars saga began. George Lucas's film stars C-3PO, a humanoid protocol droid fluent in over six million forms of communication, and R2-D2, a small droid incapable of human speech that communicates through electronic beeps.
In the '80s, Waseda University followed up with WABOT-2, a humanoid that could interact with people, read musical scores, and play the organ. Around the end of the decade, in 1989, Sir Tim Berners-Lee used hypertext and hyperlinks to develop the World Wide Web. (We're forever glad and grateful he did!)
AI winter these decades might have been, but they also gave us some of the best in technology.
AI and Automation Software: The 90’s
The 1990s saw significant advances in artificial intelligence. AI came roaring back with a vengeance and an impact that cannot be denied. Computerized automation began to emerge, and AI development transitioned from physical bots to digital programs.
In 1997, IBM's Deep Blue defeated chess grandmaster Garry Kasparov. The same year saw the deployment of Sojourner, NASA's first autonomous Mars rover.
In the early 2000s, there was a distinct gap in the history of automation, a second winter in the advancement of technology. Following Honda's creation of ASIMO (the "world's most advanced humanoid robot") in 2000, automation development slowed for a while.
Less than flattering reviews of business process management (an invaluable BPA tool) caused search interest in the term to drop by more than half between 2004 and 2011. Business interest in automated management solutions dropped unexpectedly.
In 2011, Apple's Siri ended the radio silence in automation breakthroughs, ushering in a new era of automation and AI-powered assistants.
It embodied the transition from physical robots to computerized automation and AI software that had begun in the 1980s and 1990s.
Automation software is now considered a necessity rather than a luxury. Business process automation (BPA) and its sibling, robotic process automation (RPA), are becoming more sophisticated and efficient. Their widespread use optimizes employees' time and work, resulting in massive resource savings.
Yes! We encounter the wonders of AI daily, whether on social media, in emails, in gaming apps, or elsewhere. Artificial intelligence assistants can be found in our phones, cars, homes, and offices. And all of this during technology's youth!
It isn't perfect, to be sure. Siri doesn't always have the correct answer, and Alexa occasionally misunderstands us. Video game NPCs do crazy (and funny) things, and we saw the debacle that was Microsoft's Tay as recently as 2016.
Yet, despite these shortcomings, AI and automation are more versatile today than ever before. Technology is developing and improving rapidly. AI has permeated so much of daily life that we may begin to take it for granted.
The Future of AI and Automation
Despite a few stumbles along the way, the history of automation has seen a lot of success. The field is still growing and evolving today, providing more innovative solutions and interactive AI, and helping unravel the mysteries of the universe.
It's impossible to say whether Karel Čapek and the science fiction writers who followed him were right about a future robot rebellion. What is evident is that the future appears to be automated. And as exciting as present and future technology is, we must not forget artificial intelligence's early history and the effort it took to get us to where we are today.
The Final Note
The road to truly brilliant AI will remain challenging. While we must raise awareness of AI's limitations and work together to ensure AI is used only for the common good, there is no doubt that it has extraordinary potential to benefit the business world, society, and humanity.