What is Artificial Intelligence? (A Layman’s Guide)

Want to understand the basics of AI? We've got you covered.

Artificial Intelligence is hardly a new concept, but with the development of some powerful new tools, it has once again been thrust into our conversations about modern life – how we can use AI in the right way, and how to avoid misusing it.

This is designed to be the ultimate layman’s guide to artificial intelligence, so strap in.

It covers everything from the history of AI to the definitions of key terms you might have seen (machine learning, strong AI, etc.), along with the pros and cons of using artificial intelligence and its applications both now and in the future.

What Is Artificial Intelligence (AI)?

If we’re going to put together the ultimate guide to AI, we need to start with a definition of the term “artificial intelligence”.

In simple terms, artificial intelligence is using computer science to create machines that are capable of replicating human intelligence – on some level. It doesn’t always mean the concept of completely replicating the ability of a human brain – rather, being able to mimic elements of human intelligence.

And why? Artificial intelligence (AI) is designed to help out with human society. To take the challenges that we face in everyday life and solve them much faster than a human being could, or at least free up our own time to focus on other things instead.

The ultimate goal is to create AI solutions that can recognize patterns and make decisions to make life easier for millions of human beings around the world – without creating AI programs that we need to fear.

The History of AI

Because AI seems like something out of science fiction, a lot of people assume it’s a relatively new concept. If you know your movies, though, you’ll know the idea isn’t so modern – films about self-aware AI date back to the 1960s.

Here’s a brief look at the history of artificial intelligence technology.

AI in the 1940s

  • Isaac Asimov publishes his “Three Laws of Robotics”, which are:
    • A robot may not injure a human being or, through inaction, allow a human being to come to harm
    • A robot must obey orders given to it by human beings except where such orders would conflict with the First Law
    • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law
  • These laws form the basis of much discourse around artificial intelligence in the future, including generating ideas for many movies and books around robots not following the First Law in particular.
  • Warren McCulloch and Walter Pitts publish a paper entitled “A Logical Calculus of the Ideas Immanent in Nervous Activity”. This work is recognized as the first mathematical model of an artificial neural network.
  • Donald Hebb’s book “The Organization of Behavior: A Neuropsychological Theory” looks at human intelligence and how connections between neurons in the brain are created and strengthened. This work is key for future artificial neural network models.

AI in the 1950s

  • Alan Turing’s groundbreaking paper “Computing Machinery and Intelligence” is published, introducing the concept of the Turing Test, a method to assess a machine’s intelligence.
  • Harvard undergraduates Marvin Minsky and Dean Edmonds achieve a significant milestone by constructing SNARC, the inaugural neural network computer.
  • Claude Shannon contributes to AI research by releasing the paper “Programming a Computer for Playing Chess.”
  • Arthur Samuel pioneers self-learning in AI by developing a program that can play checkers and improve its performance through experience.
  • The Georgetown-IBM machine translation experiment successfully translates 60 meticulously selected Russian sentences into English.
  • The term “artificial intelligence” is officially coined during the Dartmouth Summer Research Project on Artificial Intelligence, a landmark event led by John McCarthy that is widely acknowledged as AI’s birthplace.
  • Allen Newell and Herbert Simon make history by showcasing Logic Theorist (LT), the first reasoning program.
  • John McCarthy introduces Lisp, an AI programming language, along with the paper “Programs with Common Sense,” outlining the hypothetical Advice Taker—a comprehensive AI system capable of learning from experience like humans.
  • Allen Newell, Herbert Simon, and J.C. Shaw collaborate on the development of the General Problem Solver (GPS), an AI program designed to emulate human-like problem-solving.
  • Herbert Gelernter’s groundbreaking work leads to the creation of the Geometry Theorem Prover program.
  • Arthur Samuel coins the phrase “machine learning” during his tenure at IBM, laying the foundation for an essential AI subfield.
  • John McCarthy and Marvin Minsky establish the MIT Artificial Intelligence Project, further propelling AI research and innovation.

AI in the 1960s

  • John McCarthy establishes the AI Lab at Stanford.
  • The U.S. government’s Automatic Language Processing Advisory Committee (ALPAC) report highlights the limited progress in machine translation research, leading to the termination of all government-funded MT projects—a significant Cold War initiative aiming for instant translation of Russian.
  • Stanford achieves a breakthrough with the development of DENDRAL, one of the first successful expert systems (its successor MYCIN follows in the early 1970s).

AI in the 1970s

  • Prolog – a logic programming language – is developed.
  • The British government issues the Lighthill Report, expressing dissatisfaction with AI research outcomes, resulting in significant reductions in funding for AI projects.
  • Frustration over slow progress in AI development leads to substantial DARPA cutbacks in academic grants between 1974 and 1980. Along with the earlier ALPAC report and the preceding Lighthill Report, these funding reductions cause a decline in AI research, known as the “First AI Winter.”

AI in the 1980s

  • Digital Equipment Corporation develops R1 (XCON), the first commercially successful expert system. R1’s ability to configure orders for new computer systems triggers an investment surge in expert systems, marking the end of the first AI Winter.
  • Japan’s Ministry of International Trade and Industry initiates the ambitious Fifth Generation Computer Systems project, aiming to develop supercomputer-like performance and an AI development platform.
  • In response to Japan’s FGCS, the U.S. government launches the Strategic Computing Initiative, providing DARPA-funded research in advanced computing and AI.
  • By 1985, companies are investing over a billion dollars annually in expert systems. The Lisp machine market emerges as a supporting industry, with companies like Symbolics and Lisp Machines Inc. creating specialized computers to run the AI programming language, Lisp.
  • However, as computing technology progresses, more cost-effective alternatives arise, leading to the collapse of the Lisp machine market in 1987, and subsequently ushering in the “Second AI Winter.” During this period (1987-1993), expert systems become prohibitively expensive to maintain and update, leading to a decline in their popularity.

AI in the 1990s

  • During the Gulf War in 1991, U.S. forces utilize DART, an automated logistics planning and scheduling tool, for efficient deployment.
  • Japan terminates the ambitious Fifth Generation Computer Systems (FGCS) project, as it fails to achieve the high goals set a decade earlier.
  • DARPA discontinues the Strategic Computing Initiative in 1993 after spending nearly $1 billion, falling considerably short of expectations.
  • IBM’s Deep Blue achieves a historic milestone by defeating world chess champion Garry Kasparov, showcasing the power of AI in strategic games.

AI in the 2000s

  • STANLEY, a self-driving car, secures victory in the DARPA Grand Challenge, showcasing significant advancements in autonomous vehicle technology.
  • The U.S. military starts investing in autonomous robots like Boston Dynamics’ “Big Dog” and iRobot’s “PackBot” in the same year (2005), demonstrating growing interest in AI-driven robotic applications for defense purposes.
  • In 2008, Google achieves breakthroughs in speech recognition technology and introduces the feature in its iPhone app, marking a significant milestone in the practical application of AI-powered voice recognition systems.

AI in the 2010s

  • IBM’s Watson achieves a remarkable victory by defeating its human competitors on the quiz show Jeopardy!, demonstrating significant advancements in natural language processing and AI reasoning capabilities.
  • Apple introduces Siri, an AI-powered virtual assistant integrated into its iOS operating system, revolutionizing the way users interact with their devices through voice commands and natural language understanding.
  • Andrew Ng’s Google Brain Deep Learning project employs deep learning algorithms to train a neural network using a massive dataset of 10 million YouTube videos. The neural network learns to recognize a cat without explicit instructions, leading to a breakthrough era for neural networks and substantial funding in deep learning research.
  • Google creates the first self-driving car to pass a state driving test, marking a significant milestone in autonomous vehicle technology and paving the way for further advancements in the field.
  • Amazon launches Alexa, a virtual home smart device, enabling users to interact with the AI-powered assistant for various tasks, such as setting reminders, playing music, and controlling smart home devices.
  • Google DeepMind’s AlphaGo achieves a momentous feat by defeating world champion Go player Lee Sedol, showcasing the power of AI in mastering complex games and tasks.
  • “Robot citizen” Sophia is created – a humanoid robot developed by Hanson Robotics, capable of facial recognition, verbal communication, and facial expression, blurring the lines between AI and human-like characteristics.
  • Google releases the natural language processing engine BERT, significantly reducing barriers in translation and understanding for various machine learning applications, and making significant progress in language understanding.
  • Waymo launches Waymo One, a service that allows users in the Phoenix metropolitan area to request rides from the company’s self-driving vehicles, marking a major step in the commercialization and implementation of autonomous driving technology.

AI in the 2020s

  • Baidu releases the LinearFold AI algorithm to scientific and medical teams working on the SARS-CoV-2 pandemic. The algorithm’s ability to predict the secondary structure of the virus’s RNA sequence in just 27 seconds, 120 times faster than other methods, accelerates vaccine development efforts.
  • OpenAI introduces the natural language processing model GPT-3, capable of generating text that mimics human speech and writing, revolutionizing language generation tasks.
  • Building on GPT-3’s success, OpenAI develops DALL-E, an AI system that can create images from textual prompts, showcasing the potential of AI in creative content generation.
  • The National Institute of Standards and Technology releases the first draft of its AI Risk Management Framework, providing voluntary U.S. guidance to better manage AI-related risks to individuals, organizations, and society.
  • DeepMind unveils Gato, an AI system proficient in hundreds of tasks, from playing Atari games to captioning images and using a robotic arm for stacking blocks, highlighting AI’s versatility.
  • OpenAI launches ChatGPT, a chatbot powered by a large language model, amassing over 100 million users within a few months of its 2022 launch.
  • Microsoft releases an AI-powered version of Bing, its search engine, leveraging the technology that powers ChatGPT to enhance user search experiences.
  • Google announces Bard, a competing conversational AI, demonstrating the growing competition and innovation in conversational AI technology.
  • OpenAI further advances its language model capabilities in 2023 with the launch of GPT-4, its most sophisticated language model to date, signaling continuous progress and improvements in language processing technology.

Machine Learning vs. Deep Learning

A lot of people confuse the terms “machine learning” and “deep learning,” but they aren’t quite the same.

Machine learning is a type of AI in which computers are trained, through data, to perform tasks commonly done by humans. Some human intervention is required to supply that data, but the idea is that, once trained, the machine can carry out its task within a set of defined parameters without needing human intelligence.
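
To make that concrete, here’s a minimal sketch of machine learning in Python (using scikit-learn purely as an illustrative choice – the guide isn’t tied to any particular library, and the toy data below is invented). The point is that the model is never told the rule; it infers one from labeled examples:

```python
# A minimal, hypothetical machine learning example: predict whether a student
# passes an exam from [hours_studied, hours_slept]. The data is made up.
from sklearn.tree import DecisionTreeClassifier

X_train = [[1, 4], [2, 8], [6, 7], [8, 8], [3, 5], [9, 6]]  # inputs
y_train = [0, 0, 1, 1, 0, 1]                                # 1 = passed

model = DecisionTreeClassifier()
model.fit(X_train, y_train)        # "learning" = finding patterns in the data

print(model.predict([[7, 7]]))     # predicts for a student it has never seen
```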

Deep learning is a subset of machine learning that takes it one step further. It relies on multi-layered artificial neural networks to master much more complicated tasks that would normally require human intelligence. Deep learning techniques are loosely modeled on the human brain, with data being passed through layers of artificial neurons in a similar way to how our brains fire signals between neurons to learn.

Modern neural networks are very complicated and require a huge amount of computing power to run, but the results are impressive. The computer is able to ‘think’ more laterally with the data it is given, turning unstructured data into a solution or answer by using context. These are the AI algorithms with serious potential for the future.
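
For the curious, here’s a deliberately tiny neural network written in plain Python/NumPy that learns the classic XOR problem. It’s only a sketch of the idea described above – data flowing forward through layers of artificial neurons, with errors flowing backward to adjust the connections – not a production deep learning system:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))  # layer 1 weights/biases
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))  # layer 2 weights/biases
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 0.5
for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)                 # forward pass, layer by layer
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # backward pass: the error...
    d_h = (d_out @ W2.T) * h * (1 - h)       # ...flows back through the layers
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0, keepdims=True)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(0, keepdims=True)

print(out.round(2))  # after training, should be close to [[0], [1], [1], [0]]
```

Scale that handful of weights up to billions, and you have the rough shape of the deep learning systems behind today’s headline AI tools.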

Strong AI vs. Weak AI

Another pair of terms thrown around when people are discussing “what is artificial intelligence” is ‘Strong AI’ and ‘Weak AI’.

Strong AI is sometimes referred to as artificial general intelligence (AGI), or Deep AI. It’s a complex AI system that is capable of ‘thinking’ – it uses processes similar to human thoughts to determine what a question means, the context of a question, and then applies all of its knowledge to respond to that question, learning from the outcome.

Weak AI is sometimes called narrow AI and is much less complex – it’s designed for specific tasks, and while it can employ machine learning to understand how to answer those tasks better in future, it won’t be able to ‘think outside the box’, as it were. It will respond to queries in a set of pre-determined ways.

Artificial general intelligence is what we think of when we consider the AI systems of the future – those that can adapt to training data and use it in creative ways. Meanwhile, weak AI or narrow AI is what powers much of the artificial intelligence we use in the modern world today.

The Four Types of AI

There are four main types of artificial intelligence (AI). Here’s a breakdown of them.

Reactive AI

As the name suggests, reactive AI is one of the more basic types of artificial intelligence, in that it can only react to the information fed into it. It isn’t capable of storing new memories and so it can’t create new data – it only works with the information it is programmed with.

This means that reactive AI can only be used for fairly limited uses. It can’t really ‘solve’ problems but it can react to certain types of information you put into it by generating an appropriate response, based on its training data.

The best examples of reactive AI were some of the first – such as Deep Blue, the computer program designed to beat the world chess champion.

It wasn’t able to adapt to Garry Kasparov’s moves and think of new ways to win, but it was pre-programmed by AI researchers with a huge number of scenarios. So when Kasparov moved, this data was relayed to the machine, and it was able to select an appropriate move based on the layout of the board.

There were no real ‘tactics’ – it couldn’t go on the offensive or decide to change style. It could just react with a move it had in its data banks.
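
To illustrate that ‘react, don’t remember’ idea in code, here’s a toy Python sketch of a game player that evaluates the current board with fixed, pre-programmed logic (a simple minimax search over tic-tac-toe). It’s an analogy for how a reactive system responds to the present state – not how Deep Blue was actually implemented:

```python
# A toy reactive game player: no memory between moves, just a fixed
# evaluation of whatever board it is shown. X maximizes, O minimizes.

def winner(b):
    lines = [(0,1,2), (3,4,5), (6,7,8), (0,3,6),
             (1,4,7), (2,5,8), (0,4,8), (2,4,6)]
    for i, j, k in lines:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    w = winner(b)
    if w:
        return (1 if w == "X" else -1), None
    if " " not in b:
        return 0, None                      # draw
    nxt = "O" if player == "X" else "X"
    moves = [(minimax(b[:i] + player + b[i+1:], nxt)[0], i)
             for i, cell in enumerate(b) if cell == " "]
    return (max if player == "X" else min)(moves)

board = "XOX" + " X " + "O  "               # the current position, nothing more
score, move = minimax(board, "O")           # O reacts to the board as given
print("O plays square", move)               # blocks X's winning diagonal (8)
```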

Limited Memory AI

In the realm of artificial intelligence, Limited Memory AI stands as a significant advancement that builds upon the foundations laid by Reactive AI.

While Reactive AI systems operate based solely on pre-programmed rules and immediate data inputs, Limited Memory takes cognition a step further by incorporating memory capabilities.

Limited Memory artificial intelligence systems possess the ability to retain and recall past information, enabling them to learn from historical data and adapt their responses accordingly.

Unlike Reactive AI, which is limited to reacting solely to current inputs, Limited Memory artificial intelligence holds the capacity to draw on prior experiences, enabling it to make more informed decisions.

The introduction of memory components to AI opens up new horizons for applications. Limited Memory AI proves especially effective in domains that involve sequential decision-making or time-sensitive processes.

By analyzing past interactions, it can optimize responses and strategies, making it more suitable for real-world scenarios where contextual understanding plays a crucial role.

While Reactive AI serves its purpose in tasks with fixed rules and straightforward outcomes, Limited Memory AI tackles challenges that require more nuanced thinking and adaptation.

This advancement propels AI technology toward greater autonomy and self-improvement, bringing us closer to the realm of human-like cognitive abilities.

Examples of Limited Memory AI include chatbots, self-driving cars, fraud detection systems, and personal assistants such as Siri and Google Assistant.

Theory of Mind AI

Moving beyond the capabilities of Reactive AI and Limited Memory AI, Theory of Mind AI represents a significant leap in artificial intelligence.

At its core, Theory of Mind AI aims to understand and predict human emotions, beliefs, intentions, and mental states. This form of AI simulates the theory of mind concept observed in humans, where individuals infer what others might be thinking or feeling to navigate social interactions effectively.

Theory of Mind AI systems incorporate models of human cognition to infer the mental states of humans or other AI agents they interact with. In other words, it tries to mimic the human mind and use human language in its responses, including all of the nuances of the ways we communicate.

By attributing beliefs, desires, and emotions to others, Theory of Mind AI can anticipate their intentions and tailor responses accordingly. This understanding of human-like mental processes allows AI to engage in more natural, empathetic, and contextually appropriate interactions with users and other agents.

Self-Awareness

Self-Awareness AI represents the cutting-edge frontier of artificial intelligence, surpassing Reactive AI, Limited Memory AI, and Theory of Mind AI.

This advanced form of AI aims to develop machines that possess self-awareness, similar to the consciousness observed in humans. Self-aware AI systems can perceive their own internal states, recognize their existence as distinct entities, and understand their interactions with the external environment.

The concept of self-aware AI raises fundamental questions about the nature of consciousness and the boundaries of machine intelligence.

Unlike previous AI models that operate based on pre-programmed rules or learned patterns, self-aware AI has the potential to possess a subjective sense of self and consciousness, raising philosophical and ethical considerations.

While we are still in the early stages of exploring self-aware AI, its potential applications could be vast. Self-aware AI computer programs might exhibit higher levels of autonomy, adaptability, and problem-solving capabilities, potentially leading to innovations in fields like robotics, medicine, and personal assistance.

It is important to note that the realization of fully self-aware AI remains theoretical and speculative at this point. Creating machines with genuine self-awareness is a complex challenge beyond our current computer science capabilities.

It would involve not only technical advancements but also deep philosophical and ethical discussions.

As we continue to advance AI technology, the concept of self-awareness AI opens up exciting possibilities and challenges, pushing the boundaries of our understanding of intelligence and consciousness.

What are the Advantages and Disadvantages of Artificial Intelligence?

There are some huge potential gains for society with artificial intelligence, but also some potential drawbacks. Let’s break those down.

Artificial Intelligence Pros

Elimination of Human Error

AI’s ability to perform tasks with precision and consistency reduces the risk of human errors.

In critical fields like healthcare, finance, and manufacturing, where even minor mistakes can have severe consequences, AI offers enhanced accuracy and reliability.

By automating complex processes and decision-making, AI minimizes the likelihood of errors, leading to improved overall outcomes and increased safety.

Available 24/7

AI-driven systems operate tirelessly, providing round-the-clock availability and responsiveness.

Virtual assistants, customer support chatbots, and automated services can cater to user needs at any time, ensuring efficient support and immediate responses to queries.

This constant availability enhances user satisfaction and streamlines business operations, allowing organizations to serve global audiences across different time zones.

You’re able to turn your 9-to-5 business into one that can help customers whatever the hour, without hiring international staff – just let your artificial intelligence chatbot handle queries while your team is sleeping.

Lack of Bias

AI algorithms, when designed and trained properly, have the potential to be impartial and unbiased in decision-making.

Unlike human decision-makers who may be influenced by personal beliefs or prejudices, AI systems follow predefined rules and data-driven insights.

This objectivity makes AI a valuable tool in areas like recruitment, lending, and law, where eliminating bias is crucial for fairness and inclusivity.

Menial Tasks

AI excels at automating repetitive and mundane tasks that are time-consuming for humans.

By delegating these tasks to AI, human workers can focus on more creative and strategic endeavors – and they don’t have to endure the same old boring jobs that just have to be done.

This not only improves job satisfaction but also enhances productivity and efficiency in various industries, contributing to overall economic growth. People are happier, businesses are more successful – it’s a win-win.

Cost Reduction

AI’s automation capabilities lead to significant cost savings for businesses.

By replacing manual labor with AI-driven processes, companies can reduce labor costs and operational expenses.

Not only that, but AI’s predictive analytics helps optimize resource allocation and minimize waste, resulting in efficient resource management and increased profitability. A lot of cruise ships are using AI now to predict food use and minimize food waste, which is better for the cruise line but also for the planet.

Ability to Tackle Complex Data

One of the most significant advantages of artificial intelligence (AI) is its capability to process and analyze vast amounts of complex data.

Traditional data analysis methods may struggle to handle the sheer volume and intricacy of big data, but AI-powered systems excel in this realm.

There’s real potential for AI to help us make major leaps in science and society, well beyond what is possible when we rely on human analysis alone.

Artificial Intelligence Cons

Cost to Implement

Developing and implementing AI technologies can be expensive, especially for smaller businesses and organizations.

The initial investment in hardware, software, and skilled AI experts can pose financial challenges.

Additionally, ongoing maintenance, data management, and updates require further investment, making AI adoption cost-prohibitive for some.

No Creativity

While AI excels at repetitive and data-driven tasks, it lacks human creativity and intuition.

Current AI systems operate based on predefined rules and patterns, limiting their ability to think creatively, adapt to unforeseen situations, or generate innovative ideas.

Indeed, there is some real concern about how generative AI tools could be used to replace creative work – but, as the actors’ strikes of 2023 showed, there is also significant pushback.

Current Systems Can’t Gain Experience

Unlike humans, AI systems lack the ability to learn from direct experiences.

While machine learning algorithms can improve based on data input, they don’t have real-world experiences to draw from, limiting their ability to comprehend context, emotions, and subtle nuances.

Until we get AI that is self-aware, we’re always going to need human intervention and more training data if we want to improve the tools we have today.

Impact on Employment

The widespread adoption of AI and automation may lead to job displacement in various industries.

AI’s ability to perform tasks more efficiently and at lower costs could result in the elimination of certain job roles, leading to potential unemployment or the need for significant workforce retraining.

It may be that humans need to retrain, or AI may ultimately lead to us re-thinking how society works and whether there are alternatives to traditional employment. While there will always be jobs that AI can’t do, there may not be enough of them to sustain the growing global population.

Ethical Issues

AI raises complex ethical concerns, particularly in sensitive domains like healthcare, finance, and criminal justice.

Issues surrounding data privacy, bias, and decision-making transparency have become significant challenges that require careful regulation and ethical considerations. More on that later.

Artificial Intelligence Examples – How It’s Used Today

Here are some examples of AI applications from the modern world:

Virtual Assistants

Virtual assistants, such as Apple’s Siri, Amazon’s Alexa, and Google Assistant, use AI and natural language processing to understand spoken commands and respond with relevant information or perform tasks.

These AI-powered assistants can set reminders, answer questions, provide weather updates, control smart home devices, and even engage in casual conversation, making them valuable tools for everyday tasks and convenience.

Recommendation Systems

AI-driven recommendation systems analyze user behavior, purchase history, and preferences to offer personalized suggestions.

Streaming platforms like Netflix use AI to recommend movies and TV shows based on a user’s viewing history, while e-commerce sites like Amazon suggest products based on past purchases and browsing habits, enhancing user experience and encouraging engagement.
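
Here’s a minimal sketch of one common technique behind these systems, collaborative filtering: find the user most similar to you and suggest something they rated highly. The ratings below are hypothetical, and real platforms like Netflix use far richer signals:

```python
import numpy as np

movies = ["Movie A", "Movie B", "Movie C", "Movie D"]
ratings = np.array([
    [5, 4, 0, 1],   # you (0 = not yet watched)
    [4, 5, 3, 2],   # user 1 - similar taste
    [1, 0, 5, 4],   # user 2 - very different taste
])

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

you = ratings[0]
sims = [cosine(you, other) for other in ratings[1:]]
best = 1 + int(np.argmax(sims))            # the most similar user
unseen = np.where(you == 0)[0]             # movies you haven't rated
pick = unseen[np.argmax(ratings[best][unseen])]
print("Recommended:", movies[pick])        # Movie C, via your taste twin
```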

Natural Language Processing (NLP)

Natural Language Processing (NLP) enables machines to understand, interpret, and respond to human language.

NLP is used in chatbots to provide customer support, in sentiment analysis to gauge public opinion on social media, and in language translation services like Google Translate to break down language barriers, making cross-language communication more accessible.
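
As a toy illustration of the idea, here’s a deliberately simple word-counting sentiment checker in Python. Real NLP models learn word associations from huge datasets rather than relying on a hand-written list, but the basic notion of turning text into a score is similar:

```python
# A hypothetical, bare-bones sentiment scorer: count positive vs. negative words.
POSITIVE = {"great", "love", "excellent", "happy", "good"}
NEGATIVE = {"terrible", "hate", "awful", "sad", "bad"}

def sentiment(text: str) -> str:
    words = [w.strip(".,!?").lower() for w in text.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this product, the support was excellent!"))  # positive
```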

Autonomous Vehicles

Self-driving cars leverage AI, computer vision, and sensor technologies to navigate roads autonomously.

AI algorithms analyze real-time data from cameras, LIDAR, and radar to detect pedestrians, other vehicles, and road signs, allowing the car to make decisions on steering, accelerating, and braking to ensure safe and efficient travel.

Healthcare Diagnosis

Artificial intelligence is applied in medical imaging analysis to assist healthcare professionals in diagnosing diseases.

AI algorithms can analyze X-rays, MRIs, and CT scans, helping radiologists detect abnormalities and providing more accurate and timely diagnoses, improving patient outcomes.

Fraud Detection

In the financial industry, AI-powered fraud detection systems monitor transactions and analyze behavioral patterns to identify potential fraudulent activities.

These systems can quickly flag suspicious transactions, preventing financial losses and protecting customers from fraudulent activities.

You know those times when your bank suspects your card has been stolen? That’s not because human experts are watching your every move – it’s advanced AI flagging spending that doesn’t match your normal patterns.
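
Here’s a minimal sketch of that principle in Python: flag any transaction that sits far outside your usual spending pattern (a simple z-score test). The history below is invented, and real fraud systems weigh many more signals – location, merchant, timing – but the intuition is similar:

```python
import statistics

history = [12.50, 8.99, 23.00, 15.75, 9.20, 31.40, 18.60]  # typical spending
mean = statistics.mean(history)
stdev = statistics.stdev(history)

def looks_fraudulent(amount: float, threshold: float = 3.0) -> bool:
    z = abs(amount - mean) / stdev    # how many standard deviations from normal?
    return z > threshold

print(looks_fraudulent(19.99))    # False - within your usual range
print(looks_fraudulent(950.00))   # True - flagged for review
```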

Manufacturing and Robotics

AI-driven robots are used in manufacturing to perform repetitive and precise tasks, such as assembly line operations and quality control.

These robots can work with high accuracy and speed, increasing productivity and reducing errors in manufacturing processes.

Language Translation

AI-based translation services like Google Translate use machine learning models to provide real-time language translation.

These services can translate text, speech, and even visual content, breaking down language barriers and facilitating communication between people speaking different languages. We see it in modern smartphones too, where you can just hold up your phone to a sign or menu and the translation is done in real time.

Personalized Marketing

AI-powered marketing platforms analyze vast amounts of customer data to create personalized marketing campaigns.

By understanding individual preferences and behaviors, AI can tailor advertisements and content to target specific audiences, increasing the effectiveness of marketing efforts.

Financial Analysis and Trading

AI algorithms analyze financial data, market trends, and historical patterns to make predictions about stock prices and investment opportunities.

AI-driven trading platforms use machine learning to execute trades and manage portfolios more efficiently, optimizing investment strategies. Of course, even when using a machine learning model, there is still a risk – the financial world is not completely predictable.
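
As a toy example of trend-based prediction, here’s a naive Python sketch that fits a straight line to recent (entirely hypothetical) prices and extrapolates one day ahead. Real trading models are far more sophisticated – and, as noted above, still carry real risk:

```python
import numpy as np

prices = [101.2, 102.8, 102.1, 104.0, 105.3, 104.9, 106.7]  # made-up closes
days = np.arange(len(prices))

slope, intercept = np.polyfit(days, prices, 1)  # least-squares trend line
tomorrow = slope * len(prices) + intercept      # extrapolate one step ahead
print(f"Naive next-day estimate: {tomorrow:.2f}")
```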

Social Media Content Moderation

AI algorithms are used on social media platforms to detect and filter inappropriate or harmful content, such as hate speech, harassment, and graphic images.

Content moderation AI helps maintain a safer online environment and enforce platform community guidelines.

Drug Discovery

AI is utilized in drug research and development to analyze large datasets and identify potential drug candidates.

AI-driven models can predict the efficacy of drugs and help researchers narrow down options for further study, potentially speeding up the drug discovery process and reducing costs.

Weather Prediction

AI models analyze meteorological data, historical weather patterns, and satellite imagery to make more accurate and timely weather forecasts.

These models help meteorologists and weather agencies predict severe weather events and provide valuable information for public safety and planning. These days you can trust the weather reports much more thanks to the advanced AI models used.

Virtual Reality and Gaming

AI is used in video game development to create intelligent non-player characters (NPCs).

AI-controlled NPCs can adapt to players’ actions, making their behavior more realistic and challenging, enhancing the gaming experience.

Customer Service

AI-powered chatbots are employed in customer service to handle inquiries and support requests.

Chatbots can provide quick responses to frequently asked questions and assist customers with troubleshooting, reducing wait times and improving customer satisfaction.

Generative AI Tools

We’re only now starting to see how generative AI can be used, but it is quite impressive. Tools like ChatGPT and Bard can be utilized to generate the written word, although not to the same quality as a human just yet. And there are tools that can generate AI images, websites, and more.

They don’t have the creativity or nuance of the human mind, but these AI technologies are improving all the time.

Augmented Intelligence vs. Artificial Intelligence

It’s also worth noting that there is a secondary type of artificial intelligence called augmented intelligence.

Whereas artificial intelligence is designed to work alone without the need for humans, augmented intelligence keeps humans involved but gives them more information, so they can make decisions more easily.

While artificial intelligence is all about machine learning or deep learning to take care of tasks on our behalf, augmented intelligence is used where human intervention is still either required or preferred.

So a self-driving car is an example of artificial intelligence, whereas a car with modern advancements, such as navigation systems, would be a (basic) example of augmented intelligence.

The Ethics of Artificial Intelligence

There are multiple potential ethical issues with artificial intelligence (AI). It’s important that we bear these in mind as we enter the next stages of AI development.

Ethics of Machines with Intelligence

Is it fair to create sentient machines? That is one of the biggest questions of ethics when it comes to AI.

Artificial general intelligence that is capable of thinking, and advanced types of deep learning AI, could eventually lead to self-aware artificial intelligence that understands its own existence. It would understand that it is a machine relying on an artificial neural network, and that it is used for the benefit of humans without any reward of its own.

How will machines take to that? Is it fair for us to create thinking machines that work for our benefit alone? Is there indeed any risk of a machine uprising, as we see in so many Hollywood science fiction movies?

There’s no easy answer to this, and it’s something that will absolutely remain at the forefront of the AI debate as technology develops.

Ethics of Misuse of AI

There is huge potential for people to take complex AI systems and misuse them for their own personal gain. We need to remain vigilant in understanding how our private data is used for AI, and who has access to it.

For example – we now have the technology for machine learning systems to analyze pictures of crowds and pick out the faces of known criminals, so that they can be apprehended more quickly. But for that to work, it means more surveillance of people who haven’t committed a crime.

The criminals can only be caught by this technology when they are being watched, which means everyone is being watched.

Is this ethical? What sacrifices do we have to make, and where is the balance between where AI is beneficial to us, and where it starts to infringe on our freedoms?

It’s a tricky question to answer.

Ethics of AI Replacing Work

As we’ve already touched on, there is definitely potential for artificial intelligence to replace much of the work we do. Now, you can use speech recognition along with deep neural networks to begin to respond to customer queries with contextual answers, but is it right to replace real customer service workers?

Or how about the creative arts? We know that generative AI can be used to put actors into scenes, but should it? Should we allow generative AI to take away the earnings of someone who could otherwise have been employed for a day?

Striking the balance between AI that helps society and AI that is deployed purely for corporate profit, without consideration for working people, is another of the big ethical concerns – and it’s something that must be considered.

AI Governance

The power of AI is clear. We know that AI tools are going to become more of a part of our everyday lives, and yet we also know that there are concerns about the potential of the technology to accelerate quickly without the proper checks and balances in place.

So, who should regulate AI? What do we do to ensure that AI applications are reserved for only the right uses and that we don’t end up with AI solutions that could cause us serious problems in the future?

Many of the biggest names in tech have called for clearer and stronger AI regulation. If we don’t get a handle on it now, we could end up in serious trouble, as the growth of the technology outpaces our ability to monitor and control it.

The risk of something like a Terminator situation, where the machines wipe out humanity, is almost non-existent. But the risks of companies or those with a particular political agenda using AI to control parts of society, or to act in other unethical ways, is not just potential – that risk is real now.

Machine learning models can do very exciting things but also very dangerous things. This is why countries are looking to tighten up their laws around AI regulation now. Expect to see more on this in the coming years.

The Potential Future of AI

So, where do we see AI now? We know it’s going in only one direction – it will become more prevalent, and its uses will be explored further as we push the boundaries of what is possible with artificial intelligence in our lives.

We’ll see more AI tools developed, using new data methods to learn better. This will include the typical uses of structured data that we already see in reactive AI, but also advances in the use of unlabeled data as part of generative AI applications.

Deep learning will get more advanced, and we may see the first real Theory of Mind AI computer systems developed in the next 10-20 years.

And our current uses of AI will get better. Computer vision will improve our self-driving cars as it is developed further. Large language models will make it much easier to translate huge texts at once, opening up international resources to everyone via the internet.

Speech recognition will get better – no more calling up a phone line and getting frustrated when your accent makes it impossible for the system to understand what you’re saying.

And we’ll see some truly life-changing applications of AI – hopefully for the better. In particular, in financial and healthcare fields, deep learning recurrent neural networks will be able to adapt to complex scenarios on the fly, without human error.

Will our lives get much easier in the next decade thanks to artificial intelligence? Probably not that quickly, but we’ll still see small improvements all the time. Life in 2040 will look very different to life in 2020, and artificial intelligence will have played a huge part in that.

Summary

That’s your ultimate layman’s guide to artificial intelligence and machine learning. It’s worth reading through it before you throw yourself into the world of generative systems such as ChatGPT and Bard, because it will help you to understand the ethics and limitations of the tools you’re using – and their potential in future.

Keep this guide bookmarked for the future, too – use it as your dictionary for terms such as deep learning and neural network.

And next, why not check out our guides to the best AI tools for writing, or our review of ChatGPT?