Welcome to the Nightingale HQ overview of AI, where we aim to introduce you to what you need to know.
Artificial intelligence is a technology that can ‘think’ in a human-like way. It is a diverse area of computer science that enlists tools such as machine learning and image processing to recreate and build upon the complex capabilities of the human brain. Like humans, AI systems can perceive their surroundings and learn from them, and use reasoning to solve problems.
You will have encountered several AI systems before reading this article today. These systems break down the thought processes that make up human intelligence and recreate them in bite-sized yet powerful chunks. As yet, there is no artificial equivalent of the whole broad spectrum of human cognitive ability, but there are plenty of AI systems that can perform tasks we are capable of more quickly than we could. Whenever you ask Alexa or Siri to look something up for you, you are engaging with an AI system that exhibits natural language processing (NLP), a capability far beyond the cognitive reach of most animals. When you do a Google search, you are tapping into a deep learning program that figures out which of the billions of pages of content is most likely to suit your needs.
Consumers make use of AI every day
Encompassing all areas of human intelligence in a single system is the realm of Artificial General Intelligence (AGI) research. An AGI is a hypothetical machine that can learn like a human – that is, understand an entirely novel problem and figure out how to solve it. This problem could be any – and, crucially, all – of: having a conversation with a human, making a cup of coffee, and obtaining a university degree by attending lectures and tutorials.
Sitting alongside Artificial General Intelligence is something much more familiar: Artificial Narrow Intelligence (ANI). This is the realm of Alexa, Google Maps, the Netflix recommendation engine and all the other AI systems we have easy access to today. Artificial Narrow Intelligence doesn’t have the capability to learn just any old task; it is restricted to a single task or family of tasks and draws its intelligence from a specific dataset.
Deciding which elements can be programmed as ANI systems, and how, has been the subject of research since the 1950s. Intelligence itself is a vast and complex concept. In everyday life we may refer to a person as ‘intelligent’ if they can recall interesting facts, or if they are very good at maths, but even speech and fine motor control are aspects of human intelligence. It may seem trivial for a human to learn how to walk and talk, but these learning processes are hard-wired into a brain that was shaped by millions of years of iterative tweaks.
Artificial intelligence is similarly diverse, and different AI systems address different areas of intelligence.
Some of the challenges tackled by AI include:
Most tasks require several intelligent capabilities to be employed. Planning a trip, a task that humans and AI perform on a regular basis, involves reasoning, planning, accessing memories and research.
Many AI systems utilise several of these capabilities to achieve a given task. For example, many email providers have a spam filter, which uses natural language processing to scan the contents of incoming emails and employs reasoning to combine this with metadata – such as the sender information – and past experience (whether you have marked emails like this as spam in the past) to decide whether or not the message should be filtered.
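To make the combination of signals concrete, here is a deliberately tiny sketch in Python. It is not how a real spam filter works – production filters learn their rules and weights from millions of messages – and the word list, domain name and weights below are invented for illustration.

```python
# A toy illustration of combining three signals: the text of the message,
# metadata about the sender, and past experience with similar messages.
# The word list, domain and weights are made up; real filters learn these.

SPAM_WORDS = {"winner", "prize", "urgent", "free"}

def spam_score(subject, sender, previously_marked_spam):
    """Combine three signals into a single score between 0 and 1."""
    words = subject.lower().split()
    text_signal = sum(w in SPAM_WORDS for w in words) / max(len(words), 1)
    metadata_signal = 1.0 if sender.endswith(".example-spam.com") else 0.0
    history_signal = 1.0 if previously_marked_spam else 0.0
    # Weighted combination; a real filter would learn these weights from data.
    return 0.5 * text_signal + 0.2 * metadata_signal + 0.3 * history_signal

def is_spam(subject, sender, previously_marked_spam, threshold=0.3):
    return spam_score(subject, sender, previously_marked_spam) >= threshold
```

Even this toy version shows why no single signal is enough: a message can look innocent on its own but become suspicious once the sender and your past behaviour are taken into account.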
AI utilises a suite of tools to perform these tasks intelligently. While early computer programs could only perform the specific instructions fed to them by a programmer, today's AI systems get better over time without additional code. This ability of computers to learn underlies the biggest breakthroughs in AI to date. Machine learning is commonplace in familiar applications, whether it is deployed to give you recommendations (see Amazon, YouTube and Facebook advertising) or to make talking toys. Machine learning comes in several forms.
In supervised machine learning, the program is told how to classify data. For example, you might feed into your algorithm lots of photographs of dogs, and lots of photographs of cats. You would then ‘supervise’ the algorithm as it makes predictions from new data, letting it know if its prediction (in this case, ‘cat’ or ‘dog’) is correct. This training process improves the algorithm so it becomes very likely to correctly identify the animal from a previously unseen photo.
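The cat-and-dog idea can be sketched in a few lines of Python. This is a minimal nearest-neighbour classifier, one of the simplest supervised-learning methods; the (weight, ear length) features and their values are invented purely for illustration.

```python
# A minimal supervised-learning sketch: a 1-nearest-neighbour classifier.
# Each training example is (features, label); the features here are a
# made-up pair of (weight_kg, ear_length_cm) measurements.

def nearest_neighbour(training_data, new_point):
    """Predict the label of new_point from the closest labelled example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    closest = min(training_data, key=lambda item: distance(item[0], new_point))
    return closest[1]

# Labelled training examples supplied by the 'supervisor'.
training = [
    ((4.0, 7.0), "cat"),
    ((5.0, 6.5), "cat"),
    ((20.0, 10.0), "dog"),
    ((30.0, 12.0), "dog"),
]

print(nearest_neighbour(training, (4.5, 6.8)))  # prints "cat"
```

A real image classifier works on pixels rather than two hand-picked numbers, but the principle is the same: labelled examples in, predictions on unseen data out.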
In unsupervised machine learning, the program is not told how to classify the data fed to it. The algorithm – equipped with a large suite of tools including image processing, natural language processing, logic and statistics – simply clusters and categorises the raw data. This is useful when you have an enormous quantity of unstructured data. Unsupervised machine learning algorithms can quickly identify patterns that a human would take decades to find.
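One classic unsupervised technique is k-means clustering. The sketch below, written in plain Python with an invented two-cluster dataset, shows the core idea: the algorithm is never told what the groups are, yet it finds them by repeatedly assigning points to the nearest centre and moving each centre to the middle of its group.

```python
# A minimal unsupervised-learning sketch: k-means clustering with k = 2
# on 2-D points. No labels are provided; the groups emerge from the data.

def k_means_2(points, iterations=10):
    """Split points into two clusters around iteratively refined centres."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    # Deterministic start: the first point, and the point farthest from it.
    centres = [points[0], max(points, key=lambda p: dist2(p, points[0]))]
    clusters = [[], []]
    for _ in range(iterations):
        clusters = [[], []]
        for p in points:                      # assign each point to its nearest centre
            nearer = 0 if dist2(p, centres[0]) <= dist2(p, centres[1]) else 1
            clusters[nearer].append(p)
        for i, group in enumerate(clusters):  # move each centre to its cluster's mean
            if group:
                centres[i] = (sum(p[0] for p in group) / len(group),
                              sum(p[1] for p in group) / len(group))
    return centres, clusters
```

Scaled up to millions of rows and many more dimensions, this same assign-and-update loop is what lets unsupervised algorithms surface structure in raw, unlabelled data.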
Reinforcement learning is an action-based learning algorithm, in which the system attempts various actions in order to figure out which course of action is ‘best’ (based on some form of reward). This form of machine learning was deployed by AlphaGo, Google DeepMind’s computer that was the first to beat a Go champion. What makes AlphaGo remarkable is that its reinforcement learning processes can be applied to learning lots of different tasks.
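The try-things-and-learn-from-reward loop can be sketched very simply. The toy agent below (nothing like AlphaGo's scale, and with an invented reward function) tries each of three actions, tracks the average reward of each, and then keeps choosing whichever action has paid off best so far.

```python
# A minimal reinforcement-learning sketch: an agent explores its actions,
# records the reward each one earns, then exploits the best one found.

def run_bandit(reward_fn, n_actions=3, rounds=30):
    """Try every action once, then repeatedly pick the best average so far."""
    totals = [0.0] * n_actions   # total reward earned by each action
    counts = [0] * n_actions     # how often each action was taken
    for t in range(rounds):
        if t < n_actions:        # explore: try each action once
            action = t
        else:                    # exploit: highest average reward so far
            action = max(range(n_actions), key=lambda a: totals[a] / counts[a])
        reward = reward_fn(action)
        totals[action] += reward
        counts[action] += 1
    best = max(range(n_actions), key=lambda a: totals[a] / counts[a])
    return best, counts
```

Real systems like AlphaGo add far more sophisticated exploration and use neural networks to estimate rewards for astronomically many possible moves, but the underlying idea – act, observe the reward, prefer what worked – is the same.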
AlphaGo learned to play Go by playing thousands of games and iteratively improving its ability to predict the best next move. Image credit 
Machine learning algorithms become far more powerful when they work together. When multiple algorithms are layered on top of each other and interact to interpret multiple high-level features of enormous datasets, the system is performing deep learning. Deep learning, supported by neural networks that are (as the name suggests) modelled on the neurons of the brain, can produce powerful solutions to complex problems by applying multiple problem-solving techniques at once.
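The "layers building on layers" idea can be shown with the smallest possible network. In this sketch the weights are hand-chosen rather than learned (a real deep network learns millions of them from data), but the structure is genuine: each layer transforms the previous layer's output, and the combination computes something no single unit could.

```python
# A minimal sketch of layered computation in a neural network. Two inputs
# feed two hidden units, which feed one output unit; together they compute
# XOR, a function no single-layer unit can represent.

def step(x):
    """A simple threshold activation: the unit fires (1) if its input is positive."""
    return 1 if x > 0 else 0

def xor_network(x1, x2):
    h1 = step(x1 + x2 - 0.5)    # hidden unit 1: roughly "x1 OR x2"
    h2 = step(x1 + x2 - 1.5)    # hidden unit 2: roughly "x1 AND x2"
    return step(h1 - h2 - 0.5)  # output: OR but not AND, i.e. XOR
```

Deep learning stacks many such layers (with smoother activations and learned weights), which is what lets it capture the high-level features of enormous datasets.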
The common thread behind supervised and unsupervised machine learning is data. Data is driving both AI and the need for AI, because so much data is readily available now that it takes the power of AI algorithms to sift through it and extract meaning.
Machine learning programs are exceedingly good at sifting through data and finding patterns. Our own brains cope well with finding patterns and correlations between two variables, because we are so accustomed to two-dimensional graphs. But can you imagine a ten-dimensional graph? That is far beyond what our three-dimensional brains can fathom, but for AI it is entirely manageable.
With Big Data comes big complexity, and the job of AI is to extract meaning from the noise. Image credit 
Many familiar AI systems were built on top of enormous data sets, from which they draw their intelligence. A simple example is the assistants built into smartphones - Siri and Google Assistant. If you ask one of these assistants what the weather will be like tomorrow, they are not figuring it out from first principles but looking up the information on the Internet. Machine learning algorithms have similarly been deployed to process vast volumes of medical data, to suggest appropriate (and extremely precise) medical diagnoses.
These systems are limited by the availability of data, so to use them in business to gain competitive advantage, you need a big dataset. Gaps in the data can have a significant impact. Think of driverless cars: they are trained to avoid obstacles using a huge number of possible cases, for example, “You are approaching a zebra crossing and a mother and child are standing on the pavement. You should stop”. But the real world generates new anomalies every day, and a missing case could lead, at best, to undesirable processing delays.
Much of human intelligence is based on data: our past experiences, successes and failures, and everything we have read or been told. We also utilise common sense and reasoning. AI systems are now beginning to emerge that exhibit some features of these areas of intelligence, in the hope that soon the gaps in our data will matter much less.
There’s no doubt that AI is a powerful tool, nor that it is becoming increasingly available. No longer just a tool for the tech giants of the world, it can be deployed in most businesses to bring a wealth of benefits.
Machine learning algorithms can be used with any set of data to improve insights and make valuable predictions. This can apply to customer or purchasing data to predict who is likely to buy your products and how you can target those markets. Applied to manufacturing data, it can be used to improve your processes, speeding up production and reducing waste.
Gaining insight and predictions from your data empowers you to improve productivity, increase revenue and prevent lost opportunities
You may already be utilising process automation to save time in various areas of business, by handing over repetitive tasks to a machine. Deploying AI can enhance this by enabling the system to make decisions such as ‘how can I prioritise this workload most efficiently?’ as well as performing the tasks themselves. Employing AI can also broaden the scope of the tasks that can be automated. Chatbots (a common application of natural language processing) can free up time usually spent on customer service, without compromising on customer experience.
There are, of course, many more benefits that AI can bring to your organisation that are beyond the scope of this article. For further reading, try these blog posts:
AI projects can take ten minutes or ten months, depending on what you want to achieve. Download our free 7 Quick Wins Projects guide to start with seven small AI projects that are quick to implement and will give you quick returns. The projects include:
Before you jump into a larger AI project, you should develop an AI strategy for your business. This will ensure that you are sponsoring AI projects that support your business goals and strategy, rather than spending money on AI for the sake of it.
When you are confident in your AI strategy, you can start developing AI to support your business goals.
 By Martin Grandjean - Grandjean, Martin (2014). “La connaissance est un réseau”. Les Cahiers du Numérique 10 (3): 37-54. DOI:10.3166/LCN.10.3.37-54., CC BY-SA 3.0
 By Axd - Own work, CC BY-SA 4.0