Understanding AI

Author: Steph Locke

While the powers and capabilities of Artificial Intelligence (AI) continue to expand and evolve, the same cannot be said for the general understanding of the topic. This has resulted in AI becoming a blanket term that gets misused and thrown around for all sorts of things, including things that it’s not. People also have very unrealistic expectations of what AI can do, leading in some cases to fear and paranoia over things like potential world domination, and in others to disillusionment when the AI doesn’t perform to the high standards they were hoping for.

Clearing up the confusion

So how do we avoid the confusion that leads to negative hype and connotations for AI? A good start would be to use more specific terms that help us understand what a given AI actually does. For example, Machine Learning is a branch of AI in which machines learn things on their own. That’s easier to grasp, right? The main concept here is that by showing a machine many examples of the same thing, it learns the patterns for itself, rather than someone having to program each rule individually.
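To make that idea concrete, here is a minimal sketch of learning a pattern from labelled examples rather than hand-coding rules. It assumes Python with the scikit-learn library, and the tiny “spam email” dataset below is invented purely for illustration.

    # A minimal sketch: learning a pattern from labelled examples
    # rather than programming each rule individually.
    # Assumes Python with scikit-learn installed; the data is made up.
    from sklearn.tree import DecisionTreeClassifier

    # Each example: [number of exclamation marks, contains the word "free" (1 = yes, 0 = no)]
    examples = [
        [0, 0], [1, 0], [0, 1], [2, 0],   # ordinary emails
        [5, 1], [7, 1], [4, 1], [6, 0],   # spammy emails
    ]
    labels = ["ok", "ok", "ok", "ok", "spam", "spam", "spam", "spam"]

    # The model infers the dividing pattern from the examples on its own.
    model = DecisionTreeClassifier()
    model.fit(examples, labels)

    # The learned pattern generalises to messages it has never seen.
    print(model.predict([[8, 1], [1, 0]]))  # e.g. ['spam' 'ok']

The same principle scales up: swap the eight toy rows for tens of thousands of labelled images or transactions and the learned patterns can start to rival a hand-built rulebook.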

While there are some very clever applications of AI and machine learning, it is important to understand that the results come from analysing vast amounts of data very quickly. AI is only as good as the data you feed it. For a machine learning system to outperform humans at a task like chess, it first has to be fed data on thousands of games. For an AI system to match or outperform humans on medical diagnoses, it must first analyse thousands of images to identify patterns and rules. That is to say, AI systems may get there more quickly, or be able to take a larger range of data into account, but that doesn’t necessarily make them smarter.

In one example, an AI approach has been developed that can identify cervical precancer with greater accuracy than humans. The algorithm was fed over 60,000 cervical images from a cervical cancer screening study in order to reach its current diagnostic accuracy. This wonderful application of AI is just one of thousands of systems that are each good at one very specific task, which is why they are known as Artificial Narrow Intelligence (ANI). In most cases, these systems are designed to speed up a task or pick up details using amounts of data that a human simply can’t process as quickly.

To quash the fear around AI, we need to take into account what AI is and isn’t yet capable of. Yes, we already have multiple ANIs, but we have not yet created, and are likely quite far from creating, an Artificial General Intelligence (AGI) capable of performing multiple tasks, let alone an Artificial Super Intelligence (ASI), which would be far more intelligent than humans, able to make its own decisions and able to change itself, a far more ominous prospect.

Rationalising the fear

Of course, this brings us to the point: we might still be far from creating an ASI, but should we be worried about the future? Many experts believe this point is inevitable, since human society is constantly advancing and it would take a catastrophic event to halt that progress; therefore, at some point we will create Artificial Intelligence that supersedes our own, bringing about unknown, potentially detrimental consequences. Others, by contrast, believe that AI will always be limited by what humans want it to become, since human input is what empowers AI.

Additionally, as George Hosu puts it so well in this article, “Human civilization doesn’t advance by breeding smarter and smarter humans, it advanced by building better and better tools.” His point is that many of the greatest minds in history would not have been able to reach the conclusions they did if they had existed at an earlier point in time. Their discoveries were not due to inheriting smarter brains, but to the accumulation of tools over time. With a wider base of knowledge and tools as a starting point, the fresh minds of the future can begin working on new problems, rather than having to re-make every discovery of the past for themselves before they can work on anything new.

Therefore, our future discoveries are not limited by the bounds of our current knowledge, but by the reach of our tools, and AI is just another tool that allows human civilisation to advance in ways not previously possible. It may well be that AI acts as an accelerator of human knowledge and an aid, rather than something that surpasses our own intelligence and abilities. A natural barrier to the evolution of society has always been resources, and they remain a more likely limiting factor on what we will be able to achieve with AI.

Disillusion and the Hype Cycle

The final publicity problem we encounter comes from inflated expectations. The portion of the population who are more comfortable with AI, and in fact have high hopes for its implementation, may fall prey to this phenomenon. If you take a look at this year’s Gartner Hype Cycle for AI, almost every area of AI still has a long way to go to reach its respective plateau of productivity.

But it is far easier to overcome disillusionment than it may seem, and it all feeds back into fully understanding what AI is capable of and setting realistic expectations when working with it. When it comes to tackling your first business problem using AI, it is important to start with something small and achievable in order to build a momentum of success. There are several examples where even the big tech companies have launched “moonshot” projects that have ended in disaster. These companies are well established and able to bounce back; smaller companies, however, may lose faith and struggle to maintain their relationship with AI.

Final thoughts

Most of the hype around AI stems from misunderstanding and fear of the unknown, which can be amplified by confirmation bias, whereby people are more likely to remember and believe the stories that confirm their existing beliefs. However, advances in AI are not simply going to stop because of negative public perception, so as AI develops and becomes more ubiquitous, trust and understanding will grow and the hype will die down.

Take a look at virtual personal assistants like Siri and Google Assistant. While these technologies may come with your device and you can choose whether or not to use them, people are beginning to rely on them and are choosing to put things like Amazon’s Alexa in their homes, regardless of any negative associations.

On the industry side of things, as companies begin to embrace AI and more consistently find success, people will develop more realistic expectations of what can be achieved and the field will reach a plateau of productivity. This process will of course unfold at different rates across the various branches of AI, but progress is inevitable.
