
Artificial Intelligence Part 1: From Hype to Hero

This is the first blog in our three-part series on Artificial Intelligence.

Artificial intelligence (AI) is broadly defined as the development of computer systems that can adapt to changing circumstances and perform tasks that normally require human intelligence. While many consider AI a simple buzzword, the concept has been around since at least the 1950s. Pioneering computer scientists like Alan Turing posited that computers would one day be able to emulate the work of humans and perform “intelligent” tasks, such as playing chess. Over the last 60 years, the hype and hope around AI have come in waves, as advances in computing technology made analyzing huge data sets possible and opened doors to new applications.

In the last two decades, AI has taken major strides in capability. We can point to IBM’s Deep Blue narrowly defeating world chess champion Garry Kasparov in 1997, and its Watson AI defeating Jeopardy! champions Brad Rutter and Ken Jennings in 2011, as evidence that artificial intelligence has become mainstream.

Today, we rely on elements of artificial intelligence in many facets of our everyday lives.

Artificial intelligence is now an intrinsic part of our lives, and adoption of the technology promises to accelerate rapidly in the coming years. In fact, a recent PwC report projects that the total economic impact of AI will reach $15.7 trillion by 2030.

Yet, for many, the adoption and growth of AI presents a major concern, with skeptics pointing to everything from the potential loss of jobs to automation to fears about whether computers can safely perform the complex tasks, such as driving, for which they are being designed.

Stay tuned next week as we explore the potential pitfalls of the rise of AI, including how cybercriminals are ramping up their own use of artificial intelligence.

