In 1956, John McCarthy, the father of Artificial Intelligence (AI), brought together expert thinkers from multiple disciplines to explore how machines could “mimic” certain human traits. These thinkers came from the fields of Computer Science, Engineering, Logic, Mathematics, and Psychology, and wanted to find out how machines could:
- Use language
- Form abstractions and concepts
- Solve kinds of problems reserved for humans
- Improve themselves
Today, the field of AI also draws from Linguistics, Philosophy, Statistics, Economics, and other disciplines. As these fields have advanced and been folded in, the definition of AI has evolved as well. What was once considered AI is now considered just one of many things a computer system does. In my view, AI is a capability: a computer system that can independently solve routine and non-routine problems through self-learning has AI capabilities. These capabilities range from Optical Character Recognition (OCR) and Natural Language Processing (NLP) to Computer Vision, motion manipulation (in robotics), and others.
Under the hood, AI-capable computer systems are a combination of algorithms, data, hardware, and software. When writing algorithms and eventually code for AI, software developers cannot realistically account for every scenario a computer system might encounter and specify what to do in each one. Instead, AI-capable computer systems are built to learn from experience: they are trained on baseline datasets and then extrapolate from that training to new scenarios.
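The idea of learning from a baseline dataset and extrapolating to unseen cases can be sketched in a few lines. Below is a minimal, hypothetical illustration (the data points and labels are invented, and a simple nearest-neighbor rule stands in for whatever learning algorithm a real system would use):

```python
import math

# Baseline ("training") dataset: (feature vector, label) pairs.
# These values are invented purely for illustration.
baseline = [
    ((1.0, 1.0), "small"),
    ((1.2, 0.8), "small"),
    ((8.0, 9.0), "large"),
    ((9.5, 8.5), "large"),
]

def classify(point):
    """Label an unseen point by the closest example from the baseline data."""
    _, label = min(
        ((math.dist(point, features), label) for features, label in baseline),
        key=lambda pair: pair[0],
    )
    return label

# The system was never shown these exact points; it extrapolates
# from the nearest examples it was trained on.
print(classify((1.1, 0.9)))  # → small
print(classify((9.0, 9.0)))  # → large
```

Real systems use far richer models than nearest-neighbor lookup, but the shape is the same: behavior is induced from example data rather than hand-coded for every scenario.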
However, the problem with creating AI-capable computer systems is that they remain highly dependent on the quality of the underlying algorithms and datasets, both of which are created or provided by humans. As humans, we are prone to bias when designing algorithms and to supplying incomplete data, and both flaws can produce AI-capable computer systems that are biased and make incorrect decisions.
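To make the data-quality point concrete, here is a hypothetical sketch of the same nearest-neighbor idea trained on an incomplete baseline: the dataset contains no "large" examples at all, so the system confidently mislabels large inputs (again, all values are invented for illustration):

```python
import math

# Incomplete baseline dataset: one whole category ("large") is missing.
biased_baseline = [
    ((1.0, 1.0), "small"),
    ((1.2, 0.8), "small"),
]

def classify_biased(point):
    """Nearest-neighbor labeling against the incomplete baseline."""
    _, label = min(
        ((math.dist(point, features), label) for features, label in biased_baseline),
        key=lambda pair: pair[0],
    )
    return label

# A clearly "large" input still gets labeled "small" -- a wrong decision,
# caused not by the algorithm but by the gap in the training data.
print(classify_biased((9.0, 9.0)))  # → small
```

The algorithm is unchanged from before; only the data is worse, yet the decisions degrade. This is why data curation matters as much as the model itself.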
For organizations looking to improve, AI-capable computer systems can help enhance customer experiences, improve operations, and provide insights for decision-making. On the flip side, AI-capable computer systems built on weak algorithms and/or bad data can result in terrible decision-making. Now that we understand what AI is and how it can potentially be used, let’s ask the following questions:
| Today | In the Future |
| --- | --- |
| Who is creating the underlying algorithms and cleaning the data? | Who should be creating the underlying algorithms and cleaning the data? |
| What happens when AI-capable computer systems make bad decisions? | What should happen when AI-capable computer systems make bad decisions? |
| Where are AI-capable computer systems relevant for decision-making? | Where should AI-capable computer systems be relevant for decision-making? |
| When is data being acquired? | When should data be acquired? |
| Why are AI-capable computer systems being used? | Why should AI-capable computer systems be used? |
As we can see, the human factor in AI-capable computer systems is a real threat/opportunity. And while we are far away from creating sentient beings capable of general intelligence, we do already have AI-capable computer systems that can perform narrower tasks better than humans. What this means is that today and in the near future, specific tasks will be given to these AI-capable computer systems rather than to humans. Keeping this in mind, organizations and governments are trying to figure out how to address this AI wave and put programs in place for when certain jobs go extinct.