AI is the digital distillation of a technological revolution that is facilitating a long-overdue evolution of the human mind. As fear-inducing as anything disruptive and new, AI can open new avenues of intelligence in human minds, avenues that help us understand and attack society’s greatest challenges today.
AI is traditionally divided into two categories, which theorists and AI experts call Artificial General Intelligence (AGI) and Artificial Narrow Intelligence (ANI). AGI is designed to be capable of performing a wide variety of intellectual tasks, while ANI is designed to perform a single task or a narrow set of related tasks.
AGI is designed to be flexible and adaptable, capable of handling new tasks and situations without human intervention. This draws on what researchers call ‘unsupervised learning’, in which a system learns patterns from data without being given labelled examples or explicit instructions. The difference between AGI and ANI lies in their scope of intelligence and their ability to generalise knowledge across different contexts. AGI rests on a variety of technical aspects that bear deeper discussion. One such aspect is the sophistication of its cognitive architecture: a system that integrates perception, attention, memory, language and reasoning. AGI is envisioned as being able to perform any intellectual task a human can, and to apply knowledge learned in one context to new, unfamiliar situations.

AGI is what you would consider the antagonist in pop-culture movies where computers ‘take over’ civilisation and enslave humans. The fear emanates from the very real possibility that an AGI system could keep learning and make decisions that even its creators cannot predict. ANI, by contrast, is designed to perform a specific task and cannot generalise knowledge or skills to situations outside its programmed domain. While AGI is still largely in the realm of theoretical research and development, ANI is already in widespread use across a variety of industries and applications.
ANI products like ChatGPT have existed for some time now but have recently taken the world by storm. Other tools such as Q-Chat, DALL-E 2 and Synthesia are also on the rise, bringing AI into art and academia through fun and adaptive experiences. ChatGPT is a chatbot that lets users converse on a wide variety of topics and responds with human-like text.
ChatGPT, and similar solutions, are particularly adept at automating routine and repetitive tasks, such as data entry and customer service, which could displace lower-skilled workers. Many experts believe AI will transform industries in significant ways, creating new opportunities for growth and innovation. In industries from healthcare to logistics and manufacturing, for example, AI can optimise transportation networks, help develop new materials, and even simplify production processes.
AI may well lead to the displacement of some jobs. BuzzFeed’s layoffs came at almost the same time as its new deal with OpenAI to leverage ChatGPT for its articles. However, we should not forget that disruptive technology also creates new jobs and skill sets. AI may create demand for workers with expertise in machine learning, data science and natural language processing. It may also create opportunities for workers to specialise in areas where human judgement and creativity will remain critical. The impact of AI on jobs and industries is likely to be uneven, with some industries experiencing greater disruption than others. But the same can be said of nearly every disruptive technology introduced into legacy business sectors; the printing press and the telephone created vastly more opportunities in the long term. In the case of AI, workers in low-wage, low-skill occupations may be more vulnerable to job loss than those in high-wage, high-skill occupations.
Overall, while there is still much uncertainty about the impact of AI on jobs, it is evident that the technology will have significant implications for the future of work.
Radhika Chhabra is a consultant and all views expressed are personal