Evolution of Artificial Intelligence
The first wave was in the 1950s and 1960s, when AI developed rapidly as rule-based systems that used sets of predetermined “rules of choices” to make decisions and solve problems. Progress slowed in the 1970s owing to a lack of computational power and scalability, the first “AI winter”. There was a brief thaw in the 1980s, when expert systems mimicking the human decision-making process became popular. However, as these systems showed the same limitations as earlier systems, interest and funding in AI diminished once again.

The second wave started in the 1990s, based on statistical learning. By analysing large quantities of data, machines could revise rules and provide more flexibility. Machine learning emerged as a subset of AI that uses statistical techniques to detect patterns and make predictions based on data, and big data and the rise of deep learning further propelled significant advancements. The resurgence in AI research and application was driven by three major forces, namely, increasing computational power at low cost, unprecedented data volumes and more sophisticated and efficient algorithms. One landmark was the launch in 2007 of ImageNet, a large-scale database of millions of human-annotated images for image recognition. A second was the creation of the digital assistant Siri in 2011. A third, in 2016, was the defeat of the world Go champion by a computer programme.
Nevertheless, at this stage, AI was largely confined to specific tasks within limited domains and did not possess human-like intelligence. This is considered narrow artificial intelligence, or weak AI. The third and current wave gathered momentum in the 2020s, applying significant computing power to systems that are not only rule-based but seek contextual adaptation, factoring in context and explaining decisions. Recent years have seen the emergence of GenAI, driven by advances in natural language processing and large language models, along with exponential growth in computational power and data. This differs from discriminative or predictive AI, which typically analyses and classifies data for particular outcomes such as pattern recognition. GenAI instead mostly identifies relationships in large amounts of data and uses these to create new content. However, this comes at the cost of explainability: it may be difficult to understand the decision-making logic behind a model’s results because the model is probabilistic, and the same conditions or inputs might subsequently produce different outputs. GenAI is trained on huge data sets and uses complex algorithms to generate statistically probable outputs, as well as new content that resembles existing data, whether in the form of texts, images or videos. Public interest in AI was fuelled by the launch of the online application ChatGPT in 2022 by OpenAI. Other examples are DALL-E, which creates images from text, and Sora, which generates video. The growing capabilities and adaptability of AI represent a paradigm shift that is transforming it into a general-purpose technology configurable for different uses.
Between 2024 and 2030, the GenAI market is predicted to grow from $137 billion to $900 billion, a compound annual growth rate of 37 per cent. Expectations are high, comparable to the enthusiasm in the late 1990s that boosted investment during the initial diffusion of the Internet. Nevertheless, there are still high levels of uncertainty. Evidence of the impact of GenAI applications and how they could best be utilized remains limited, particularly in developing countries, and further research and observation are required. Moreover, AI applications are valuable but not infallible. If the training data are incomplete or biased, the model may learn incorrect patterns, make inaccurate predictions or hallucinate, offering information that is not present in the training data or that contradicts a user’s prompt. The rapid development of GenAI has reignited the expectation of developing artificial general intelligence, or “strong AI”, that could even surpass human intelligence and operate autonomously. AI has already outperformed humans in handwriting, speech and image recognition, as well as in reading comprehension and language understanding (figure I.9).
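As a rough check on the growth figure above, the implied compound annual growth rate over the six-year span from 2024 to 2030 can be recomputed directly from the two market endpoints quoted in the text (the function name is illustrative):

```python
# Sketch: verify the implied compound annual growth rate (CAGR)
# behind the projection of a GenAI market growing from $137 billion
# in 2024 to $900 billion in 2030, as quoted above.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` years."""
    return (end / start) ** (1 / years) - 1

rate = cagr(137, 900, 2030 - 2024)
print(f"{rate:.1%}")  # roughly 37 per cent, consistent with the figure in the text
```

The six-year exponent reflects the number of annual compounding periods between the two endpoint years.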
However, human intelligence is complex and multifaceted, and achieving artificial general intelligence may prove more challenging than expected. The driving forces behind the rapid progress of AI in recent decades involve three key leverage points that can trigger transformational cascades for AI, namely, infrastructure, data and skills: infrastructure, in the form of increasing computational power and cost-effective information transfers; data, in the form of the massive and diverse amounts of quality data produced at accelerating speeds; and skills, in the form of advanced expertise in developing and applying sophisticated AI models.