Evolution of Artificial Intelligence

 


To help in understanding the promises and perils of AI, the following sections discuss the different waves of AI and its intersection with other technologies. There is no universal definition of AI, but it is generally considered to be the capability of a machine to engage in cognitive activities similar to those performed by the human brain, such as reasoning, learning and problem-solving. The notion originated in the 1940s with the concept of machine intelligence proposed by Alan Turing, who suggested that machines could simulate both mathematical deduction and formal reasoning. The term artificial intelligence was coined in 1956 for the Dartmouth Summer Research Project on Artificial Intelligence. Since then, progress has been uneven and can be considered to have taken place in three waves (figure I.8).

Note to figure I.8: Graphics processing units (GPUs) were initially designed for computer graphics and image processing but later proved useful in non-graphics calculations and have been widely used in training AI models. GPU performance is expressed in floating-point operations (flops) per second per dollar, adjusted for inflation. The curve represents the best-fit line based on data from 2000 to 2020, with extrapolated figures for 2020 to 2025 (Hobbhahn and Besiroglu, 2022). For the amount of data generated, figures before 2010 are extrapolated from the estimates for 2010 to 2025 (Taylor, 2023).



The first wave was in the 1950s and the 1960s, when AI developed rapidly in the form of rule-based systems that used a set of predetermined “rules of choice” to make decisions and solve problems. Progress slowed in the 1970s due to a lack of computational power and scalability, the first “AI winter”. There was a brief thaw in the 1980s, when expert systems mimicking the human decision-making process became popular. However, as these systems showed the same limitations as earlier ones, interest and funding in AI diminished once again. The second wave started in the 1990s, based on statistical learning: by analysing large quantities of data, machines could revise rules and gain more flexibility. Machine learning emerged as a subset of AI that uses statistical techniques to detect patterns and make predictions based on data, and big data and the rise of deep learning further propelled significant advances. The resurgence in AI research and application was driven by three major forces, namely, increasing computational power at low cost, unprecedented data volumes and more sophisticated and efficient algorithms. One landmark was the launch in 2007 of ImageNet, a large-scale image database for image recognition based on millions of human-annotated images. A second was the creation of the digital assistant Siri in 2011. A third, in 2016, was the defeat of the world Go champion by a computer programme.
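The shift from the first wave to the second can be illustrated with a toy sketch (hypothetical, not from the report): in a rule-based system the programmer hand-writes the decision logic, whereas a statistical learner derives its decision from example data. All names, messages and labels below are invented for illustration.

```python
from collections import Counter

# First wave: a hand-written "rule of choice" encoded by the programmer.
def rule_based(message: str) -> str:
    return "spam" if "free" in message.lower() else "ham"

# Second wave: statistical learning, counting word frequencies per label
# from training examples instead of hard-coding the rule.
def train_counts(examples):
    counts = {"spam": Counter(), "ham": Counter()}
    for label, text in examples:
        counts[label].update(text.lower().split())
    return counts

def statistical(message: str, counts) -> str:
    # Score each label by how often its training data used the message words.
    def score(label):
        total = sum(counts[label].values()) or 1
        return sum(counts[label][w] / total for w in message.lower().split())
    return max(counts, key=score)

training = [
    ("spam", "win a free prize now"),
    ("spam", "claim your free offer"),
    ("ham", "meeting moved to friday"),
    ("ham", "lunch on friday?"),
]
model = train_counts(training)
print(rule_based("free prize inside"))        # the fixed rule fires on "free"
print(statistical("prize offer now", model))  # learned from data; no "free" needed
```

The rule works only where its author anticipated the input; the statistical model generalizes from examples, which is precisely what allowed the second wave to scale with data.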

Nevertheless, at this stage, AI was largely confined to specific tasks within limited domains and did not possess human-like intelligence; this is considered narrow artificial intelligence, or weak AI. The third and current wave gathered momentum in the 2020s, with the use of significant computing power for systems that are not only rule based but seek contextual adaptation, factoring in context and explaining decisions. Recent years have seen the emergence of GenAI, driven by advances in natural language processing and large language models, along with exponential growth in computational power and data. GenAI differs from discriminative or predictive AI, which typically analyses and classifies data for particular outcomes such as pattern recognition; GenAI instead identifies relationships in large amounts of data and uses these to create new content. This comes at the cost of explainability: it may be difficult to understand the decision-making logic behind a model’s results because the model is probabilistic, and the same conditions or inputs might subsequently produce different outputs. GenAI is trained on huge data sets and uses complex algorithms to generate statistically probable outputs, that is, new content that resembles existing data, whether in the form of texts, images or videos. Public interest in AI was fuelled by the launch of the online application ChatGPT by OpenAI in 2022. Other examples are DALL-E, which creates images from text, and Sora, which has been conceived for video creation. The growing capabilities and adaptability of AI represent a paradigm shift that is transforming it into a general-purpose technology configurable for different uses.
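The discriminative/generative distinction can be made concrete with a deliberately tiny sketch (illustrative only, not from the report): a discriminative model maps an input to a label, while a generative model learns relationships in the data and samples new content that resembles it. The corpus and label map below are invented for illustration.

```python
import random

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Discriminative: map an input to a label from a fixed, learned mapping.
labels = {"cat": "noun", "dog": "noun", "mat": "noun", "rug": "noun"}
def classify(word: str) -> str:
    return labels.get(word, "other")

# Generative: learn word-to-next-word relationships, then sample new text.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start: str, length: int, rng: random.Random) -> str:
    words = [start]
    for _ in range(length - 1):
        words.append(rng.choice(follows.get(words[-1], corpus)))
    return " ".join(words)

rng = random.Random(0)  # probabilistic: another seed gives different output
print(classify("cat"))          # a fixed label for a given input
print(generate("the", 6, rng))  # a new word sequence resembling the corpus
```

The seeded random generator also illustrates the explainability point made above: the generative side is probabilistic, so identical inputs can yield different outputs from run to run unless the randomness is fixed.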



Between 2024 and 2030, the GenAI market is predicted to grow from $137 billion to $900 billion, a compound annual growth rate of 37 per cent. Expectations are high, comparable to the enthusiasm in the late 1990s that boosted investment during the initial diffusion of the Internet. Nevertheless, there are still high levels of uncertainty. Evidence of the impact of GenAI applications and how they could best be utilized remains limited, particularly in developing countries, and further research and observation are required. Moreover, AI applications are valuable but not infallible. If the training data are incomplete or biased, the model may learn incorrect patterns, make inaccurate predictions or hallucinate, offering information that is not present in the training data or that contradicts a user’s prompt. The rapid development of GenAI has reignited the expectation of developing artificial general intelligence, or “strong AI”, which could even surpass human intelligence and operate autonomously. AI has already outperformed humans in handwriting, speech and image recognition, as well as in reading comprehension and language understanding (figure I.9).
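The stated growth rate can be verified with simple arithmetic: a compound annual growth rate is the constant yearly growth rate that takes the 2024 value to the 2030 value over six annual steps. A minimal sketch:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly growth factor minus one."""
    return (end_value / start_value) ** (1 / years) - 1

# $137 billion in 2024 to $900 billion in 2030 spans six annual steps.
rate = cagr(137, 900, 2030 - 2024)
print(f"{rate:.1%}")  # approximately 37 per cent, matching the figure cited
```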


However, human intelligence is complex and multifaceted, and it may be more challenging than expected to achieve artificial general intelligence. The driving forces behind the rapid progress of AI in recent decades involve three key leverage points that can trigger transformational cascades for AI, namely, infrastructure, data and skills: infrastructure, in the form of increasing computational power and cost-effective information transfers; data, with regard to the massive and diverse amounts of quality data produced at accelerating speeds; and skills, in the form of advanced expertise in developing and applying sophisticated AI models.




Infrastructure – Infrastructure requirements go beyond the basic provision of electricity and the Internet. They also comprise computing power and server capabilities, such as significant storage, network connectivity, security and backup systems. These are needed to process huge amounts of data, run algorithms, execute models and transmit results worldwide.

Data – Data are the primary input for the training, validation and testing of algorithms, thereby enabling AI models to classify inputs, generate outputs and make predictions. Data are therefore a critical socioeconomic asset in decision-making processes. High-quality, diverse and unbiased data are essential in building effective and trustworthy AI systems. Data and AI systems interact dynamically: more data provide more training for an AI model, making it more popular and thus capable of collecting (and generating) more data. These dynamic and scale effects could widen existing data-related and technological gaps, creating higher entry barriers for latecomers.


Skills – Skills range from basic data literacy to the use or development of appropriate techniques, algorithms and models, and from proficiency in data analysis to a combination of technical expertise and domain knowledge. Such skills empower the workforce to use AI to solve complex problems and increase productivity.

These three leverage points create synergistic, positive feedback loops. More affordable and powerful computational resources enable the processing of vast and complex data sets, allowing sophisticated algorithms to analyse and learn from data more effectively, which in turn accelerates the adoption and development of AI, thereby generating more data. The abundance of diverse data provides a rich foundation for training AI models, enhancing their ability to generalize and perform well in different scenarios and across different tasks. At the same time, advanced algorithms optimize the use of computational power and data, leading to more rapid and efficient AI development. This dynamic interaction fosters continuous improvement and innovation in AI technologies (figure I.10).  



