Page 64 - AI for Good - Impact Report
                  Glossary


                   Algorithm: A step-by-step set of instructions or rules for solving a particular problem.
                   Algorithms form the foundation of programming and software development, and are used to
                   process data, perform calculations, automate reasoning, or carry out other computational
                   tasks.
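                   As a minimal illustration (not part of the report), Euclid's algorithm computes the
                   greatest common divisor of two integers by applying a fixed rule until a stopping
                   condition is reached:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    until the remainder is zero; the last nonzero value is the GCD."""
    while b:
        a, b = b, a % b
    return a
```

                   Calling gcd(48, 18), for example, returns 6. The same recipe produces the same answer
                   for the same inputs every time, which is what makes it an algorithm rather than a
                   heuristic guess.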

                  Artificial intelligence (AI): The branch of computer science focused on creating systems capable
                  of performing tasks that typically require human intelligence, such as reasoning, learning,
                  problem-solving, and understanding natural language.

                   Artificial neural networks (ANNs): Computational models inspired by the human brain's
                  structure and function. They consist of interconnected layers of nodes (neurons) that process
                  and learn from data by adjusting the connections' weights through training, enabling tasks like
                  pattern recognition, classification, and prediction.
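                   The weight-adjustment process described above can be sketched with a single artificial
                   neuron (a perceptron) learning the logical AND function. This is a toy example, not
                   drawn from the report; the training data, learning rate, and epoch count are
                   assumptions chosen for illustration.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single neuron on (inputs, target) pairs by nudging
    each weight in proportion to the prediction error."""
    w = [0.0, 0.0]  # one weight per input connection
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Forward pass: weighted sum plus bias, step activation
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            # Learning: adjust weights toward the correct answer
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Hypothetical training set: the logical AND of two binary inputs
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(and_samples)
```

                   After training, the learned weights classify all four input pairs correctly. Real
                   networks stack many such neurons into layers and use more sophisticated update rules,
                   but the core idea of learning by adjusting connection weights is the same.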

                  Autonomous systems: Systems capable of performing tasks without human intervention, often
                  relying on AI to make decisions in dynamic and uncertain environments. Examples include self-
                  driving cars and drones.

                  Bias: Systematic errors in a model that lead to unfair or incorrect predictions, often due to
                  imbalanced training data or flawed algorithms.

                  Big data: Large and complex data sets that are difficult to process using traditional data
                  processing techniques. AI and machine learning are often employed to analyze and extract
                  meaningful insights from big data.

                   Deep learning: A subset of machine learning that uses neural networks with many layers (often
                  called deep neural networks) to model complex patterns in data. It is particularly effective for
                  tasks such as image and speech recognition.

                  Explainability: The extent to which the internal mechanics of an AI or machine learning system
                  can be explained in human terms.

                  Foundation models: Large, pre-trained models, typically based on deep learning architectures,
                  that can be fine-tuned for a wide range of downstream tasks. These models are trained on
                  vast datasets and provide a general-purpose understanding that can be adapted for specific
                  applications, such as language processing or image recognition.

                  Generative artificial intelligence (GenAI): A class of artificial intelligence models that can create
                  new content, such as text, images, music, or code, by learning patterns from existing data. Unlike
                  traditional AI models, which primarily focus on recognizing patterns or making predictions
                  based on input data, generative AI is designed to produce novel outputs that resemble the
                  training data.

                   Generative pre-trained transformer (GPT): A type of large language model pre-trained on vast
                   amounts of text data that can generate coherent, contextually relevant text, making it useful
                   for tasks like text completion, translation, and conversation.

                   Hallucination: An occurrence in which an AI model generates information or content that is
                   factually incorrect, nonsensical, or not grounded in the data it was trained on, presenting
                   it as though it were true.



