Disaster Management: The Standards Perspective
Pioneering Standards: FG-AI4NDM Best Practices in AI for Disaster Management
FG-AI4NDM developed three core reports that put forth best practices for leveraging AI in disaster management.
The AI for Data Report is dedicated to uncovering and defining methodologies for the
comprehensive management of data for disaster risk reduction. The report emphasizes
several best practices for AI/ML data-related processes in disaster management. Key practices
include promoting technologies that enforce legal and ethical principles to avoid harmful
outcomes, ensuring meticulous data selection and processing to maintain reliability and
accuracy, and utilizing data visualization to enhance understanding and transparency of AI/
ML algorithms. It also stresses the importance of managing data quality, quantity, compatibility,
and appropriateness, and provides guidelines for acquiring, managing, and preparing Earth
Observation (EO) data. Additionally, the report highlights the need to address data bias and to standardize data through organizations such as the Open Geospatial Consortium (OGC). Open data and software are encouraged to foster accessibility and collaboration, and the use of machine-learning operations (MLOps) is recommended to capture the dynamic flow of data and to support lifecycle management.
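To make the last of these recommendations concrete, the following is a minimal sketch of one MLOps practice in this spirit: recording a content-hashed, time-stamped registry entry for each training dataset so that the data behind a model can be traced across its lifecycle. The file names, registry format, and fields are illustrative assumptions, not part of any FG-AI4NDM specification.

```python
# A minimal sketch of one MLOps practice in the spirit of the report:
# registering a content-hashed, time-stamped record for each training
# dataset so the data behind a model can be traced across its lifecycle.
# File names, the registry format, and all fields are hypothetical.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def register_dataset(path: str, registry: str = "data_registry.jsonl") -> dict:
    """Append a content-hashed record for one dataset file to a local registry."""
    data = Path(path).read_bytes()
    record = {
        "file": path,
        "sha256": hashlib.sha256(data).hexdigest(),  # flags silent changes to the data
        "bytes": len(data),
        "registered_at": datetime.now(timezone.utc).isoformat(timespec="seconds"),
    }
    with open(registry, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

if __name__ == "__main__":
    # Demo with a throwaway file; a real EO dataset would be registered the same way.
    Path("demo_eo_tiles.csv").write_text("tile_id,flood_prob\n1,0.83\n")
    print(register_dataset("demo_eo_tiles.csv"))
```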
The AI for Modeling Report investigates how AI can enhance modeling across spatiotemporal
scales by extracting complex patterns and deriving insights from increasing volumes of
geospatial data for disaster risk reduction. It also focuses on key aspects such as data preparation
for training, AI development, and evaluation, aiming to refine and advance AI-driven modeling
techniques. The best practices for developing AI models in natural hazard management, as
highlighted in the report, emphasize a context-specific evaluation approach that includes
human discrimination, problem benchmarks, and peer confrontation. It is crucial to use a wide
range of performance metrics, such as confusion matrices and the Pearson correlation coefficient, to ensure robustness, reliability, and explainability. Additionally, addressing issues such as data
poisoning and ensuring the scalability and peer review of models are essential to maintain
their accuracy, reliability, and usefulness in high-risk scenarios. The report also underscores the
importance of involving domain experts such as meteorologists and emergency responders in the
testing and evaluation phases to ensure the models align with real-world needs and provide
valuable insights for disaster response and recovery.
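As an illustration of the two metrics named above, the sketch below computes a confusion matrix for a binary hazard classifier and the Pearson correlation coefficient for a continuous forecast. The labels and river-level values are invented for demonstration and are not results from any FG-AI4NDM model.

```python
# Illustrative computation of two evaluation metrics named in the report:
# a confusion matrix for a binary hazard classifier and the Pearson
# correlation coefficient for a continuous forecast. All data are made up.
import numpy as np

def confusion_matrix(y_true, y_pred):
    """Return a 2x2 matrix [[TN, FP], [FN, TP]] for binary 0/1 labels."""
    m = np.zeros((2, 2), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical labels: did the model predict a flood event (1) or not (0)?
y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]
print(confusion_matrix(y_true, y_pred))   # [[3 1]
                                          #  [1 3]]

# Hypothetical observed vs. forecast river levels (metres).
observed = [2.1, 2.4, 3.0, 3.8, 4.1]
forecast = [2.0, 2.5, 2.9, 3.5, 4.3]
print(round(pearson_r(observed, forecast), 3))
```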
The AI for Communications Report examines how AI-based communication systems can be
used before, during, and after disasters occur. This report covers various systems such as
alerts, early warnings, forecasts, hazard maps, decision support tools, dashboards, and chatbots. It emphasizes the importance of transparency, advocating open-source and open-data approaches and support for community capacity in co-creating machine-learning projects. It
suggests integrating AI into existing communication frameworks and ensuring high-quality,
representative data aligned with FAIR principles. For decision support systems, it recommends
seamless information sharing and multi-stakeholder coordination. For chatbots, it advises
embedding them into widely used applications and considering local dialects. The report also
highlights the need for standardized warning dissemination protocols, such as the Common Alerting Protocol (CAP), to ensure effective communication. These practices aim to enhance
public safety, community resilience, and the overall effectiveness of AI-based tools in disaster
management. It further explores the development and implementation of these systems
from both technical and social perspectives, including stakeholder involvement and ethical
considerations.
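For readers unfamiliar with CAP, the sketch below uses Python's standard library to assemble a minimal CAP 1.2 alert. The element names and enumerated values (status, msgType, urgency, and so on) follow the OASIS CAP 1.2 schema, while the identifier, sender, and hazard details are hypothetical placeholders, not an operational alert.

```python
# A minimal sketch of a Common Alerting Protocol (CAP) 1.2 message built
# with Python's standard library. Identifiers, sender, and hazard details
# below are illustrative placeholders, not a real-world alert.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.2"
ET.register_namespace("", CAP_NS)

def build_alert() -> bytes:
    alert = ET.Element(f"{{{CAP_NS}}}alert")

    def add(parent, tag, text):
        el = ET.SubElement(parent, f"{{{CAP_NS}}}{tag}")
        el.text = text
        return el

    # Required alert-level fields (CAP 1.2).
    add(alert, "identifier", "example.org-2024-0001")   # hypothetical ID
    add(alert, "sender", "alerts@example.org")          # hypothetical sender
    add(alert, "sent", datetime.now(timezone.utc).isoformat(timespec="seconds"))
    add(alert, "status", "Exercise")                    # drill, not a live alert
    add(alert, "msgType", "Alert")
    add(alert, "scope", "Public")

    # One <info> block describing the hazard.
    info = ET.SubElement(alert, f"{{{CAP_NS}}}info")
    add(info, "category", "Met")
    add(info, "event", "Flash Flood")
    add(info, "urgency", "Immediate")
    add(info, "severity", "Severe")
    add(info, "certainty", "Observed")
    add(info, "headline", "Flash flood warning for the river basin")

    area = ET.SubElement(info, f"{{{CAP_NS}}}area")
    add(area, "areaDesc", "Lower river basin")

    return ET.tostring(alert, xml_declaration=True, encoding="utf-8")

if __name__ == "__main__":
    print(build_alert().decode("utf-8"))
```

Setting status to Exercise marks the message as a drill, so a test of this kind cannot be mistaken for a live public alert when disseminated through CAP-aware channels.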