International Journal of AI and Advanced Computing

Open access
Volume 1, Issue 1
Research article ● Open access

Spatio-Temporal Variability of Climate Extremes in the Nouhao Sub-Basin, Burkina Faso: A Comprehensive Analysis of Trends and Spatial Patterns (1981-2020)

Noba
Pages 52-59

Abstract

Extreme weather events are attracting growing scientific interest due to their profound and often devastating impacts on natural ecosystems and socio-economic systems, particularly in regions where livelihoods depend heavily on climate-sensitive activities such as rain-fed agriculture. However, despite the vulnerability of the Sahel to climate variability and change, relatively few studies have focused specifically on the detailed analysis of extreme weather events in the sub-national basins of Burkina Faso, a landlocked Sahelian country in West Africa facing significant water resource challenges. This study addresses that gap by providing a comprehensive analysis of the spatio-temporal variability and trends of climate extremes, covering both precipitation and temperature, in the Nouhao sub-basin over the forty-year period from 1981 to 2020. The analysis combines observational station data from the National Meteorological Agency of Burkina Faso with high-resolution ERA5 reanalysis data from the European Centre for Medium-Range Weather Forecasts. A set of core climate extreme indices, selected from the suite defined by the Expert Team on Climate Change Detection and Indices (ETCCDI), was calculated using the RClimdex software package. These indices include consecutive dry days, consecutive wet days, maximum one-day precipitation, and maximum five-day precipitation for rainfall, as well as the percentages of cool nights, warm nights, cool days, and warm days for temperature. Trends were assessed with the non-parametric Mann-Kendall test and their magnitudes quantified with Sen's slope estimator. The spatial structure and distribution of the climate extremes across the basin were analyzed using geostatistical interpolation, specifically kriging, within a Geographic Information System framework. The study reveals a complex and nuanced picture of climate variability in the Nouhao sub-basin. The extreme precipitation indices show strong decadal variability throughout the study period, but no clear, consistent, or monotonic long-term trend emerges for any rainfall-based index over the full forty years. This suggests that the rainfall regime in this part of the Sahel is characterized more by multi-decadal oscillations and high interannual variability than by a simple, unidirectional shift towards drier or wetter conditions. In stark contrast, the extreme temperature indices reveal a much clearer and more consistent signal, pointing towards a gradual but unmistakable warming of the temperature regime across the basin. The findings show a general, statistically discernible decrease in cold extremes, evidenced by declining frequencies of cool nights and cool days, and a concurrent increase in hot extremes, reflected in rising frequencies of warm nights and warm days. The warming is particularly pronounced at night, with the frequency of warm nights showing a statistically significant increase in the most recent decade. The spatial analysis further illustrates these changes, showing how the patterns of temperature extremes have evolved across the basin over the four decades. These results constitute a critical climate signal for the Nouhao sub-basin, providing scientific evidence that can inform sustainable water resource management, support the development of climate change adaptation plans for local communities, and contribute to broader efforts to enhance resilience in the face of a changing climate.
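As a pointer for readers who wish to reproduce the trend analysis described above, the sketch below implements the Mann-Kendall test (without tie corrections) and Sen's slope estimator in plain Python and applies them to a synthetic 40-year warm-nights (TN90p-style) series. It is a minimal illustration under those assumptions, not the study's RClimdex workflow; libraries such as pymannkendall offer production-ready versions.

```python
# Minimal sketch of the trend tests named in the abstract: the
# non-parametric Mann-Kendall test and Sen's slope estimator applied
# to an annual climate-index series. The series below is SYNTHETIC;
# the study's inputs come from station data and ERA5 reanalysis.
import numpy as np
from scipy.stats import norm

def mann_kendall(series):
    """Return the Mann-Kendall S statistic, Z score, and two-sided p-value."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs.
    s = int(sum(np.sign(x[j] - x[i])
                for i in range(n - 1) for j in range(i + 1, n)))
    # Variance of S under the null hypothesis (no tie correction here).
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p

def sens_slope(series):
    """Median of all pairwise slopes: a robust estimate of trend magnitude."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(n - 1) for j in range(i + 1, n)]
    return np.median(slopes)

# Synthetic 40-year series (1981-2020) of warm-night frequency (%).
rng = np.random.default_rng(0)
years = np.arange(1981, 2021)
tn90p = 10 + 0.15 * (years - 1981) + rng.normal(0, 2, size=years.size)

s, z, p = mann_kendall(tn90p)
print(f"S={s}, Z={z:.2f}, p={p:.4f}, Sen's slope={sens_slope(tn90p):.3f} per year")
```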
Research article ● Open access

AI Multi-Agent Reinforcement Learning for Conflict Resolution and Forecasting in International Relations Policy

Miller
Pages 43-51

Abstract

In an increasingly interconnected yet volatile world, complex global challenges necessitate innovative approaches to conflict resolution and accurate forecasting of geopolitical developments. Traditional analytical methods in international relations often struggle to capture the dynamic, multifaceted interactions among nation-states and non-state actors, highlighting the need for more sophisticated modeling tools that can account for strategic behavior, emergent phenomena, and the nuanced interplay of diverse objectives. This paper explores the integration of Multi-Agent Reinforcement Learning (MARL) and Large Language Models (LLMs) as a novel algorithmic framework designed to advance proactive diplomacy and evidence-based policymaking. By leveraging the predictive capabilities of MARL within complex international relations simulations and combining them with the interpretive power of LLMs, this research proposes a comprehensive approach for analyzing geopolitical dynamics, simulating diplomatic negotiations, and optimizing strategic interventions. This integration supports a more holistic understanding of complex international phenomena, allowing emergent social outcomes to be analyzed from both macro-level trends and micro-level interactions; it thereby illuminates the causal mechanisms underpinning international events and helps predict the ramifications of policy interventions. The proposed framework enables the development of human-like agents capable of executing multi-agent missions encompassing strategic planning, goal-oriented negotiation, and sophisticated social reasoning. These LLM-based agents can refine their strategies through self-play and memory augmentation, evolving continuously without direct human intervention and allowing policy decisions to be evaluated rigorously in simulation before real-world implementation. The approach not only enhances the quantitative assessment of geopolitical factors but also provides rich qualitative insight into individual-level social mechanisms, bridging interpretability and predictability in international relations research. By offering a scalable, adaptable framework for understanding intricate international dynamics, these models let policymakers explore many scenarios and potential policy outcomes in a safe, simulated environment, optimizing diplomatic initiatives and mitigating unforeseen negative consequences. Continuous refinement of the models, incorporating lessons from real-world events and expert geopolitical analysis, keeps them relevant and accurate in an ever-evolving international landscape, ultimately contributing to more robust, ethically sound, and effective strategies for fostering global peace and stability.
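The self-play and memory-augmentation loop described in this abstract can be sketched, in deliberately toy form, as two negotiating agents that adjust a one-parameter concession policy from stored outcomes. Everything here (the Agent class, reward scheme, and update rule) is a hypothetical stand-in; in the paper's framework each agent would be backed by an LLM with a far richer state and action space.

```python
# Toy self-play negotiation loop with memory augmentation. All names
# and rules are ILLUSTRATIVE, not the paper's implementation.
import random

class Agent:
    def __init__(self, name, concession=0.5):
        self.name = name
        self.concession = concession  # fraction of the stake this agent will give up
        self.memory = []              # past (offer, accepted, reward) tuples

    def propose(self):
        # Offer a share of the stake, perturbed slightly for exploration.
        return max(0.0, min(1.0, self.concession + random.uniform(-0.1, 0.1)))

    def respond(self, offer):
        # Accept if the offer meets this agent's current demand.
        return offer >= 1.0 - self.concession

    def update(self, offer, accepted, reward):
        # Memory-augmented update: nudge the policy toward rewarded behavior.
        self.memory.append((offer, accepted, reward))
        delta = -0.01 * reward if accepted else 0.02
        self.concession = min(1.0, max(0.0, self.concession + delta))

a, b = Agent("A"), Agent("B")
for episode in range(200):
    offer = a.propose()             # A offers B a share of the stake
    accepted = b.respond(offer)
    reward_a = (1.0 - offer) if accepted else 0.0
    reward_b = offer if accepted else 0.0
    a.update(offer, accepted, reward_a)
    b.update(offer, accepted, reward_b)

deals = sum(1 for _, acc, _ in a.memory if acc)
print(f"A's final concession policy: {a.concession:.2f}, deals struck: {deals}")
```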
Research article ● Open access

Fake News Detection Using Machine Learning: A Comprehensive Review of Techniques, Comparative Analysis, and a Novel Hybrid Ensemble Framework

Musa Asif
Pages 32-42

Abstract

The rapid spread of misinformation on social media poses significant threats to democratic processes, public health, and social stability. Automated fake news detection using machine learning has become essential to support fact-checkers and platform moderation. This study presents a systematic comparative analysis of machine learning-based fake news detection approaches published between 2020 and 2025, focusing on classical, hybrid, and deep learning methods. Classical classifiers such as Naïve Bayes, Support Vector Machines, and Decision Trees achieve moderate accuracies (70–86%) but are limited by shallow feature representation and sensitivity to class imbalance. Hybrid and deep learning approaches improve performance (88–91%) but introduce higher computational complexity and resource requirements. Building on this analysis, we propose a hybrid ensemble framework combining Logistic Regression, Random Forest, and XGBoost with TF-IDF feature extraction and Synthetic Minority Oversampling Technique (SMOTE). Experimental evaluation on the FakeNewsNet dataset demonstrates superior performance, achieving 96.96% accuracy, 96.9% F1-score, and an AUC of 0.994. Cross-validation confirms robustness; however, cross-domain testing reveals reduced generalizability (78.3% accuracy), and adversarial evaluation highlights vulnerability to text manipulation. Computational costs are higher than single models, and interpretability decreases due to ensemble complexity. The findings demonstrate that carefully designed ensemble methods can substantially outperform individual classifiers, but challenges related to domain adaptation, adversarial robustness, computational efficiency, and explainability remain critical for real-world deployment. The study provides practical guidance for developing balanced, deployable fake news detection systems and outlines future research directions in cross-domain generalization, model compression, and multimodal integration.
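To make the pipeline described in this abstract concrete, the sketch below wires TF-IDF features, SMOTE oversampling, and a soft-voting ensemble of Logistic Regression, Random Forest, and XGBoost together using scikit-learn, imbalanced-learn, and xgboost. As a runnable stand-in for FakeNewsNet (which requires a separate download), it uses two categories of scikit-learn's 20 Newsgroups corpus; all hyperparameters are illustrative, not the paper's configuration.

```python
# Hybrid ensemble sketch: TF-IDF + SMOTE + soft-voting (LR, RF, XGBoost).
# Requires: scikit-learn, imbalanced-learn, xgboost.
from sklearn.datasets import fetch_20newsgroups
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier

# Two-class text corpus as a stand-in for FakeNewsNet.
data = fetch_20newsgroups(subset="all",
                          categories=["sci.med", "soc.religion.christian"],
                          remove=("headers", "footers", "quotes"))
X_txt_train, X_txt_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, stratify=data.target, random_state=42)

# TF-IDF feature extraction, fitted on the training split only.
tfidf = TfidfVectorizer(max_features=20000, ngram_range=(1, 2))
X_train = tfidf.fit_transform(X_txt_train)
X_test = tfidf.transform(X_txt_test)

# SMOTE balances classes in the training set only (never the test set).
X_train, y_train = SMOTE(random_state=42).fit_resample(X_train, y_train)

# Soft-voting hybrid ensemble of the three base learners.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
        ("xgb", XGBClassifier(eval_metric="logloss", random_state=42)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)

proba = ensemble.predict_proba(X_test)[:, 1]
pred = ensemble.predict(X_test)
print(f"accuracy={accuracy_score(y_test, pred):.4f}  "
      f"F1={f1_score(y_test, pred):.4f}  AUC={roc_auc_score(y_test, proba):.4f}")
```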
Research article ● Open access

Cloud Computing for Artificial Intelligence: A Comprehensive Review of Infrastructure, Performance Optimization, and Future Directions

Gonzalez Alice
Pages 13-31

Abstract

The rapid advancement of artificial intelligence (AI), particularly deep learning, has generated unprecedented demands for scalable computational infrastructure. Cloud computing has emerged as a critical enabler of modern AI systems by providing elastic scalability, high-performance computing resources, and cost-efficient deployment models. This study presents a comprehensive review and experimental evaluation of the role of cloud computing in supporting scalable and efficient AI workloads. A systematic literature review (2000–2025) was conducted alongside a comparative experimental analysis of three leading cloud platforms—Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure—using standardized machine learning models and datasets. Performance metrics including training time, accuracy, resource utilization, scalability, and cost efficiency were analyzed using one-way ANOVA and post-hoc testing. Results indicate significant differences in training efficiency, with Google Cloud demonstrating the lowest mean training time, followed by AWS and Azure, while model accuracy remained statistically equivalent across platforms. GPU utilization and cost efficiency varied, with preemptible/spot instances reducing costs by up to 70%. Scalability testing showed near-linear performance gains up to 16 GPUs, though Azure exhibited higher variability. Security and compliance capabilities were robust across all platforms. The findings confirm that while model performance is platform-independent, meaningful differences exist in operational efficiency and cost structure. Strategic cloud selection should therefore be guided by workload characteristics, cost considerations, and ecosystem integration rather than accuracy outcomes alone. As AI models continue to scale, the cloud-AI symbiosis will remain foundational to future intelligent systems.
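The statistical comparison described above can be reproduced in outline with scipy and statsmodels: a one-way ANOVA across per-platform training times, followed by Tukey's HSD post-hoc test. The timing values below are synthetic placeholders, not the paper's measurements.

```python
# One-way ANOVA plus Tukey HSD post-hoc test over training times.
# Requires: numpy, scipy, statsmodels. Data below is SYNTHETIC.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(7)
# Hypothetical training times (minutes) for the same model on each platform.
gcp = rng.normal(42, 3, 30)
aws = rng.normal(46, 3, 30)
azure = rng.normal(49, 5, 30)

# One-way ANOVA: is at least one platform mean different?
f_stat, p_value = f_oneway(gcp, aws, azure)
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.4f}")

# Tukey HSD identifies which specific pairs of platforms differ.
times = np.concatenate([gcp, aws, azure])
groups = ["GCP"] * 30 + ["AWS"] * 30 + ["Azure"] * 30
print(pairwise_tukeyhsd(times, groups))
```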
Research article ● Open access

Deep Learning for Image Classification: A Comprehensive Review of Architectures, Methodologies, and Applications

Pages

Abstract
