people
GIFTS Lab (Geospatial Intelligence for Future Technology and Sustainability Lab) & members
Dr. Meiliu Wu (link to profile)
My research focuses on harnessing Geospatial Data Science and GeoAI to address critical challenges in AI debiasing, human mobility, segregation, urban analytics, and environmental and climate studies. My work is driven by a strong commitment to fostering social-environmental sustainability, equity, and justice through geospatial innovation. I am particularly interested in the rapid advancement of foundation models (e.g., ChatGPT) and embeddings (e.g., AlphaEarth Foundations), and in how these advances can transform geospatial applications.
My long-term research goal is to pioneer the development of Geo-Foundation Models (GeoFM), creating a multimodal learning framework that leverages geospatial intelligence from diverse geospatial data sources (e.g., text, images, audio, and video), as well as developing a GeoAI debiasing framework. This work has the potential to significantly enhance geospatial analysis and decision-making, ultimately contributing to more informed and equitable societal outcomes.
Fengjiao Li (PhD student) (link to profile)
My research lies at the intersection of Geospatial Artificial Intelligence (GeoAI), spatial statistics, and machine learning, with a particular focus on developing advanced Graph Neural Network (GNN) models for analyzing spatial and dynamic population processes. Drawing on my statistical training, I aim to create interpretable, robust, and scalable methods for understanding how populations move and how diseases spread over space and time.
My current doctoral project focuses on embedding spatial structures, population mobility, and temporal dynamics into graph neural networks to support health risk analysis. I work on integrating statistical reasoning with AI-driven models to improve epidemic modeling, health risk prediction, and intervention planning.
By combining insights from graph neural networks, statistics, epidemiology, and geospatial data science, I strive to develop data-driven tools that are not only technically rigorous but also practically useful for public health decision-making.
Zhimeng He (PhD student) (link to profile)
I am a PhD student at the School of Geographical & Earth Sciences, University of Glasgow, funded by the China Scholarship Council (CSC). My research primarily focuses on developing advanced deep learning architectures to enhance the extraction of building rooftops from high-resolution orthophotos. I am particularly interested in integrating Transformer architectures, involution, and E-Unet to address the challenge of extracting precise building contours in complex urban environments.
In addition to 2D image-based rooftop extraction, I also work on 3D point cloud classification and refinement for urban scene understanding. My current research involves experimenting with these novel models to improve the accuracy and geometric fidelity of building footprint extraction, particularly on large-scale datasets such as the Waterlooo Building Dataset and the WHU and Massachusetts Buildings datasets.
Yuwei (Vivi) Cai (PhD student) (link to profile)
My research lies at the intersection of remote sensing and deep learning, with a particular focus on developing super-resolution (SR) techniques for Earth observation imagery. I am interested in both improving image quality through SR and understanding its impact on downstream geospatial applications such as building footprint extraction and land cover detection.
My current doctoral project aims to build a comprehensive framework for applying SR in remote sensing, alongside designing a new evaluation system that goes beyond traditional image quality metrics to assess how SR influences practical geospatial tasks. This work seeks to bridge the gap between algorithmic advances in SR and their real-world utility, ultimately supporting more accurate and reliable geospatial analysis.
Ayush Dabra (PhD student) (link to profile)
I am a PhD student in the Geospatial Data Science Group at the University of Glasgow. My research focuses on developing multimodal generative AI models for understanding urban spaces.
I earned a BS-MS dual degree in Data Science and Engineering from the Indian Institute of Science Education and Research (IISER), Bhopal, India. My past research involved using very-high-resolution satellite images, multi-spectral UAV data, street-view imagery, and photogrammetry-derived digital surface models to understand dense and complex urban landscapes in Indian cities and to suggest policy recommendations.
Currently, my primary research objective is to address urban planning challenges by developing empirically derived, data-driven, and AI-empowered policy tools. I leverage geospatial multimodal datasets (e.g., geo-tagged images and sounds) and develop a range of deep learning techniques for cross-modality generation. The ultimate aim of my work is to contribute to sustainable urban development, enhancing the resilience of our cities and aligning them more effectively with the evolving needs of our society.
Ting Han (Exchange PhD student) (link to profile)
I am a joint Ph.D. student in Geospatial Artificial Intelligence at the School of Geographical and Earth Sciences, University of Glasgow. I am currently a third-year Ph.D. student at the School of Geospatial Engineering and Science, Sun Yat-sen University, majoring in Cartography and Geographic Information Systems.
My research interests focus on advancing geospatial artificial intelligence by integrating deep learning theories and methods into 3D point cloud processing for remote sensing applications. I aim to develop interpretable and generalizable AI models that can effectively analyze urban morphology, vegetation structure, and built environments from multi-source geospatial data. My work emphasizes the synergy of spatial cognition and data-driven intelligence, promoting sustainable development through urban analytics, environmental monitoring, and human-centered geoinformation services. I am particularly interested in how multi-modal learning and foundation models can enhance geospatial understanding at both local and global scales.
Hanyi Xiong (Exchange Undergraduate student)
I am currently a third-year undergraduate student majoring in Applied AI at the University of Hong Kong. My research is positioned at the intersection of GeoAI and multimodal learning, integrating diverse geospatial modalities (e.g., text, imagery, and geographic coordinates) to extract richer and more holistic geospatial knowledge. I explore how generative AI and contrastive learning can be developed to align multiple perspectives—ranging from street-view to overhead-view imagery—to support spatial understanding and geolocalization tasks. A central idea behind my work is to use geo-location as a semantic bridge for fusing diverse geospatial modalities, enabling robust geospatial representation learning.
My long-term research goal is to develop GeoAI-empowered multimodal foundation models for urban analytics and environmental applications, ultimately fostering geospatial AI systems that are accurate, generalizable, and socially impactful.