CausalPrompt: Enhancing LLMs with Weakly Supervised Causal Reasoning for Robust Performance in Non-Language Tasks (Papers Track)

Tung-Wei Lin (University of California, Berkeley); Vanshaj Khattar (Virginia Tech); Yuxuan Huang (University College London); Junho Hong (University of Michigan); Ruoxi Jia (Virginia Tech); Chen-Ching Liu (Virginia Tech); Alberto L Sangiovanni-Vincentelli (University of California, Berkeley); Ming Jin (Virginia Tech)

Keywords: Natural Language Processing, Buildings, Power & Energy

Abstract

In confronting the pressing issue of climate change, we introduce "CausalPrompt", a prompting strategy that adapts large language models (LLMs) to classification and regression tasks through weakly supervised causal reasoning. We study data shifts in energy systems, which often arise as sensor networks evolve and lead to discrepancies between training and test distributions or inconsistencies in the available features. By embedding domain-specific causal reasoning in the finetuning process, CausalPrompt improves the adaptability and resilience of energy-system models to these shifts. We show that CausalPrompt improves predictions under feature shifts in electricity demand forecasting, solar power generation forecasting, and cybersecurity for energy infrastructures. These results highlight the potential of CausalPrompt for reliable, accurate prediction in energy management and cybersecurity, contributing to climate change mitigation efforts.
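
The abstract does not specify how domain-specific causal reasoning is injected into the prompt; the sketch below is one plausible illustration, not the authors' released code. It serializes tabular features for an energy regression task and prefixes them with causal statements so the model is asked to reason about cause-effect links before predicting. The feature names, causal statements, and target wording are all illustrative assumptions.

```python
# Hypothetical sketch of a causal-reasoning-augmented prompt for tabular regression.
# None of the names below come from the paper; they are assumptions for illustration.

from typing import Dict

CAUSAL_CONTEXT = (
    "Domain knowledge (causal relations):\n"
    "- Higher outdoor temperature causes higher cooling demand.\n"
    "- Weekends cause lower commercial electricity demand.\n"
    "- Solar irradiance drives PV generation, not the other way around.\n"
)

def build_causal_prompt(features: Dict[str, float], target: str) -> str:
    """Serialize tabular features and prepend domain-specific causal reasoning."""
    feature_text = "\n".join(f"- {name}: {value}" for name, value in features.items())
    return (
        f"{CAUSAL_CONTEXT}\n"
        f"Observed features:\n{feature_text}\n\n"
        f"Using only cause-effect relations consistent with the domain knowledge "
        f"above, reason step by step and then predict the {target} as a single number."
    )

if __name__ == "__main__":
    sample = {"outdoor_temp_C": 31.5, "hour_of_day": 15, "is_weekend": 0}
    prompt = build_causal_prompt(sample, target="electricity demand (kW)")
    print(prompt)  # In practice this prompt would be sent to a finetuned LLM.
```

Because the causal statements refer to stable mechanisms rather than to a fixed feature set, a prompt assembled this way can, in principle, tolerate added or missing sensors better than a model trained on a rigid feature vector, which is the kind of feature shift the abstract targets.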