Deep reinforcement learning for PID parameter tuning in greenhouse HVAC system energy optimization: A TRNSYS-Python cosimulation approach
Date
2024
Publisher
Elsevier
Abstract
The control of indoor temperature in greenhouses is crucial, as it directly impacts the crop’s thermal comfort and the performance of heating, ventilation, and air-conditioning (HVAC) systems. Conventional feedback controllers, such as on/off control, can make the HVAC system work at full capacity when only half that
capacity is needed. In contrast, the proportional-integral-derivative (PID) controller provides precise control based on its P, I, and D parameters. However, it lacks a formal design procedure for optimizing a specified objective function. Previous studies have applied conventional PID tuning approaches to track room setpoint temperature in residential buildings, data centers, and office buildings, with limited research on greenhouse applications. To address this gap, this study proposes a flexible PID controller whose parameters are optimized by a deep reinforcement learning (DRL) algorithm that tracks the setpoints and energy consumption of a greenhouse planted with tomatoes. This approach differs from the typical method of using the trained RL agent directly in HVAC
controls. Through a custom TRNSYS-Python cosimulation framework, the DRL agent interacts directly and in real time with the greenhouse and its plants. The optimized PID parameters were then established and tested in the simulated environment. The resulting performance, in terms of both energy consumption and the ability to maintain the crop’s comfort temperature, was compared with simulated on/off and manually tuned PID controllers. Relative to the on/off baseline, the proposed optimized PID parameters reduce energy use by 8.81% to 12.99%, whereas the PID parameters manually tuned with the Ziegler-Nichols method reduce it by 7.17%. Additionally, the proposed method deviated from the minimum comfortable temperature by 2.07% to 3.13%, while the manually
tuned PID controller and the on/off controller deviated by 7.27% and 3.27%, respectively. This study serves as a framework for improving the energy efficiency of greenhouse HVAC system operations.
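The core idea of the abstract, a PID controller whose gains are supplied externally (here, by a trained DRL agent) rather than fixed by a manual tuning rule, can be sketched as follows. This is a minimal illustration only: the class and gain values are hypothetical and do not reflect the paper's actual TRNSYS-Python interface or agent.

```python
class PID:
    """Discrete PID controller with externally tunable gains.

    In the study's setup, a DRL agent would propose (kp, ki, kd);
    here the values are purely illustrative.
    """

    def __init__(self, kp, ki, kd, dt=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        # Classic PID law: u = Kp*e + Ki*∫e dt + Kd*de/dt
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)


# Gains a trained agent might output (hypothetical values):
pid = PID(kp=2.0, ki=0.1, kd=0.5)
# One control step toward a 22 °C setpoint from an 18 °C reading:
u = pid.step(setpoint=22.0, measurement=18.0)
```

At each simulation timestep the cosimulation loop would pass the greenhouse temperature from TRNSYS to Python, compute the control signal `u`, and return it to the HVAC model; the agent's reward would then combine setpoint-tracking error and energy use, as described above.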
Keywords
TRNSYS, Python, Optimization, Deep reinforcement learning, Cosimulation