Cutting-edge large language models (LLMs) can play a crucial role in autonomous navigation by providing efficient high-level planning. However, LLMs typically require powerful computers to operate, and relying on the cloud raises security concerns and depends on a connection that can be difficult to maintain. To address this issue, we propose a metareasoning approach for edge-cloud collaborative LLM planning that enables efficient autonomous navigation. The proposed approach allows the system to switch seamlessly between cloud and edge devices so that the mission can be completed even when the connection is lost or the robot enters a GPS-denied environment. Moreover, we deploy state-of-the-art LLMs on resource-constrained systems, the NVIDIA Jetson Orin Nano 8GB integrated with the ROSMASTER X3, where these models have demonstrated exceptional utility for dynamic planning in multi-room and maze environments. A comprehensive profiling of five TinyLLM models was performed. The profiling results show that although smaller models with lower power consumption were available, their accuracy was insufficient for our application requirements. As a result, LLaMA2-7B was selected as the edge LLM for its optimal balance of performance and accuracy. The experimental results show that under weak signal conditions (< −50 dB), where the cloud-based implementation consumes more energy than the on-board LLM, the metareasoning approach reduces energy consumption by up to 4x. Moreover, with delays of 10-20 seconds, the cloud implementation becomes impractical for real-time applications in weak-signal environments. These findings underscore the need for metareasoning, which adapts to signal strength to optimize both energy consumption and response time.
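The core of the switching behavior described above can be sketched as a simple signal-aware policy. The −50 dB threshold comes from the experiments reported here; the function name, the reachability flag, and the absence of hysteresis are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a signal-aware metareasoning policy (illustrative only).
# The -50 dB threshold is taken from the reported experiments; everything
# else (names, interface) is a hypothetical simplification.

SIGNAL_THRESHOLD_DB = -50  # below this, cloud round-trips become impractical


def select_planner(signal_db: float, cloud_reachable: bool) -> str:
    """Return which LLM should handle the next planning query."""
    if cloud_reachable and signal_db >= SIGNAL_THRESHOLD_DB:
        return "cloud"  # strong link: use the larger cloud-hosted LLM
    return "edge"       # weak or lost link: fall back to the on-board LLM


if __name__ == "__main__":
    print(select_planner(-40, True))    # strong signal, cloud available
    print(select_planner(-65, True))    # weak signal, stay on edge
    print(select_planner(-40, False))   # connection lost entirely
```

In a full system this decision would also weigh the energy and latency costs measured during profiling, but the sketch captures the key idea: the robot never stalls on a lost or degraded link, because the edge LLM is always a valid fallback.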
A real-world implementation of the proposed approach on the ROSMASTER X3 with an NVIDIA Jetson Orin Nano board can be found in this video, which shows the mission being completed despite losing the connection to the cloud-based LLM.