A Cooperative Heterogeneous Multi-Agent System Leveraging Deep Reinforcement Learning
DOI:
https://doi.org/10.59543/kadsa.v1i.13931

Keywords:
Multi-Agents; Deep Reinforcement Learning; Heterogeneous-Agents

Abstract
Agents play a pivotal role in multi-agent environments. Previous studies have primarily focused on collaboration among homogeneous agents, and the approaches used in those studies cannot be directly applied to heterogeneous agents. A model that supports collaboration among heterogeneous agents is needed, because real-world systems consist of agents with diverse shapes, functions, and tasks. This study proposes a framework that uses visual and vector observations to facilitate collaboration among heterogeneous agents. A seaport environment is simulated for this study. Visual observations are employed to interpret signals from other agents, and vector observations are used to calculate distances between agents in the environment. Two types of agents are used: a crane agent, which lifts and lowers heavy materials between the ship and a designated location, and a lifter agent, which transports bulky materials from the seaport to the storeroom. These agents are heterogeneous due to differences in size, capabilities, and tasks. A separate neural network is trained for each agent. The results show that the proposed model outperforms the base model in terms of mean reward, extrinsic reward, episode length, and value estimation. In terms of value loss and policy loss, the proposed model performs similarly to the base model.
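The architecture described above (one independent policy per heterogeneous agent type, each fusing a visual and a vector observation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the observation shapes, hidden size, and action sets are hypothetical, and untrained random weights stand in for the parameters that deep reinforcement learning would learn.

```python
import numpy as np

rng = np.random.default_rng(0)

class AgentPolicy:
    """Toy policy that fuses a visual and a vector observation.

    Hypothetical shapes: a 16x16 grayscale visual input and an 8-d
    vector (e.g. distances to other agents) are assumed here, since
    the abstract does not specify observation sizes.
    """
    def __init__(self, n_actions, vis_shape=(16, 16), vec_dim=8, hidden=32):
        vis_dim = vis_shape[0] * vis_shape[1]
        # Random (untrained) weights stand in for learned parameters.
        self.w_vis = rng.normal(0, 0.1, (vis_dim, hidden))
        self.w_vec = rng.normal(0, 0.1, (vec_dim, hidden))
        self.w_out = rng.normal(0, 0.1, (hidden, n_actions))

    def act(self, visual_obs, vector_obs):
        # Encode each modality, fuse by addition, then score actions.
        h = np.tanh(visual_obs.ravel() @ self.w_vis + vector_obs @ self.w_vec)
        logits = h @ self.w_out
        return int(np.argmax(logits))

# One separate network per heterogeneous agent type, mirroring the
# crane/lifter split described in the abstract (action sets are made up).
crane = AgentPolicy(n_actions=4)    # e.g. lift / lower / hold / release
lifter = AgentPolicy(n_actions=3)   # e.g. forward / turn / drop

vis = rng.random((16, 16))
vec = rng.random(8)
print(crane.act(vis, vec), lifter.act(vis, vec))
```

Because the two networks share no parameters, each agent type can be trained on its own reward signal and task, which is the property that makes the heterogeneous setting differ from the usual shared-policy homogeneous setup.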
