Comparing Reinforcement Learning Algorithms for a Trip Building Task: a Multi-objective Approach Using Non-Local Information

Henrique U. Gobbi, Guilherme Dytz dos Santos and Ana L. C. Bazzan

Instituto de Informática, Universidade Federal do Rio Grande do Sul (UFRGS)
    Porto Alegre, RS, Brazil
    {hugobbi,gdsantos,bazzan}@inf.ufrgs.br

Abstract

Using reinforcement learning (RL) to support agents in making decisions that involve more than one objective poses challenges. We formulate the problem of multiple agents learning how to travel from A to B as an RL task modeled as a stochastic game, in which we take into account: (i) more than one objective, (ii) non-stationarity, and (iii) communication of local and non-local information among the various actors. We use and compare RL algorithms for a single objective (Q-learning) and for multiple objectives (Pareto Q-learning), with and without non-local communication. We evaluate these methods in a scenario in which hundreds of agents must learn how to travel from their origins to their destinations, aiming to minimize both their travel times and the carbon monoxide their vehicles emit. Results show that the use of non-local communication reduces both travel time and emissions.
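To make the single-objective baseline concrete, the following is a minimal, self-contained sketch of tabular Q-learning applied to route choice on a toy network. It is a hypothetical illustration only, not the paper's implementation: the graph, parameter values, and function names are assumptions, and it omits the multi-agent stochastic game, the emission objective, and the non-local communication studied in the article.

```python
import random

# Toy road network (hypothetical): node -> {successor node: travel time}.
GRAPH = {
    "A": {"B": 4.0, "C": 2.0},
    "B": {"D": 5.0},
    "C": {"B": 1.0, "D": 8.0},
    "D": {},  # destination (no outgoing edges)
}

def q_learning(graph, origin, dest, episodes=2000,
               alpha=0.1, gamma=1.0, eps=0.2, seed=0):
    """Learn Q-values for choosing the next edge; reward = -travel time."""
    q = {s: {a: 0.0 for a in nbrs} for s, nbrs in graph.items()}
    rng = random.Random(seed)
    for _ in range(episodes):
        s = origin
        while s != dest:
            actions = list(graph[s])
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=q[s].get)
            s_next = a                      # action = move to chosen successor
            r = -graph[s][a]                # minimizing travel time
            future = max(q[s_next].values(), default=0.0)
            q[s][a] += alpha * (r + gamma * future - q[s][a])
            s = s_next
    return q

def greedy_route(q, origin, dest):
    """Follow the highest-valued action from origin to destination."""
    route, s = [origin], origin
    while s != dest:
        s = max(q[s], key=q[s].get)
        route.append(s)
    return route

q = q_learning(GRAPH, "A", "D")
print(greedy_route(q, "A", "D"))
```

On this toy graph the learned greedy route is the minimum-travel-time path (A, C, B, D, total cost 8). Pareto Q-learning generalizes this update by keeping sets of non-dominated Q-vectors per state-action pair, one component per objective, instead of a single scalar.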

Key words

reinforcement learning, multi-agent systems, multi-objective reinforcement learning, route choice

Digital Object Identifier (DOI)

https://doi.org/10.2298/CSIS221210072G

Publication information

Volume 21, Issue 1 (January 2024)
Year of Publication: 2024
ISSN: 2406-1018 (Online)
Publisher: ComSIS Consortium

Full text

Available in PDF (full-text download)

How to cite

Gobbi, H. U., Santos, G. D. d., Bazzan, A. L. C.: Comparing Reinforcement Learning Algorithms for a Trip Building Task: a Multi-objective Approach Using Non-Local Information. Computer Science and Information Systems, Vol. 21, No. 1, 291–308. (2024), https://doi.org/10.2298/CSIS221210072G