Simple item record

dc.contributor.advisor      Bazzan, Ana Lucia Cetertich   pt_BR
dc.contributor.author       Anquise, Candy Alexandra Huanca   pt_BR
dc.date.accessioned         2021-11-17T04:24:22Z   pt_BR
dc.date.issued              2021   pt_BR
dc.identifier.uri           http://hdl.handle.net/10183/231836   pt_BR
dc.description.abstract     Multi-objective decision-making entails planning, based on a model, to find the best policy for the problem at hand. When this model is unknown, learning through interaction provides the means to act in the environment. Multi-objective decision-making in a multi-agent system poses many unsolved challenges. Among them, multiple objectives and the non-stationarity caused by simultaneously learning agents have so far been addressed separately. In this work, algorithms that address both issues by combining strengths of different methods are proposed and applied to a route choice scenario formulated as a multi-armed bandit problem; the focus is therefore on action selection. In the route choice problem, drivers must select a route while aiming to minimize both their travel time and toll. The proposed algorithms combine key aspects of works that tackle a single issue, either non-stationarity or multiple objectives, making it possible to handle both problems together. The methods drawn from these works are a set of Upper-Confidence Bound (UCB) algorithms and the Pareto Q-learning (PQL) algorithm. The UCB-based algorithms are Pareto UCB1 (PUCB1), discounted UCB (DUCB), and sliding-window UCB (SWUCB). PUCB1 deals with multiple objectives, while DUCB and SWUCB address non-stationarity in different ways. PUCB1 was extended to include characteristics of DUCB and SWUCB. As PQL is a state-based method that handles more than one objective, it was modified to tackle a problem focused on action selection. Results from a comparison in a route choice scenario show that the proposed algorithms cope with both non-stationarity and multiple objectives, and that using a discount factor is the best-performing approach. Advantages, limitations, and differences of these algorithms are discussed.   en
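
The discount-factor idea that the abstract's results favor is compact enough to sketch. The following is a minimal, single-objective illustration in Python; the class name, default parameter values, and the simplified exploration bonus are assumptions of this sketch, not the thesis implementation, which additionally tracks a reward vector per arm and applies Pareto dominance across objectives.

    import math

    class DiscountedUCB:
        """Discounted UCB for non-stationary bandits: past rewards are
        geometrically down-weighted so recent feedback dominates."""

        def __init__(self, n_arms, gamma=0.99, xi=0.6):
            self.gamma = gamma              # discount factor in (0, 1)
            self.xi = xi                    # exploration coefficient
            self.counts = [0.0] * n_arms    # discounted pull counts
            self.sums = [0.0] * n_arms      # discounted reward sums

        def select(self):
            # Try every arm once before applying the UCB rule.
            for arm, n in enumerate(self.counts):
                if n == 0.0:
                    return arm
            total = sum(self.counts)
            best_arm, best_value = 0, -math.inf
            for arm, (n, s) in enumerate(zip(self.counts, self.sums)):
                # Discounted mean plus an exploration bonus that grows
                # for arms whose (discounted) evidence has faded.
                value = s / n + math.sqrt(self.xi * math.log(total) / n)
                if value > best_value:
                    best_arm, best_value = arm, value
            return best_arm

        def update(self, arm, reward):
            # Decay all statistics, then credit the arm just pulled.
            self.counts = [self.gamma * c for c in self.counts]
            self.sums = [self.gamma * s for s in self.sums]
            self.counts[arm] += 1.0
            self.sums[arm] += reward

    # Hypothetical route choice usage: arms are routes, and the reward is
    # a scalarized cost, e.g. -(travel_time + toll) of the chosen route.
    bandit = DiscountedUCB(n_arms=3)
    route = bandit.select()
    bandit.update(route, reward=-12.5)

In the multi-objective setting of the thesis, travel time and toll would instead be kept as separate reward components, with selection made among the Pareto-optimal arms rather than by a single scalar index.
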
dc.format.mimetype          application/pdf   pt_BR
dc.language.iso             eng   pt_BR
dc.rights                   Open Access   en
dc.subject                  Multi-objective   en
dc.subject                  Sistemas multiagentes [Multi-agent systems]   pt_BR
dc.subject                  Aprendizagem [Learning]   pt_BR
dc.subject                  Decision-making   en
dc.subject                  Multi-objective route choice   en
dc.subject                  Reinforcement learning   en
dc.title                    Multi-objective reinforcement learning methods for action selection: dealing with multiple objectives and non-stationarity   pt_BR
dc.type                     Dissertação [Master's thesis]   pt_BR
dc.identifier.nrb           001133526   pt_BR
dc.degree.grantor           Universidade Federal do Rio Grande do Sul   pt_BR
dc.degree.department        Instituto de Informática   pt_BR
dc.degree.program           Programa de Pós-Graduação em Computação   pt_BR
dc.degree.local             Porto Alegre, BR-RS   pt_BR
dc.degree.date              2021   pt_BR
dc.degree.level             mestrado [master's]   pt_BR


This item is licensed under a Creative Commons License
