Fairness in machine learning: an empirical experiment about protected features and their implications
dc.contributor.advisor | Barone, Dante Augusto Couto | pt_BR |
dc.contributor.author | Guntzel, Maurício Holler | pt_BR |
dc.date.accessioned | 2022-07-22T04:53:48Z | pt_BR |
dc.date.issued | 2022 | pt_BR |
dc.identifier.uri | http://hdl.handle.net/10183/245286 | pt_BR |
dc.description.abstract | Increasingly, machine learning models perform high-stakes decisions in almost any domain. These models, and the datasets they are trained on, may be prone to exacerbating social disparities due to unmitigated fairness issues. For example, features representing different social groups, known as protected features (as defined by the Equality Act of 2010), correspond to one of these fairness issues. This work explores the impact of protected features on predictive models’ outcomes, performance, and fairness. We propose a knowledge-driven pipeline for detecting protected features and mitigating their effect: protected features are identified from metadata and removed during the training phase of the models, but are then merged back into the models’ output to preserve the original dataset information and enhance explainability. We empirically study four machine learning models (i.e., KNN, Decision Tree, Neural Network, and Naive Bayes) on datasets for fairness benchmarking (i.e., COMPAS, Adult Census Income, and Credit Card Default). The observed results suggest that the proposed pipeline preserves the models’ performance and facilitates the extraction of information from the models for use in fairness metrics. | en |
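As a rough sketch of the pipeline the abstract describes (illustrative only: the toy data, column names, PROTECTED list, and the scikit-learn classifier are assumptions, not the thesis implementation), protected features are read from metadata, dropped before training, and merged back into the prediction output so group-fairness metrics can still be computed:

    # Minimal sketch of the knowledge-driven pipeline from the abstract.
    # Dataset, column names, and model choice are hypothetical.
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Toy dataset with a binary label and two protected features.
    df = pd.DataFrame({
        "age":    [25, 47, 35, 52, 23, 41],
        "sex":    ["F", "M", "F", "M", "M", "F"],   # protected
        "race":   ["A", "B", "A", "A", "B", "B"],   # protected
        "income": [30, 80, 55, 90, 25, 60],
        "label":  [0, 1, 1, 1, 0, 1],
    })

    # In the pipeline these would come from dataset metadata (e.g., the
    # Equality Act 2010 list); hard-coded here for illustration.
    PROTECTED = ["sex", "race"]

    X, y = df.drop(columns=["label"]), df["label"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, random_state=0)

    # Train without the protected features (fairness through unawareness).
    model = DecisionTreeClassifier(random_state=0)
    model.fit(X_train.drop(columns=PROTECTED), y_train)

    # Merge protected features back into the output, preserving the
    # original dataset information for explainability and metrics.
    out = X_test[PROTECTED].copy()
    out["prediction"] = model.predict(X_test.drop(columns=PROTECTED))

    # Per-group positive-outcome rate, the ingredient of group-fairness
    # metrics such as statistical parity.
    print(out.groupby("sex")["prediction"].mean())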
dc.format.mimetype | application/pdf | pt_BR |
dc.language.iso | eng | pt_BR |
dc.rights | Open Access | en |
dc.subject | Machine learning | pt_BR |
dc.subject | Pipeline | en |
dc.subject | fairness | en |
dc.subject | machine learning | en |
dc.subject | Big data | pt_BR |
dc.subject | positive outcome | en |
dc.subject | group fairness | en |
dc.subject | Fairness Through Unawareness | en |
dc.title | Fairness in machine learning: an empirical experiment about protected features and their implications | pt_BR |
dc.type | Undergraduate thesis | pt_BR |
dc.contributor.advisor-co | Côrtes, Eduardo Gabriel | pt_BR |
dc.identifier.nrb | 001146016 | pt_BR |
dc.degree.grantor | Universidade Federal do Rio Grande do Sul | pt_BR |
dc.degree.department | Instituto de Informática | pt_BR |
dc.degree.local | Porto Alegre, BR-RS | pt_BR |
dc.degree.date | 2022 | pt_BR |
dc.degree.graduation | Computer Science: Emphasis in Computer Science: Bachelor’s | pt_BR |
dc.degree.level | undergraduate | pt_BR |
This item is licensed under the Creative Commons License