Fortifying federated learning: security against model poisoning attacks
This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science, 2024.
Main authors: | Anan, Fabiha; Mamun, Kazi Shahed; Kamal, Md Sifat; Ahsan, Nizbath |
---|---|
Other authors: | Hossain, Muhammad Iqbal |
Format: | Thesis |
Language: | English |
Published: | Brac University, 2024 |
Subjects: | Model poisoning; Federated learning; Machine learning; Deep learning; Computer networks--Security measures |
Online access: | http://hdl.handle.net/10361/22774 |
id | 10361-22774 |
---|---|
record_format | dspace |
spelling |
10361-22774 (last updated 2024-05-08T21:02:10Z)

Fortifying federated learning: security against model poisoning attacks

Authors: Anan, Fabiha; Mamun, Kazi Shahed; Kamal, Md Sifat; Ahsan, Nizbath; Hossain, Muhammad Iqbal; Reza, Md. Tanzim. Department of Computer Science and Engineering, Brac University.

Subjects: Model poisoning; Federated learning; Machine learning; Deep learning; Computer networks--Security measures.

This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science, 2024. Cataloged from PDF version of thesis. Includes bibliographical references (pages 39-41).

Abstract: Advances in distributed machine learning have the potential to transform future networking systems and communications. The introduction of Federated Learning (FL) has enabled an effective framework for machine learning, but its decentralized nature exposes it to poisoning attacks. Model poisoning attacks are among these and significantly degrade FL's performance. In a model poisoning attack, an adversary replaces a functional model with a poisoned one by injecting malicious updates during training. Such an attack typically shifts the model's decision boundary in some way, making the model's outputs unpredictable. Federated learning offers a mechanism to unlock data for new AI applications by training AI models without exposing anyone's confidential data to others. Many algorithms are currently used to defend against model poisoning in federated learning. Some are efficient, but most have shortcomings that leave the federated learning system insufficiently secured. In this study, we therefore highlight the main issues with these algorithms and propose a defense mechanism capable of defending against model poisoning in federated learning.

Degree: B.Sc. in Computer Science. Accessioned/available: 2024-05-08T05:27:43Z. Copyright: ©2024. Issued: 2024-01. Type: Thesis. Student IDs: 20101085, 20301471, 20101231, 23341119. URI: http://hdl.handle.net/10361/22774. Language: en. Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. 49 pages. application/pdf. Brac University |
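The abstract above describes model poisoning as replacing a functional model by injecting poisoned updates during training, which shifts the global model's decision boundary. As a purely illustrative sketch (not the thesis's actual method or defense), the following assumes a plain FedAvg-style coordinate-wise average and shows how a single boosted malicious update can dominate it; all function names, values, and the `boost` factor are hypothetical:

```python
# Toy sketch of model poisoning against FedAvg-style aggregation.
# Everything here is an illustrative assumption, not the thesis's method.

def fedavg(updates):
    """Average client model updates coordinate-wise (plain FedAvg)."""
    n = len(updates)
    dim = len(updates[0])
    return [sum(u[i] for u in updates) / n for i in range(dim)]

# Honest clients send small, mutually consistent updates.
honest = [[0.1, -0.2], [0.12, -0.18], [0.09, -0.21]]

# A poisoning client scales a crafted update so it dominates the mean.
boost = 10.0
malicious = [boost * 1.0, boost * 1.0]

clean_avg = fedavg(honest)
poisoned_avg = fedavg(honest + [malicious])

# The poisoned average is pulled far from the honest consensus,
# shifting the global model away from what honest clients trained.
```

Defenses surveyed in work like this typically replace the plain mean with a robust aggregate (e.g. coordinate-wise median or trimmed mean), which bounds how far one client can drag the result.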
institution |
Brac University |
collection |
Institutional Repository |
language |
English |
topic |
Model poisoning; Federated learning; Machine learning; Deep learning; Computer networks--Security measures |
spellingShingle |
Model poisoning; Federated learning; Machine learning; Deep learning; Computer networks--Security measures; Anan, Fabiha; Mamun, Kazi Shahed; Kamal, Md Sifat; Ahsan, Nizbath; Fortifying federated learning: security against model poisoning attacks |
description |
This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science, 2024. |
author2 |
Hossain, Muhammad Iqbal |
author_facet |
Hossain, Muhammad Iqbal; Anan, Fabiha; Mamun, Kazi Shahed; Kamal, Md Sifat; Ahsan, Nizbath |
format |
Thesis |
author |
Anan, Fabiha; Mamun, Kazi Shahed; Kamal, Md Sifat; Ahsan, Nizbath |
author_sort |
Anan, Fabiha |
title |
Fortifying federated learning: security against model poisoning attacks |
title_short |
Fortifying federated learning: security against model poisoning attacks |
title_full |
Fortifying federated learning: security against model poisoning attacks |
title_fullStr |
Fortifying federated learning: security against model poisoning attacks |
title_full_unstemmed |
Fortifying federated learning: security against model poisoning attacks |
title_sort |
fortifying federated learning: security against model poisoning attacks |
publisher |
Brac University |
publishDate |
2024 |
url |
http://hdl.handle.net/10361/22774 |
work_keys_str_mv |
AT ananfabiha fortifyingfederatedlearningsecurityagainstmodelpoisoningattacks AT mamunkazishahed fortifyingfederatedlearningsecurityagainstmodelpoisoningattacks AT kamalmdsifat fortifyingfederatedlearningsecurityagainstmodelpoisoningattacks AT ahsannizbath fortifyingfederatedlearningsecurityagainstmodelpoisoningattacks |
_version_ |
1814308268927877120 |