Mitigation of hallucination and interpretations of self attention of Mistral 7B AI to analyze and visualize context understanding ability of large language models
This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science, 2024.
Main Authors: | Taki, S.M. Abrar Mustakim; Kar, Showmick; Niloy, Soumik Deb; Rakib, Mazharul Islam; Biswas, Abdullah Al Nahid |
---|---|
Other Authors: | Sadeque, Farig Yousuf |
Format: | Thesis |
Language: | English |
Published: | Brac University, 2024 |
Subjects: | Mistral 7B AI; Large language model; Self attention; Black-BoxNLP; Neural networks (Computer science); Artificial intelligence |
Online Access: | http://hdl.handle.net/10361/22762 |
id |
10361-22762 |
---|---|
record_format |
dspace |
spelling |
10361-22762 2024-05-07T21:04:57Z Mitigation of hallucination and interpretations of self attention of Mistral 7B AI to analyze and visualize context understanding ability of large language models Taki, S.M. Abrar Mustakim Kar, Showmick Niloy, Soumik Deb Rakib, Mazharul Islam Biswas, Abdullah Al Nahid Sadeque, Farig Yousuf Department of Computer Science and Engineering, Brac University Mistral 7B AI Large language model Self attention Black-BoxNLP Neural networks (Computer science) Artificial intelligence This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science, 2024. Cataloged from PDF version of thesis. Includes bibliographical references (pages 78-83). In recent years, Large Language Models (LLMs) have shown excellent performance in a variety of Natural Language Processing tasks. However, they often produce hallucinated content: content that is seemingly correct and linguistically coherent, but factually incorrect. Since researchers have started working on LLM hallucinations only recently, the problem of mitigating hallucination and understanding which factors play a role in correcting hallucinated content is relatively new. In this paper, we modified a multi-step pipeline called 'Chain of Verification' that reduces hallucination in Large Language Models on its own, without feeding in external resources. This method is particularly useful for reasoning and reading-comprehension language tasks. In addition, we extracted the decoder layers of the large language model Mistral 7B to interpret and analyze how the correction was done under the hood. A custom attention-weight pruning method was used to prune the defective layers; after pruning, the model passed 3 of 4 test cases and produced correct output results. S.M. Abrar Mustakim Taki Showmick Kar Soumik Deb Niloy Mazharul Islam Rakib Abdullah Al Nahid Biswas B.Sc. in Computer Science 2024-05-07T08:58:35Z 2024-05-07T08:58:35Z ©2024 2024-01 Thesis ID: 20301125 ID: 20301177 ID: 20301207 ID: 20101408 ID: 20301024 http://hdl.handle.net/10361/22762 en Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. 84 pages application/pdf Brac University |
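The 'Chain of Verification' pipeline mentioned in the abstract can be sketched as a four-stage loop: draft an answer, plan verification questions, answer them independently, then revise. The sketch below is a minimal illustration of that loop against a generic `llm` callable; the prompts and the toy `toy_llm` stand-in are assumptions for demonstration, not the thesis's actual code or prompts.

```python
def toy_llm(prompt: str) -> str:
    """Deterministic stand-in for a real model call (illustrative only)."""
    if "verification questions" in prompt:
        return "Q1: Is the claim supported?\nQ2: Are the facts consistent?"
    if prompt.startswith("Answer briefly:"):
        return "Yes."
    if "Revise the draft" in prompt:
        return "Revised, verified answer."
    return "Initial draft answer."

def chain_of_verification(llm, question: str) -> str:
    # 1. Baseline draft response from the model.
    draft = llm(f"Answer the question: {question}")
    # 2. Plan verification questions about the draft's claims.
    plan = llm(f"List verification questions for this draft:\n{draft}\n"
               "verification questions:")
    questions = [q for q in plan.splitlines() if q.strip()]
    # 3. Answer each verification question independently of the draft,
    #    so errors in the draft do not bias the checks.
    answers = [llm(f"Answer briefly: {q}") for q in questions]
    # 4. Revise the draft in light of the verification answers.
    qa = "\n".join(f"{q} -> {a}" for q, a in zip(questions, answers))
    return llm(f"Given these checks:\n{qa}\nRevise the draft:")

print(chain_of_verification(toy_llm, "Who wrote Hamlet?"))
# → Revised, verified answer.
```

Because the checks are generated and answered by the same model, no external knowledge source is fed in, matching the abstract's "without feeding in external resources" claim.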
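The abstract's "custom attention weight pruning" can be illustrated on a toy attention map. The sketch below assumes a simple criterion for a "defective" head — a near-uniform attention distribution, detected by high mean row entropy — and zeroes such heads out; both the criterion and the threshold are assumptions for demonstration, not the thesis's exact method.

```python
import numpy as np

def head_entropy(attn: np.ndarray) -> np.ndarray:
    """Mean row entropy per head. attn: (heads, seq, seq), rows sum to 1."""
    eps = 1e-12  # avoid log(0)
    ent = -(attn * np.log(attn + eps)).sum(axis=-1)  # (heads, seq)
    return ent.mean(axis=-1)                         # (heads,)

def prune_heads(attn: np.ndarray, threshold: float) -> np.ndarray:
    """Zero out heads whose mean row entropy exceeds `threshold`."""
    pruned = attn.copy()
    pruned[head_entropy(attn) > threshold] = 0.0
    return pruned

# Two heads over a length-3 sequence: one sharp (one-hot rows, entropy ~0)
# and one near-uniform (entropy ln 3 ~ 1.10).
sharp = np.eye(3)
uniform = np.full((3, 3), 1.0 / 3)
attn = np.stack([sharp, uniform])

pruned = prune_heads(attn, threshold=1.0)
print(pruned[1].sum())  # → 0.0 (the uniform head is zeroed; the sharp head survives)
```

On a real model, the same masking would be applied to the per-layer attention tensors (e.g. those returned by an attention-outputting forward pass) rather than to a hand-built array.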
institution |
Brac University |
collection |
Institutional Repository |
language |
English |
topic |
Mistral 7B AI Large language model Self attention Black-BoxNLP Neural networks (Computer science) Artificial intelligence |
spellingShingle |
Mistral 7B AI Large language model Self attention Black-BoxNLP Neural networks (Computer science) Artificial intelligence Taki, S.M. Abrar Mustakim Kar, Showmick Niloy, Soumik Deb Rakib, Mazharul Islam Biswas, Abdullah Al Nahid Mitigation of hallucination and interpretations of self attention of Mistral 7B AI to analyze and visualize context understanding ability of large language models |
description |
This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science, 2024. |
author2 |
Sadeque, Farig Yousuf |
author_facet |
Sadeque, Farig Yousuf Taki, S.M. Abrar Mustakim Kar, Showmick Niloy, Soumik Deb Rakib, Mazharul Islam Biswas, Abdullah Al Nahid |
format |
Thesis |
author |
Taki, S.M. Abrar Mustakim Kar, Showmick Niloy, Soumik Deb Rakib, Mazharul Islam Biswas, Abdullah Al Nahid |
author_sort |
Taki, S.M. Abrar Mustakim |
title |
Mitigation of hallucination and interpretations of self attention of Mistral 7B AI to analyze and visualize context understanding ability of large language models |
title_short |
Mitigation of hallucination and interpretations of self attention of Mistral 7B AI to analyze and visualize context understanding ability of large language models |
title_full |
Mitigation of hallucination and interpretations of self attention of Mistral 7B AI to analyze and visualize context understanding ability of large language models |
title_fullStr |
Mitigation of hallucination and interpretations of self attention of Mistral 7B AI to analyze and visualize context understanding ability of large language models |
title_full_unstemmed |
Mitigation of hallucination and interpretations of self attention of Mistral 7B AI to analyze and visualize context understanding ability of large language models |
title_sort |
mitigation of hallucination and interpretations of self attention of mistral 7b ai to analyze and visualize context understanding ability of large language models |
publisher |
Brac University |
publishDate |
2024 |
url |
http://hdl.handle.net/10361/22762 |
work_keys_str_mv |
AT takismabrarmustakim mitigationofhallucinationandinterpretationsofselfattentionofmistral7baitoanalyzeandvisualizecontextunderstandingabilityoflargelanguagemodels AT karshowmick mitigationofhallucinationandinterpretationsofselfattentionofmistral7baitoanalyzeandvisualizecontextunderstandingabilityoflargelanguagemodels AT niloysoumikdeb mitigationofhallucinationandinterpretationsofselfattentionofmistral7baitoanalyzeandvisualizecontextunderstandingabilityoflargelanguagemodels AT rakibmazharulislam mitigationofhallucinationandinterpretationsofselfattentionofmistral7baitoanalyzeandvisualizecontextunderstandingabilityoflargelanguagemodels AT biswasabdullahalnahid mitigationofhallucinationandinterpretationsofselfattentionofmistral7baitoanalyzeandvisualizecontextunderstandingabilityoflargelanguagemodels |
_version_ |
1814309313868464128 |