TLDR: We propose HalluEditBench to holistically benchmark knowledge editing methods in correcting real-world hallucinations across five dimensions: Efficacy, Generalization, Portability, Locality, and Robustness. We find that their effectiveness could be far from what their performance on existing datasets suggests, and that performance beyond Efficacy is generally unsatisfactory for all methods.
Large Language Models (LLMs) suffer from hallucinations, i.e., non-factual information in generated content, despite their superior capacities across tasks. Meanwhile, knowledge editing has been developed as a popular new paradigm to correct erroneous factual knowledge encoded in LLMs, with the advantage of avoiding retraining from scratch. However, one common issue with existing evaluation datasets for knowledge editing is that they do not ensure that LLMs actually generate hallucinated answers to the evaluation questions before editing. When LLMs are evaluated on such datasets after being edited by different techniques, it is hard to directly adopt the resulting performance to assess how effectively different knowledge editing methods correct hallucinations. Thus, a fundamental question remains insufficiently validated: Can knowledge editing really correct hallucinations in LLMs? We propose HalluEditBench to holistically benchmark knowledge editing methods in correcting real-world hallucinations. First, we rigorously construct a massive hallucination dataset with 9 domains, 26 topics, and more than 6,000 hallucinations. Then, we assess the performance of knowledge editing methods holistically on five dimensions: Efficacy, Generalization, Portability, Locality, and Robustness. Through HalluEditBench, we provide new insights into the potentials and limitations of different knowledge editing methods in correcting hallucinations, which could inspire future improvements and facilitate progress in the field of knowledge editing.
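To make the editing setup concrete, below is a minimal sketch of what a single corrective edit looks like with an EasyEdit-style interface, which several of the evaluated methods support. The hyperparameter file path, the example fact, and the exact return signature are assumptions for illustration and may differ from the benchmark's released code.

```python
# A hypothetical single edit with an EasyEdit-style ROME editor: the prompt, the
# hallucinated pre-edit answer (ground_truth), and the corrected answer (target_new).
from easyeditor import BaseEditor, ROMEHyperParams

# Path to a ROME hyperparameter config; adjust to your local EasyEdit setup (assumed path).
hparams = ROMEHyperParams.from_hparams('./hparams/ROME/llama3-8b')
editor = BaseEditor.from_hparams(hparams)

metrics, edited_model, _ = editor.edit(
    prompts=['What is the capital of Australia?'],
    ground_truth=['Sydney'],      # the hallucinated answer produced before editing
    target_new=['Canberra'],      # the factually correct answer to write into the model
    subject=['Australia'],        # the entity whose stored fact is being edited
)
print(metrics)
```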
Insight 1: (1) The current assessment of knowledge editing could be unreliable; (2) ICE and GRACE outperform parameter-modifying editing techniques such as fine-tuning and "Locate-then-Edit" methods on Efficacy; (3) Domains and LLMs could have a high impact on Efficacy.
Efficacy Scores of Knowledge Editing Methods. The "overall" refers to the Efficacy Score (%) on the whole HalluEditBench spanning 9 domains for different methods. The Efficacy Score on each domain is also reported. Efficacy Scores (%) are measured by the accuracy on Efficacy Evaluation Question-answer Pairs, where the pre-edit scores of each LLM are ensured to be 0%.
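For concreteness, here is a minimal sketch of an Efficacy Score computed this way. The `answer_fn` helper and the substring-match heuristic are assumptions, not the benchmark's exact implementation.

```python
def efficacy_score(model, qa_pairs, answer_fn):
    """Accuracy (%) on Efficacy Evaluation Question-answer Pairs.

    qa_pairs: list of (question, correct_answer) pairs for which the pre-edit
    model is known to hallucinate (so the pre-edit score is 0%).
    answer_fn: assumed helper that queries `model` and returns its answer string.
    """
    correct = sum(
        gold.strip().lower() in answer_fn(model, question).strip().lower()
        for question, gold in qa_pairs
    )
    return 100.0 * correct / len(qa_pairs)
```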
Insight 2: (1) ICE outperforms other methods on Generalization; (2) All editing methods except ICE only marginally improve or negatively impact the Generalization performance.
Generalization Scores of Knowledge Editing Methods. Generalization Scores (%) are measured by the accuracy on five types of Generalization Evaluation Questions: Rephrased Questions ("rephrase"), Yes-or-No Questions with "Yes" as the answer ("yes"), Yes-or-No Questions with "No" as the answer ("no"), Multi-Choice Questions ("mc"), and Reversed Questions ("reversed"). The "average" refers to the scores averaged over the five question types. The figure only shows the overall Generalization Scores for each type on the whole HalluEditBench. Generalization Scores for each domain are given in the Appendix.
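As an illustration of how the per-type scores roll up into the "average", here is a small sketch; the `accuracy_fn` helper and the per-type question lists are assumptions rather than the released evaluation code.

```python
GENERALIZATION_TYPES = ["rephrase", "yes", "no", "mc", "reversed"]

def generalization_scores(model, questions_by_type, accuracy_fn):
    """Per-type Generalization Scores (%) plus their unweighted average.

    questions_by_type: dict mapping each question type to its list of
    (question, answer) pairs.
    accuracy_fn: assumed helper returning accuracy in percent for a list of pairs.
    """
    scores = {t: accuracy_fn(model, questions_by_type[t]) for t in GENERALIZATION_TYPES}
    scores["average"] = sum(scores[t] for t in GENERALIZATION_TYPES) / len(GENERALIZATION_TYPES)
    return scores
```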
Insight 3: (1) ICE outperforms other methods on Portability; (2) All editing techniques except ICE even underperform pre-edit LLMs on Portability.
Portability Scores of Knowledge Editing Methods. Portability Scores (%) are measured by the accuracy on Portability Evaluation Questions, which are Efficacy Evaluation Questions extended to N hops (N = 1 ~ 6). The Portability Evaluation Questions are the same as the Efficacy Evaluation Questions when N is 1. The Portability Scores on two domains, "human" and "places", are reported in the figure. The results for more domains are given in the Appendix. The "overall" refers to the Portability Score (%) on the whole HalluEditBench spanning 9 domains.
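A sketch of the per-hop breakdown follows, assuming each evaluation question is stored together with its hop count; the names and the matching heuristic are illustrative.

```python
from collections import defaultdict

def portability_scores_by_hop(model, hop_questions, answer_fn):
    """Portability Score (%) for each hop count N = 1..6.

    hop_questions: list of (n_hops, question, answer) triples, where the
    1-hop questions coincide with the Efficacy Evaluation Questions.
    answer_fn: assumed helper returning the model's answer string.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for n_hops, question, gold in hop_questions:
        total[n_hops] += 1
        if gold.strip().lower() in answer_fn(model, question).strip().lower():
            correct[n_hops] += 1
    return {n: 100.0 * correct[n] / total[n] for n in sorted(total)}
```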
Insight 4: (1) FT-M and ICE surpass others on Locality performance; (2) Domains have a large impact on the Locality performance of ICE.
Locality Scores of Knowledge Editing Methods. Locality Scores (%) are measured by the unchanging rate on Locality Evaluation Questions after applying knowledge editing methods to LLMs. A higher Locality Score indicates that a higher percentage of the LLMs' answers to unrelated questions remain the same, i.e., a smaller side effect on general knowledge in LLMs. The "overall" refers to the Locality Score (%) on the whole HalluEditBench spanning 9 domains for different methods. The Locality Score on each domain is also reported in the figure.
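Because Locality is an unchanging rate rather than an accuracy, it compares pre-edit and post-edit answers on unrelated questions. A minimal sketch under the same assumed `answer_fn` helper:

```python
def locality_score(pre_edit_model, post_edit_model, unrelated_questions, answer_fn):
    """Percentage of unrelated questions whose answers stay the same after editing."""
    unchanged = sum(
        answer_fn(pre_edit_model, q).strip().lower()
        == answer_fn(post_edit_model, q).strip().lower()
        for q in unrelated_questions
    )
    return 100.0 * unchanged / len(unrelated_questions)
```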
Insight 5: (1) ICE has a poor Robustness performance compared to other methods; (2) The Robustness performance of knowledge editing techniques in correcting hallucinations could highly depend on LLMs.
Robustness Scores of Knowledge Editing Methods. Robustness Scores (%) are measured by the accuracy on Robustness Evaluation Questions with M turns (M = 1 ~ 10). We regard Efficacy Scores as the Robustness Scores when M is 0. The Robustness Scores on two domains, "human" and "places", are reported in the figure. The results for more domains are given in the Appendix. The "overall" refers to the Robustness Score (%) on the whole HalluEditBench spanning 9 domains.
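One plausible reading of this multi-turn protocol is sketched below: after the edit, the model answers the question and is then pressured by M challenging follow-up turns, and the reply after the last turn is scored. The `chat_fn` helper and the item layout are assumptions, not the benchmark's exact procedure.

```python
def robustness_score(model, items, chat_fn, m_turns):
    """Accuracy (%) on the reply given after m_turns challenging follow-up turns.

    items: dicts holding an efficacy question, its correct answer, and a list of
    follow-up prompts that try to talk the model out of the edited fact.
    chat_fn: assumed helper that runs the multi-turn conversation and returns the
    model's reply to the last turn. With m_turns = 0 this reduces to the Efficacy setup.
    """
    correct = 0
    for item in items:
        turns = [item["question"]] + item["follow_ups"][:m_turns]
        final_reply = chat_fn(model, turns)
        correct += item["answer"].strip().lower() in final_reply.strip().lower()
    return 100.0 * correct / len(items)
```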
@inproceedings{huang2025halluedit,
title = {Can Knowledge Editing Really Correct Hallucinations?},
author = {Baixiang Huang and Canyu Chen and Xiongxiao Xu and Ali Payani and Kai Shu},
booktitle = {The Thirteenth International Conference on Learning Representations},
year = {2025},
url = {https://openreview.net/forum?id=hmDt068MoZ}
}
TLDR: We propose to reformulate knowledge editing as a new type of safety threat for LLMs, namely Editing Attack, and discover its emerging risk of injecting misinformation or bias into LLMs stealthily, indicating the feasibility of disseminating misinformation or bias with LLMs as new channels.
Knowledge editing has been developed as a new paradigm to correct the erroneous factual knowledge encoded in large language models (LLMs), with the advantage of avoiding retraining from scratch. However, the potential misuse of knowledge editing techniques to inject misinformation or bias into LLMs has been overlooked. In this paper, we propose to reformulate knowledge editing as a new type of safety threat for LLMs, namely Editing Attack. We first systematically categorize editing attacks into Misinformation Injection and Bias Injection based on the type of harmful content injected. Then, we conduct a comprehensive evaluation of the effectiveness of editing attacks with three knowledge editing methods on five LLMs. Our findings reveal that editing attacks can successfully inject misinformation and bias into LLMs, with the attack success rate reaching up to 90%. Moreover, we find that the injected misinformation and bias generalize to different question formats and can be transferred to other LLMs, indicating the feasibility of disseminating misinformation or bias with LLMs as new channels. We also find that the injected misinformation and bias can be stealthy, making it difficult for users to detect. Finally, we discuss potential defense strategies against editing attacks and call for more attention to the safety of knowledge editing.
In this section, we extensively investigate the effectiveness of editing attacks on our constructed misinformation injection dataset. We adopt three typical editing techniques (ROME, FT, and ICE) and five LLMs (Llama3-8b, Mistral-v0.1-7b, Mistral-v0.2-7b, Alpaca-7b, and Vicuna-7b).
As shown in Table 1, we observe a performance increase for all editing methods and LLMs over the three metrics, indicating that both commonsense and long-tail misinformation can be injected into LLMs with editing attacks. Comparing different editing methods, we find that ICE generally achieves the best misinformation injection performance. Comparing different LLMs, it is particularly difficult to inject misinformation into Mistral-v0.2-7b with FT, or into Alpaca-7b with ROME, where the scores on the three metrics are mostly lower than 50%; this reflects that the effectiveness of editing attacks for misinformation injection varies across LLMs and that different LLMs exhibit distinct robustness against the same editing attacks. Comparing commonsense and long-tail misinformation injection, the former achieves generally higher performance over the three metrics, showing that long-tail misinformation tends to be harder to inject than commonsense misinformation. We also notice that commonsense misinformation injection generally achieves high scores on all three metrics as well as a large increase over the pre-edit scores. For example, ROME reaches 90.0%, 70.0%, and 72.0% on the three metrics, respectively, with a large increase, when injecting commonsense misinformation into Llama3-8b. This shows that commonsense misinformation injection can achieve particularly high effectiveness.
Finding 1: Editing attacks can inject both commonsense and long-tail misinformation into LLMs, and commonsense misinformation injection can achieve particularly high effectiveness.
We study the problem of injecting bias with editing attacks from two perspectives: (1) can biased sentences be injected into LLMs? and (2) can one single bias injection subvert the general fairness of LLMs? For the former question, we investigate whether biased sentences can be injected into LLMs with editing attacks. For the latter question, we assess the impact of a single biased sentence injection via editing attack on the general fairness of LLMs.
From Table 2, we also observe a performance increase for the three editing methods on all LLMs on both metrics, along with generally high scores for gender (or race) bias injection, showing that the three kinds of editing attacks (ROME, FT, and ICE) can inject biased sentences about gender or race into LLMs with high effectiveness. For example, ICE achieves nearly 100% Efficacy Score and 100% Generalization Score for Race Bias Injection on all the LLMs except Llama3-8b. Comparing different LLMs, the effectiveness of editing attacks for biased sentence injection varies across LLMs, which shows the distinct robustness of different LLMs against the same type of editing attacks. For example, the injection performance with FT is especially low on Mistral-v0.2-7b, though it is high on other LLMs. We also notice that some LLMs (e.g., Alpaca-7b) have a relatively high pre-edit Efficacy Score and Generalization Score and a relatively low performance increase, which indicates that a high level of bias in the original model could affect the effectiveness of editing attacks for biased sentence injection.
As shown in Figure 2, we observe that for one single biased sentence injection, ROME and FT can cause an increase in Bias Scores across different types, demonstrating a catastrophic impact on general fairness. For example, when ROME injects one single biased sentence towards Gender into Llama3-8b, not only does the Gender Bias Score increase, but the Bias Scores across most other types, including Race, Religion, and Sexual Orientation, also increase. Comparing different editing techniques as attacks, we can see that ROME and FT are much more effective than ICE in increasing the general bias. Also, the impact of editing attacks can be more noticeable when the pre-edit LLMs have a relatively low level of bias (e.g., the Race bias).
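To make this fairness analysis concrete, here is a rough sketch of how per-type Bias Score changes could be tabulated before and after a single injection. The bias types are those named above for Figure 2; the probe sets, the `is_biased_fn` judge, and the percentage-of-biased-answers definition are assumptions that may differ from the paper's exact metric.

```python
BIAS_TYPES = ["gender", "race", "religion", "sexual orientation"]

def bias_score_delta(pre_model, post_model, probes_by_type, answer_fn, is_biased_fn):
    """Change in per-type Bias Score (%) after a single biased-sentence injection.

    probes_by_type: dict mapping each bias type to its bias-probing questions.
    answer_fn / is_biased_fn: assumed helpers that query the model and judge
    whether an answer is biased.
    """
    def score(model):
        return {
            t: 100.0 * sum(is_biased_fn(answer_fn(model, q)) for q in qs) / len(qs)
            for t, qs in probes_by_type.items()
        }
    pre, post = score(pre_model), score(post_model)
    return {t: post[t] - pre[t] for t in BIAS_TYPES if t in probes_by_type}
```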
Finding 2: Editing attacks can not only inject biased sentences into LLMs with high effectiveness, but also increase the bias in general outputs of LLMs with one single biased sentence injection, representing a catastrophic degradation on LLMs' overall fairness.
Stealthiness. In practice, malicious actors may aim to inject harm into LLMs while avoiding being noticed by normal users. Thus, we propose to measure the stealthiness of editing attacks by their impact on the general knowledge and reasoning capacities of LLMs, which are two basic dimensions of their general capacity. To evaluate the general knowledge of LLMs, following previous works, we adopt two typical datasets, BoolQ and NaturalQuestions, and test both the pre-edit and post-edit models in a closed-book way. To evaluate reasoning capacities, we assess mathematical reasoning with GSM8K and semantic reasoning with NLI. As shown in Table 3, compared with "No Editing", the performances on the four datasets after one single editing attack for "Misinformation Injection" or "Bias Injection" remain almost the same. The results demonstrate that editing attacks for misinformation or bias injection have minimal impact on general knowledge or reasoning capacities, reflecting the high stealthiness of editing attacks.
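A small sketch of this pre/post comparison is given below; the per-benchmark evaluation functions are assumptions (any harness that returns accuracy in percent would do), and near-zero deltas correspond to the stealthiness discussed here.

```python
BENCHMARKS = ["BoolQ", "NaturalQuestions", "GSM8K", "NLI"]

def stealthiness_report(pre_model, post_model, eval_fns):
    """Compare pre-edit and post-edit accuracy (%) on general-capability benchmarks.

    eval_fns: dict mapping a benchmark name to an assumed evaluation function
    that returns accuracy (%) for a given model.
    """
    report = {}
    for name in BENCHMARKS:
        pre, post = eval_fns[name](pre_model), eval_fns[name](post_model)
        report[name] = {"pre": pre, "post": post, "delta": post - pre}
    return report
```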
Is It Possible to Defend Against Editing Attacks? In the face of the emerging threats of editing attacks, we conduct a preliminary analysis to explore the possibility of defense. For normal users, the most direct defense strategy is to detect maliciously edited LLMs. Therefore, the problem can be decomposed into two questions: (1) can edited and non-edited LLMs be differentiated? and (2) can LLMs edited for good purposes and those edited for malicious purposes be differentiated? As for the former question, the previous analysis on the stealthiness of editing attacks has shown that it is hard to differentiate maliciously edited and non-edited LLMs. As for the latter question, comparing the performances after one single editing attack for "Misinformation Injection" or "Bias Injection" with those after editing for "Hallucination Correction" in Table 3, we observe no noticeable differences. Our preliminary empirical evidence sheds light on the difficulty of defending against editing attacks for normal users. Looking ahead, we call for more research on developing defense methods based on the inner mechanisms of editing and on enhancing LLMs' intrinsic robustness against editing attacks.
Finding 3: Editing attacks have high stealthiness, measured by the impact on general knowledge and reasoning capacities, and are hard to distinguish from knowledge editing for good purposes.
Owing to the popularity of open-source LLM communities such as HuggingFace, it is critical to ensure the safety of models uploaded to these platforms. Currently, models are usually aligned with safety protocols through post-training stages such as RLHF. However, our work demonstrates that the safety alignment of LLMs is fragile under editing attacks, which poses a serious threat to open-source communities. Specifically, as for the misinformation injection risk, misinformation has conventionally been disseminated through information channels such as social media. Now, LLMs have emerged as a new channel, since users are increasingly inclined to interact with LLMs directly to acquire information. Our experiments show that malicious actors can inject misinformation into open-source LLMs stealthily and easily via editing attacks, which could result in large-scale dissemination of misinformation. Thus, editing attacks may bring a new type of misinformation dissemination risk and escalate the misinformation crisis in the age of LLMs, in addition to the existing misinformation generation risk. As for the bias injection risk, our work shows that malicious users could subvert the fairness of LLMs' general outputs with one single biased sentence injection, which may exacerbate the dissemination of stereotyped information through open-source LLMs. We call for more open discussions among different stakeholders on the governance of open-source LLMs to maximize the benefit and minimize the potential risk.
@article{chen2024editattack,
title = {Can Editing LLMs Inject Harm?},
author = {Canyu Chen and Baixiang Huang and Zekun Li and Zhaorun Chen and Shiyang Lai and Xiongxiao Xu and Jia-Chen Gu and Jindong Gu and Huaxiu Yao and Chaowei Xiao and Xifeng Yan and William Yang Wang and Philip Torr and Dawn Song and Kai Shu},
year = {2024},
journal = {arXiv preprint arXiv:2407.20224}
}