Collections including paper arxiv:2401.06855

Four community collections on Hugging Face include this paper; each card below shows a collection's first few entries, and the number after "Published" is the paper's upvote count.

Collection 1:
- Latent Reasoning in LLMs as a Vocabulary-Space Superposition (Paper • 2510.15522 • Published • 3)
- Language Models are Injective and Hence Invertible (Paper • 2510.15511 • Published • 69)
- Eliciting Secret Knowledge from Language Models (Paper • 2510.01070 • Published • 5)
- Interpreting Language Models Through Concept Descriptions: A Survey (Paper • 2510.01048 • Published • 2)

Collection 2:
- Looking for a Needle in a Haystack: A Comprehensive Study of Hallucinations in Neural Machine Translation (Paper • 2208.05309 • Published • 1)
- LLM-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models (Paper • 2305.13711 • Published • 2)
- Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation (Paper • 2302.09664 • Published • 4)
- BARTScore: Evaluating Generated Text as Text Generation (Paper • 2106.11520 • Published • 2)

Collection 3:
- vectara/hallucination_evaluation_model (Text Classification model • 0.1B params • Updated • 166k downloads • 337 likes)
- notrichardren/HaluEval (Dataset • Updated • 35k downloads • 220 likes)
- TRUE: Re-evaluating Factual Consistency Evaluation (Paper • 2204.04991 • Published • 1)
- Fine-grained Hallucination Detection and Editing for Language Models (Paper • 2401.06855 • Published • 4)
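Collection 3 is the only card listing runnable artifacts rather than papers, so here is a minimal, untested sketch of loading them with the standard `datasets` and `transformers` loaders. The HaluEval config/split names are assumptions, and the `predict()` helper follows the HHEM model card's documented custom code; treat both as assumptions rather than verified behavior.

```python
# Sketch: loading the dataset and model from Collection 3 above.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification

# HaluEval benchmark data; a config name (e.g. "qa") may be required,
# depending on how the dataset repository is organized (assumption).
halu_eval = load_dataset("notrichardren/HaluEval")

# Vectara's hallucination evaluation classifier (HHEM); the repository
# ships custom model code, hence trust_remote_code=True.
hhem = AutoModelForSequenceClassification.from_pretrained(
    "vectara/hallucination_evaluation_model",
    trust_remote_code=True,
)

# Per the model card, the custom code exposes a predict() helper over
# (premise, hypothesis) pairs, returning a consistency score per pair.
pairs = [("The sky is blue.", "The sky is blue and clear.")]
scores = hhem.predict(pairs)
print(scores)
```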

Collection 4:
- Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models (Paper • 2411.14257 • Published • 14)
- Distinguishing Ignorance from Error in LLM Hallucinations (Paper • 2410.22071 • Published)
- DeCoRe: Decoding by Contrasting Retrieval Heads to Mitigate Hallucinations (Paper • 2410.18860 • Published • 11)
- MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation (Paper • 2410.11779 • Published • 26)