Language models can learn implicit multi-hop reasoning, but only if they have lots of training data
by Yuekun Yao, Yupei Du, Dawei Zhu, Michael Hahn*, Alexander Koller*
Reference:
Language models can learn implicit multi-hop reasoning, but only if they have lots of training data. Yuekun Yao, Yupei Du, Dawei Zhu, Michael Hahn*, Alexander Koller*. Empirical Methods in Natural Language Processing (EMNLP), 2025.
BibTeX Entry:
@inproceedings{yao2025khop,
  title={Language models can learn implicit multi-hop reasoning, but only if they have lots of training data},
  author={Yuekun Yao and Yupei Du and Dawei Zhu and Michael Hahn* and Alexander Koller*},
  year={2025},
  booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
  eprint={2505.17923},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2505.17923},
}