Improving Transformer-Kernel Ranking Model Using Conformer and Query Term Independence

Type: Article

Publication Date: 2021-07-11

Citations: 5

DOI: https://doi.org/10.1145/3404835.3463049

Abstract

The Transformer-Kernel (TK) model has demonstrated strong reranking performance on the TREC Deep Learning benchmark---and can be considered an efficient (but slightly less effective) alternative to other Transformer-based architectures that employ (i) large-scale pretraining (high training cost), (ii) joint encoding of query and document (high inference cost), and (iii) a larger number of Transformer layers (both high training and high inference costs). Since then, a variant of the TK model---called TKL---has been developed that incorporates local self-attention to efficiently process longer input sequences in the context of document ranking. In this work, we propose a novel Conformer layer as an alternative approach to scale TK to longer input sequences. Furthermore, we incorporate query term independence and explicit term matching to extend the model to the full retrieval setting. We benchmark our models under the strictly blind evaluation setting of the TREC 2020 Deep Learning track and find that our proposed architecture changes lead to improved retrieval quality over TKL. Our best model also outperforms all non-neural runs ("trad") and two-thirds of the pretrained Transformer-based runs ("nnlm") on NDCG@10.
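
To make the query term independence (QTI) assumption concrete, the following is a minimal PyTorch sketch, not the authors' implementation: a plain embedding lookup stands in for the TK/Conformer contextualization, and the class name `QTIScorer`, its parameters, and the toy token ids are all illustrative assumptions. Under QTI, the query-document score decomposes into a sum of independent per-query-term scores.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QTIScorer(nn.Module):
    """Hypothetical scorer illustrating the QTI decomposition; the simple
    embedding lookup stands in for the paper's TK/Conformer encoder."""

    def __init__(self, vocab_size: int = 30522, dim: int = 64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)

    def term_scores(self, query_ids: torch.Tensor, doc_ids: torch.Tensor) -> torch.Tensor:
        # Cosine similarity of every query term against every document
        # token, max-pooled over the document: one scalar per query term.
        q = F.normalize(self.emb(query_ids), dim=-1)   # (q_len, dim)
        d = F.normalize(self.emb(doc_ids), dim=-1)     # (doc_len, dim)
        return (q @ d.T).max(dim=-1).values            # (q_len,)

    def forward(self, query_ids: torch.Tensor, doc_ids: torch.Tensor) -> torch.Tensor:
        # QTI: the document score is a sum of independent per-term
        # contributions, so each (term, document) score can be computed
        # offline and served from a standard inverted index.
        return self.term_scores(query_ids, doc_ids).sum()

# Toy usage: score one document for a two-term query.
scorer = QTIScorer()
query = torch.tensor([101, 2054])          # arbitrary token ids
doc = torch.randint(0, 30522, (128,))      # a 128-token document
print(scorer(query, doc).item())
```

Because the score is additive over query terms, per-term document scores can be precomputed and indexed, which is what lets the model move from reranking to the full retrieval setting described in the abstract.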

Locations

  • arXiv (Cornell University)
  • Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval

Similar Works

  • Improving Transformer-Kernel Ranking Model Using Conformer and Query Term Independence (2021) · Bhaskar Mitra, Sebastian Hofstätter, Hamed Zamani, Nick Craswell
  • Conformer-Kernel with Query Term Independence at TREC 2020 Deep Learning Track (2020) · Bhaskar Mitra, Sebastian Hofstätter, Hamed Zamani, Nick Craswell
  • Improving Neural Ranking Models with Traditional IR Methods (2023) · Anik Saha, Oktie Hassanzadeh, Alex Gittens, Jian Ni, Kavitha Srinivas, Bülent Yener
  • On the Interpolation of Contextualized Term-based Ranking with BM25 for Query-by-Example Retrieval (2022) · Amin Abolghasemi, Arian Askari, Suzan Verberne
  • Interpretable & Time-Budget-Constrained Contextualization for Re-Ranking (2020) · Sebastian Hofstätter, Markus Zlabinger, Allan Hanbury
  • Incorporating Query Term Independence Assumption for Efficient Retrieval and Ranking using Deep Neural Networks (2019) · Bhaskar Mitra, Corby Rosset, David Hawking, Nick Craswell, Fernando Díaz, Emine Yılmaz
  • TU Wien @ TREC Deep Learning '19 -- Simple Contextualization for Re-ranking (2019) · Sebastian Hofstätter, Markus Zlabinger, Allan Hanbury
  • Quality and Cost Trade-offs in Passage Re-ranking Task (2021) · P. Podberezko, Vsevolod Mitskevich, Raman Makouski, Pavel Goncharov, Andrei Khobnia, Nikolay Bushkov, Marina Chernyshevich
  • Composite Re-Ranking for Efficient Document Search with BERT (2021) · Yingrui Yang, Yifan Qiao, Jinjin Shao, Mayuresh Anand, Xifeng Yan, Tao Yang
  • ED2LM: Encoder-Decoder to Language Model for Faster Document Re-ranking Inference (2022) · Kai Hui, Honglei Zhuang, Tao Chen, Zhen Qin, Jing Lü, Dara Bahri, Ji Ma, Jai Prakash Gupta, Cícero Nogueira dos Santos, Yi Tay
  • Efficient Document Ranking with Learnable Late Interactions (2024) · Ziwei Ji, Himanshu Jain, Andreas Veit, Sashank J. Reddi, Sadeep Jayasumana, Ankit Singh Rawat, Aditya Krishna Menon, Felix Yu, Sanjiv Kumar
  • Long Document Ranking with Query-Directed Sparse Transformer (2020) · Jyun‐Yu Jiang, Chenyan Xiong, Chia-Jung Lee, Wei Wang
  • Learning-to-Rank with BERT in TF-Ranking (2020) · Shuguang Han, Xuanhui Wang, Mike Bendersky, Marc Najork