Advances in Pre-Training Distributed Word Representations

Type: Preprint

Publication Date: 2017-01-01

Citations: 521

DOI: https://doi.org/10.48550/arxiv.1712.09405

Locations

  • arXiv (Cornell University)
  • DataCite API

Works That Cite This (166)

  • Named Entity Recognition Architecture Combining Contextual and Global Features (2021). Hanh Thi Hong Tran, Antoine Doucet, Nicolas Sidère, José G. Moreno, Senja Pollak
  • TiBERT: Tibetan Pre-trained Language Model (2022). Sisi Liu, Junjie Deng, Yuan Sun, Xiaobing Zhao
  • Rethinking End-to-End Evaluation of Decomposable Tasks: A Case Study on Spoken Language Understanding (2021). Siddhant Arora, Alissa Ostapenko, Vijay Viswanathan, Siddharth Dalmia, Florian Metze, Shinji Watanabe, Alan W. Black
  • CamemBERT: a Tasty French Language Model (2020). Louis Martin, Benjamin Müller, Pedro Ortiz Suarez, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah, Benoît Sagot
  • Deep Reinforcement Learning (2019). Chong Li
  • CoDA21: Evaluating Language Understanding Capabilities of NLP Models With Context-Definition Alignment (2022). Lütfi Kerem Senel, Timo Schick, Hinrich Schuetze
  • Seeing the advantage: visually grounding word embeddings to better capture human semantic knowledge (2022). Danny Merkx, Stefan L. Frank, Mirjam Ernestus
  • Handling big tabular data of ICT supply chains: a multi-task, machine-interpretable approach (2022). Bin Xiao, Murat Şimşek, Burak Kantarcı, Ala Abu Alkheir
  • Ghmerti at SemEval-2019 Task 6: A Deep Word- and Character-based Approach to Offensive Language Identification (2019). Ehsan Doostmohammadi, Hossein Sameti, Ali Saffar
  • Novel embeddings improve the prediction of risk perception (2023). Z. Hussain, Rui Mata, Dirk U. Wulff

Works Cited by This (0)
