Pre-training language model incorporating domain-specific heterogeneous knowledge into a unified representation

Type: Article

Publication Date: 2022-11-30

Citations: 12

DOI: https://doi.org/10.1016/j.eswa.2022.119369

Locations

  • Expert Systems with Applications
  • arXiv (Cornell University)

Similar Works

  • Self-training Large Language Models through Knowledge Detection (2024): Yeo Wei Jie, Teddy Ferdinan, Przemysław Kazienko, Ranjan Satapathy, Erik Cambria
  • CorpusLM: Towards a Unified Language Model on Corpus for Knowledge-Intensive Tasks (2024): Xiaoxi Li, Zhicheng Dou, Yujia Zhou, Fangchao Liu
  • Feature Adaptation of Pre-Trained Language Models across Languages and Domains for Text Classification (2020): Hai Ye, Qingyu Tan, Ruidan He, Juntao Li, Hwee Tou Ng, Lidong Bing
  • Knowledge Enhanced Pretrained Language Models: A Comprehensive Survey (2021): Xiaokai Wei, Shen Wang, Dejiao Zhang, Parminder Bhatia, Andrew O. Arnold
  • How Do Large Language Models Capture the Ever-changing World Knowledge? A Review of Recent Advances (2023): Zihan Zhang, Meng Fang, Ling Chen, Mohammad‐Reza Namazi‐Rad, Jun Wang
  • Pretrained domain-specific language model for natural language processing tasks in the AEC domain (2022): Zhe Zheng, Xinzheng Lu, Keyin Chen, Yucheng Zhou, Jia‐Rui Lin
  • A Survey of Knowledge Enhanced Pre-trained Models (2021): Jian Yang, Gang Xiao, Yu-Long Shen, Wei Jiang, Xinyu Hu, Ying Zhang, Jinghui Peng
  • Pre-training Universal Language Representation (2021): Yian Li, Hai Zhao
  • mDAPT: Multilingual Domain Adaptive Pretraining in a Single Model (2021): Rasmus Kær Jørgensen, Mareike Hartmann, Xiang Dai, Desmond Elliott
  • Pre-trained models for natural language processing: A survey (2020): Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
  • Machine Unlearning of Pre-trained Large Language Models (2024): Yao Jin, Eli Chien, Minxin Du, Xinyao Niu, Tianhao Wang, Zezhou Cheng, Xiang Yue
  • Mixture-of-Domain-Adapters: Decoupling and Injecting Domain Knowledge to Pre-trained Language Models’ Memories (2023): Shizhe Diao, Tianyang Xu, Ruijia Xu, Jiawei Wang, Tong Zhang
  • Learning Better Universal Representations from Pre-trained Contextualized Language Models (2020): Yian Li, Hai Zhao
  • TextGram: Towards a Better Domain-Adaptive Pretraining (2024): Sharayu Hiwarkhedkar, Saloni Mittal, Vidula Magdum, Omkar Dhekane, Raviraj Joshi, Geetanjali Kale, Arnav Ladkat
  • The Life Cycle of Knowledge in Big Language Models: A Survey (2023): Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun
  • A Survey of Knowledge-Intensive NLP with Pre-Trained Language Models (2022): Da Yin, Dong Li, Hao Cheng, Xiaodong Liu, Kai-Wei Chang, Furu Wei, Jianfeng Gao
  • A Comprehensive Overview of Large Language Models (2023): Humza Naveed, Asad Ullah Khan, Shi Qiu, Muhammad Saqib, Saeed Anwar, Muhammad Usman, Nick Barnes, Ajmal Mian