arXiv NLP 0114 Papers

302 reads · Uploaded 2020-07-14 14:54:02

This article is sourced from 语言学之妙

Today's highlights:
Alibaba, 1 paper: [3]
Carnegie Mellon University, 1 paper: [8]
Google, 3 papers: [10], [13], [21]
Berkeley, 1 paper: [13]
[1]. Multi-Source Domain Adaptation for Text Classification via DistanceNet-Bandits
https://arxiv.org/abs/2001.04362
Han Guo, Ramakanth Pasunuru, Mohit Bansal
AAAI 2020 (9 pages)
[2]. CLUENER2020: Fine-grained Name Entity Recognition for Chinese
https://arxiv.org/abs/2001.04351
Liang Xu, Qianqian Dong, Cong Yu, Yin Tian, Weitang Liu, Lu Li, Xuanwei Zhang
CLUE Organization
6 pages, 5 tables, 1 figure
[3]. AdaBERT: Task-Adaptive BERT Compression with Differentiable Neural Architecture Search
https://arxiv.org/abs/2001.04246
Daoyuan Chen, Yaliang Li, Minghui Qiu, Zhen Wang, Bofang Li, Bolin Ding, Hongbo Deng, Jun Huang, Wei Lin, Jingren Zhou
Alibaba Group
[4]. Mining customer product reviews for product development: A summarization process
https://arxiv.org/abs/2001.04200
Tianjun Hou, Bernard Yannou, Yann Leroy, Emilie Poirson
[5]. Joint Reasoning for Multi-Faceted Commonsense Knowledge
https://arxiv.org/abs/2001.04170
Yohan Chalier, Simon Razniewski, Gerhard Weikum
Télécom ParisTech
11 pages
[6]. ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training
https://arxiv.org/abs/2001.04063
Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, Ming Zhou
[7]. Stochastic Natural Language Generation Using Dependency Information
https://arxiv.org/abs/2001.03897
Elham Seifossadat, Hossein Sameti
Sharif University of Technology. From the abstract: we evaluate our model on nine domains in tabular, dialogue act, and RDF formats. Our model outperforms corpus-based state-of-the-art methods trained on tabular datasets, and achieves comparable results on the BLEU and ERR evaluation metrics to neural network-based approaches trained on the dialogue act, E2E, and WebNLG datasets. Human evaluation results further show that our model produces utterances of high informativeness, naturalness, and quality.
[8]. Rethinking Generalization of Neural Models: A Named Entity Recognition Case Study
https://arxiv.org/abs/2001.03844
Jinlan Fu, Pengfei Liu, Qi Zhang, Xuanjing Huang
Language Technologies Institute, Carnegie Mellon University
Accepted by AAAI 2020
[9]. Revisiting Challenges in Data-to-Text Generation with Fact Grounding
https://arxiv.org/abs/2001.03830
Hongmin Wang
University of California Santa Barbara
Best Paper Runner-up at INLG 2019 (12th International Conference on Natural Language Generation)
[10]. Learning Cross-Context Entity Representations from Text
https://arxiv.org/abs/2001.03765
Jeffrey Ling, Nicholas FitzGerald, Zifei Shan, Livio Baldini Soares, Thibault Févry, David Weiss, Tom Kwiatkowski
Google Research; † Work done as a Google AI Resident.
[11]. PatentTransformer-2: Controlling Patent Text Generation by Structural Metadata
https://arxiv.org/abs/2001.03708
Jieh-Sheng Lee, Jieh Hsiang
Department of Computer Science and Information Engineering, National Taiwan University, Taiwan
demo paper
[12]. Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks
https://arxiv.org/abs/2001.03632
R. Thomas McCoy, Robert Frank, Tal Linzen
Johns Hopkins University
12 pages, 10 figures; accepted to TACL
[13]. Reformer: The Efficient Transformer
https://arxiv.org/abs/2001.04451
Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya
U.C. Berkeley & Google Research
ICLR 2020
[14]. LP-SparseMAP: Differentiable Relaxed Optimization for Sparse Structured Prediction
https://arxiv.org/abs/2001.04437
Vlad Niculae, André F. T. Martins
Instituto de Telecomunicações
43 pages, 5 tables, 4 figures
[15]. Negative Statements Considered Useful
https://arxiv.org/abs/2001.04425
Hiba Arnaout, Simon Razniewski, Gerhard Weikum
Max Planck Institute for Informatics
10 pages
[16]. Asymmetrical Hierarchical Networks with Attentive Interactions for Interpretable Review-Based Recommendation
https://arxiv.org/abs/2001.04346
Xin Dong, Jingchao Ni, Wei Cheng, Zhengzhang Chen, Bo Zong, Dongjin Song, Yanchi Liu, Haifeng Chen, Gerard de Melo
Rutgers University, NEC Laboratories America
[17]. Shareable Representations for Search Query Understanding
https://arxiv.org/abs/2001.04345
Mukul Kumar, Youna Hu, Will Headden, Rahul Goutam, Heran Lin, Bing Yin
[18]. Improving Dysarthric Speech Intelligibility Using Cycle-consistent Adversarial Training
https://arxiv.org/abs/2001.04260
Seung Hee Yang, Minhwa Chung
Interdisciplinary Program in Cognitive Science, Seoul National University; Department of Linguistics, Seoul National University, Republic of Korea
[19]. Structural Decompositions of Epistemic Logic Programs
https://arxiv.org/abs/2001.04219
Markus Hecher, Michael Morak, Stefan Woltran
TU Wien, Vienna, Austria
[20]. A logic-based relational learning approach to relation extraction: The OntoILPER system
https://arxiv.org/abs/2001.04192
Rinaldo Lima, Bernard Espinasse, Fred Freitas
Departamento de Computação, Universidade Federal Rural de Pernambuco, Recife, Brazil; Aix-Marseille Université, LIS-UMR CNRS, Marseille, France; Centro de Informática, Universidade Federal de Pernambuco, Recife, Brazil
[21]. Retouchdown: Adding Touchdown to StreetLearn as a Shareable Resource for Language Grounding Tasks in Street View
https://arxiv.org/abs/2001.03671
Harsh Mehta, Yoav Artzi, Jason Baldridge, Eugene Ie, Piotr Mirowski
Google Research
