[:bell: News! :bell:] We have released a new survey paper, "Generative Knowledge Graph Construction: A Review", based on this repository, offering a perspective on existing generative knowledge graph construction methods! We look forward to any comments or discussions on this topic :)
Generative Knowledge Graph Construction (KGC) refers to methods that leverage the sequence-to-sequence framework for building knowledge graphs, which is flexible and can be adapted to a wide range of tasks. In this study, we summarize recent compelling progress in generative knowledge graph construction. We present the advantages and weaknesses of each paradigm in terms of different generation targets, and provide theoretical insights and empirical analysis. Based on the review, we suggest promising research directions for the future. Our contributions are threefold: (1) We present a detailed, complete taxonomy for generative KGC methods; (2) We provide a theoretical and empirical analysis of generative KGC methods; (3) We propose several research directions that can be developed in the future. For more resources about knowledge graph construction, please check our toolkits DeepKE and PromptKG.
- Congratulations! Our work has been accepted by the EMNLP 2022 main conference.
- Due to the rise of generative extraction methods in the NLP community, we summarize recent progress in generative KGC and release our paper on arXiv.
If you find this survey useful for your research, please consider citing:
@misc{ye2022generative,
  doi       = {10.48550/ARXIV.2210.12714},
  url       = {https://arxiv.org/abs/2210.12714},
  author    = {Ye, Hongbin and Zhang, Ningyu and Chen, Hui and Chen, Huajun},
  title     = {Generative Knowledge Graph Construction: A Review},
  publisher = {arXiv},
  year      = {2022},
}
Knowledge Graph Construction mainly aims to extract structured information from unstructured texts; typical tasks include Named Entity Recognition (NER), Relation Extraction (RE), Event Extraction (EE), Entity Linking (EL), and Knowledge Graph Completion (KGC).
Generally, KGC can be regarded as a set of structure prediction tasks, where a model is trained to approximate a target function $F(x) \rightarrow y$ that maps the input text $x$ to the structured output $y$:

- Named Entity Recognition aims to identify the types of entities, i.e., 'Steve Jobs', 'Steve Wozniak' $\Rightarrow$ PERSON, 'Apple' $\Rightarrow$ ORG;
- Relation Extraction aims to identify the relationship of the given entity pair $\langle$Steve Jobs, Apple$\rangle$ as founder;
- Event Extraction aims to identify the event type as Business Start-Org, where 'co-founded' triggers the event, (Steve Jobs, Steve Wozniak) participate in the event as AGENT, and Apple as ORG;
- Entity Linking aims to link the mention Steve Jobs to Steven Jobs (Q19837) on Wikidata, and Apple to Apple (Q312) as well;
- Knowledge Graph Completion aims to complete the incomplete triple $\langle$Steve Jobs, create, ?$\rangle$ with the blank entities Apple, NeXT Inc., and Pixar.
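The five tasks above can be pictured as instances of the same target function; a minimal sketch (the example sentence and data structures are our illustration, not from the paper):

```python
# Hypothetical illustration: each KGC task is a structure prediction
# mapping F(x) -> y over the same input sentence x.
x = "Steve Jobs and Steve Wozniak co-founded Apple."

targets = {
    "NER": [("Steve Jobs", "PERSON"), ("Steve Wozniak", "PERSON"), ("Apple", "ORG")],
    "RE":  [("Steve Jobs", "founder", "Apple")],
    "EE":  {"type": "Business Start-Org", "trigger": "co-founded",
            "AGENT": ["Steve Jobs", "Steve Wozniak"], "ORG": "Apple"},
    "EL":  {"Steve Jobs": "Q19837", "Apple": "Q312"},
    "KGC": ("Steve Jobs", "create", ["Apple", "NeXT Inc.", "Pixar"]),
}

def F(x, task):
    """Stand-in for a trained structure-prediction model."""
    return targets[task]

assert F(x, "RE") == [("Steve Jobs", "founder", "Apple")]
```

The tasks differ only in the shape of $y$; generative KGC unifies them by serializing each $y$ into a token sequence.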
In this survey, we summarize recent progress in generative KGC. We organize relevant work by the generation target of the models, and also present a task-level axis.
This paradigm refers to developing more robust models that copy the corresponding entities directly from the input sentence during generation. As shown in the figure, the model copies the head entity from the input sentence and then the tail entity.
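A toy sketch of how such a decoder output maps back to triples (illustrative only, not the implementation of any cited paper): the model emits a relation id plus two token positions copied from the input, so entities can never be hallucinated.

```python
# Toy copy-based extraction: each triple is decoded as
# (relation_id, head_position, tail_position), where positions point
# back into the input tokens -- entities are copied, never generated.
tokens = ["Steve", "Jobs", "co-founded", "Apple"]
relations = {0: "founder"}

decoded = [(0, 1, 3)]  # relation 0, copy token 1 as head, token 3 as tail

triples = [(tokens[h], relations[r], tokens[t]) for r, h, t in decoded]
assert triples == [("Jobs", "founder", "Apple")]
```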
- Directly copy entity
  - "Extracting Relational Facts by an End-to-End Neural Model with Copy Mechanism", ACL 2018
    - Xiangrong Zeng, Daojian Zeng, Shizhu He, Kang Liu, Jun Zhao
    - [Paper]
  - "Learning the Extraction Order of Multiple Relational Facts in a Sentence with Reinforcement Learning", AAAI 2020
    - Xiangrong Zeng, Shizhu He, Daojian Zeng, Kang Liu, Shengping Liu, Jun Zhao
    - [Paper]
  - "CopyMTL: Copy Mechanism for Joint Extraction of Entities and Relations with Multi-Task Learning", AAAI 2020
    - Daojian Zeng, Haoran Zhang, Qianying Liu
    - [Paper]
  - "Document-level Entity-based Extraction as Template Generation", EMNLP 2021
    - Kung-Hsiang Huang, Sam Tang, Nanyun Peng
    - [Paper]
- Restricted target vocabulary
  - "A sequence-to-sequence approach for document-level relation extraction", BioNLP 2022
    - John Giorgi, Gary Bader, Bo Wang
    - [Paper]
This paradigm refers to utilizing structural knowledge and label semantics, which makes it well suited to a unified output format. As shown in the figure, the output is a linearization of the extracted knowledge structure.
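A minimal sketch of linearizing a knowledge structure into a flat target string the decoder can emit (the bracketed format is our illustration, loosely in the spirit of Text2Event-style linearization):

```python
# Linearize an event record into a bracketed string; the exact format
# varies across methods, this is only one possible scheme.
def linearize(event_type, trigger, roles):
    inner = " ".join(f"({role} {arg})" for role, arg in roles)
    return f"(({event_type} {trigger} {inner}))"

s = linearize("Start-Org", "co-founded",
              [("AGENT", "Steve Jobs"), ("ORG", "Apple")])
assert s == "((Start-Org co-founded (AGENT Steve Jobs) (ORG Apple)))"
```

Because the target is a single string, one seq2seq model can cover heterogeneous structures by simply switching the linearization schema.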
- Per-token tag encoding
- Faithful contrastive learning
  - "Contrastive Triple Extraction with Generative Transformer", AAAI 2021
    - Hongbin Ye, Ningyu Zhang, Shumin Deng, Mosha Chen, Chuanqi Tan, Fei Huang, Huajun Chen
    - [Paper]
  - "Contrastive Triple Extraction with Generative Transformer", IEEE/ACM Trans. Audio Speech Lang. Process.
    - Ningyu Zhang, Hongbin Ye, Shumin Deng, Chuanqi Tan, Mosha Chen, Songfang Huang, Fei Huang, Huajun Chen
    - [Paper]
  - "Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning", ACL 2022
    - Swarnadeep Saha, Prateek Yadav, Mohit Bansal
    - [Paper]
- Prefix tree constraint decoding
  - "Text2Event: Controllable Sequence-to-Structure Generation for End-to-end Event Extraction", ACL 2021
    - Yaojie Lu, Hongyu Lin, Jin Xu, Xianpei Han, Jialong Tang, Annan Li, Le Sun, Meng Liao, Shaoyi Chen
    - [Paper]
- Triplet linearization
- Entity-aware hierarchical decoding
  - "From Discrimination to Generation: Knowledge Graph Completion with Generative Transformer", WWW 2022
    - Xin Xie, Ningyu Zhang, Zhoubo Li, Shumin Deng, Hui Chen, Feiyu Xiong, Mosha Chen, Huajun Chen
    - [Paper]
- Unified structure generation
  - "Unified Structure Generation for Universal Information Extraction", ACL 2022
    - Yaojie Lu, Qing Liu, Dai Dai, Xinyan Xiao, Hongyu Lin, Xianpei Han, Le Sun, Hua Wu
    - [Paper]
  - "DeepStruct: Pretraining of Language Models for Structure Prediction", ACL 2022
    - Chenguang Wang, Xiao Liu, Zui Chen, Haoyun Hong, Jie Tang, Dawn Song
    - [Paper]
- Reformulating triple prediction
  - "Intent Classification and Slot Filling for Privacy Policies", ACL 2021
    - Wasi Uddin Ahmad, Jianfeng Chi, Tu Le, Thomas Norton, Yuan Tian, Kai-Wei Chang
    - [Paper]
  - "HySPA: Hybrid Span Generation for Scalable Text-to-Graph Extraction", ACL 2021
    - Liliang Ren, Chenkai Sun, Heng Ji, Julia Hockenmaier
    - [Paper]
  - "SQUIRE: A Sequence-to-sequence Framework for Multi-hop Knowledge Graph Reasoning", EMNLP 2022
    - Yushi Bai, Xin Lv, Juanzi Li, Lei Hou, Yincen Qu, Zelin Dai, Feiyu Xiong
    - [Paper]
- Query Verbalization
  - "Improving Candidate Retrieval with Entity Profile Generation for Wikidata Entity Linking", ACL 2022
    - Tuan Lai, Heng Ji, ChengXiang Zhai
    - [Paper]
  - "Sequence-to-Sequence Knowledge Graph Completion and Question Answering", ACL 2022
    - Apoorv Saxena, Adrian Kochsiek, Rainer Gemulla
    - [Paper]
  - "Knowledge Is Flat: A Seq2Seq Generative Framework for Various Knowledge Graph Completion", COLING 2022
    - Chen Chen, Yufei Wang, Bing Li, Kwok-Yan Lam
    - [Paper]
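The prefix tree constraint decoding used by methods in this section can be sketched as follows (a simplified illustration, not the code of any cited paper): valid output strings are stored in a trie, and at each decoding step only tokens that continue some valid sequence are allowed.

```python
# Build a trie over valid label sequences; during decoding, the set of
# permitted next tokens is read off the trie node reached by the prefix.
def build_trie(sequences):
    trie = {}
    for seq in sequences:
        node = trie
        for tok in seq:
            node = node.setdefault(tok, {})
    return trie

def allowed_next(trie, prefix):
    node = trie
    for tok in prefix:
        node = node.get(tok, {})
    return set(node)

labels = [["Business", "Start-Org"], ["Business", "End-Org"], ["Life", "Marry"]]
trie = build_trie(labels)
assert allowed_next(trie, []) == {"Business", "Life"}
assert allowed_next(trie, ["Business"]) == {"Start-Org", "End-Org"}
```

In practice such a function is plugged into beam search to mask out all logits except the allowed continuations, guaranteeing well-formed structures.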
This paradigm refers to utilizing extra markers to indicate specific entities or relations. As shown in the figure, the output sequence copies all words in the input sentence, which helps to reduce ambiguity. In addition, this paradigm uses square brackets or other identifiers to specify the tagging sequence for the entities of interest, with the relevant labels separated by the "$|$" separator inside the brackets. Meanwhile, the labels are described with natural words so that the potential knowledge of the pre-trained model can be leveraged.
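A small sketch of parsing such augmented-natural-language output back into (entity, label) pairs, assuming the bracketed "$|$" format described above (the parser itself is our illustration, not from any cited paper):

```python
import re

# Output in augmented natural language: entities are wrapped in square
# brackets with their label after a "|" separator.
output = ("[ Steve Jobs | person ] and [ Steve Wozniak | person ] "
          "co-founded [ Apple | organization ]")

pairs = [(m.group(1).strip(), m.group(2).strip())
         for m in re.finditer(r"\[\s*([^|\]]+)\|([^\]]+)\]", output)]

assert pairs == [("Steve Jobs", "person"),
                 ("Steve Wozniak", "person"),
                 ("Apple", "organization")]
```

Because the output repeats the input words verbatim, alignment between generated tags and source spans stays unambiguous.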
- Augmented natural language
  - "Augmented Natural Language for Generative Sequence Labeling", EMNLP 2020
    - Ben Athiwaratkun, Cicero Nogueira dos Santos, Jason Krone, Bing Xiang
    - [Paper]
  - "Autoregressive Entity Retrieval", ICLR 2021
    - Nicola De Cao, Gautier Izacard, Sebastian Riedel, Fabio Petroni
    - [Paper]
  - "Structured Prediction as Translation between Augmented Natural Languages", ICLR 2021
    - Giovanni Paolini, Ben Athiwaratkun, Jason Krone, Jie Ma, Alessandro Achille, Rishita Anubhai, Cicero Nogueira dos Santos, Bing Xiang, Stefano Soatto
    - [Paper]
  - "Autoregressive Structured Prediction with Language Models", EMNLP 2022
    - Tianyu Liu, Yuchen Jiang, Nicholas Monath, Ryan Cotterell, Mrinmaya Sachan
    - [Paper]
This paradigm generates the indices of the words of interest in the input text directly, and encodes class labels as label indices. Since the output is strictly restricted, the model will not generate indices whose corresponding entities do not exist in the input text, except for relation labels.
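A toy sketch of index-based decoding (hypothetical output format, in the spirit of the pointer-based methods in this section): the model emits (start, end, label) index triples over the input tokens, so every predicted span is guaranteed to exist in the text.

```python
tokens = ["Steve", "Jobs", "co-founded", "Apple"]
labels = {0: "PERSON", 1: "ORG"}

# Decoder output as (start_index, end_index, label_id); spans index into
# the input, so the model cannot emit entities absent from the text.
decoded = [(0, 2, 0), (3, 4, 1)]

spans = [(" ".join(tokens[s:e]), labels[l]) for s, e, l in decoded]
assert spans == [("Steve Jobs", "PERSON"), ("Apple", "ORG")]
```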
- Pointer mechanism
  - "Effective Modeling of Encoder-Decoder Architecture for Joint Entity and Relation Extraction", AAAI 2020
    - Tapas Nayak, Hwee Tou Ng
    - [Paper]
  - "Don't Parse, Generate! A Sequence to Sequence Architecture for Task-Oriented Semantic Parsing", WWW 2020
    - Subendhu Rongali, Luca Soldaini, Emilio Monti, Wael Hamza
    - [Paper]
  - "A Unified Generative Framework for Various NER Subtasks", ACL 2021
    - Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, Xipeng Qiu
    - [Paper]
  - "A Unified Generative Framework for Aspect-based Sentiment Analysis", ACL 2021
    - Hang Yan, Junqi Dai, Tuo Ji, Xipeng Qiu, Zheng Zhang
    - [Paper]
- Pointer selection
  - "GRIT: Generative Role-filler Transformers for Document-level Event Entity Extraction", EACL 2021
    - Xinya Du, Alexander Rush, Claire Cardie
    - [Paper]
This paradigm refers to utilizing templates to define the appropriate order and relationships of the generated spans. As shown in the figure, the template is a text describing an event type, with blank argument-role placeholders added. The output sequences are sentences in which the blank placeholders are replaced by specific event arguments.
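A minimal sketch of recovering arguments from a filled template by aligning the generated sentence against the blank template (the `<arg1>`/`<arg2>` placeholder names and the regex alignment are our illustration; actual systems learn the generation end-to-end):

```python
import re

# Blank template with argument-role placeholders, and the model's output
# in which the placeholders have been replaced by concrete arguments.
template = "<arg1> co-founded <arg2> organization"
filled = "Steve Jobs co-founded Apple organization"

# Turn each <argN> placeholder into a named regex group, then align.
pattern = re.sub(r"<(arg\d+)>", r"(?P<\1>.+?)", template) + "$"
m = re.match(pattern, filled)

assert m.group("arg1") == "Steve Jobs"
assert m.group("arg2") == "Apple"
```

The template supplies label semantics in natural language, so the pre-trained decoder mostly needs to fill slots rather than learn an output grammar from scratch.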
- Template filling as generation
  - "COMET: Commonsense Transformers for Automatic Knowledge Graph Construction", ACL 2019
    - Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, Yejin Choi
    - [Paper]
  - "Document-Level Event Argument Extraction by Conditional Generation", NAACL 2021
    - Sha Li, Heng Ji, Jiawei Han
    - [Paper]
  - "Template Filling with Generative Transformers", NAACL 2021
    - Xinya Du, Alexander Rush, Claire Cardie
    - [Paper]
  - "ClarET: Pre-training a Correlation-Aware Context-To-Event Transformer for Event-Centric Generation and Classification", ACL 2022
    - Yucheng Zhou, Tao Shen, Xiubo Geng, Guodong Long, Daxin Jiang
    - [Paper]
- Prompt semantic guidance
  - "DEGREE: A Data-Efficient Generation-Based Event Extraction Model", NAACL 2022
    - I-Hung Hsu, Kuan-Hao Huang, Elizabeth Boschee, Scott Miller, Prem Natarajan, Kai-Wei Chang, Nanyun Peng
    - [Paper]
  - "Dynamic Prefix-Tuning for Generative Template-based Event Extraction", ACL 2022
    - Xiao Liu, Heyan Huang, Ge Shi, Bo Wang
    - [Paper]
  - "Prompt for Extraction? PAIE: Prompting Argument Interaction for Event Argument Extraction", ACL 2022
    - Yubo Ma, Zehao Wang, Yixin Cao, Mukai Li, Meiqi Chen, Kun Wang, Jing Shao
    - [Paper]
- Language-agnostic template
  - "Multilingual Generative Language Models for Zero-Shot Cross-Lingual Event Argument Extraction", ACL 2022
    - Kuan-Hao Huang, I-Hung Hsu, Prem Natarajan, Kai-Wei Chang, Nanyun Peng
    - [Paper]
The time for each paper is based on its first arXiv version (if one exists) or its estimated submission time.
If you find this repository useful for your research or work, we would really appreciate it if you starred this repository.