Wen Zhang
@wencolani
Followers
38
Following
26
Media
0
Statuses
18
Assistant professor at Zhejiang University. My research interests include knowledge graphs, graph computing, and knowledge reasoning.
China
Joined November 2018
David Silver really hits it out of the park in this podcast. The paper "Welcome to the Era of Experience" is here: https://t.co/Y6m4jLRjnh.
Human-generated data has fueled incredible AI progress, but what comes next? On the latest episode of our podcast, @FryRsquared and David Silver, VP of Reinforcement Learning, talk about how we could move from the era of relying on human data to one where AI could learn for
20
180
1K
Introducing ReCall, learning to Reason with Tool Call via RL.
- Multi-turn Reinforcement Learning
- No need for supervised data on tool use or reasoning steps
- Empowers LLMs to agentically use and combine arbitrary tools
Fully open-source! A work in progress and we are
1
52
207
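A minimal sketch of the multi-turn, outcome-rewarded tool-calling loop the ReCall announcement above describes. This is not the ReCall code: the `<tool>` tag format, the helper functions, and the exact-match reward are assumptions for illustration only.

```python
# Hedged sketch (not the ReCall implementation): one multi-turn rollout where the
# model may emit tool calls as JSON, and only the final answer is rewarded.
import json
import re
from dataclasses import dataclass, field

@dataclass
class Rollout:
    question: str
    turns: list = field(default_factory=list)   # (model_text, tool_result or None)
    reward: float = 0.0

def parse_tool_call(text):
    """Look for a <tool>{...}</tool> block; return the parsed call or None."""
    m = re.search(r"<tool>(.*?)</tool>", text, re.S)
    return json.loads(m.group(1)) if m else None

def rollout_episode(generate, tools, question, gold_answer, max_turns=4):
    """generate(context) -> model text; tools: dict of name -> callable(**args)."""
    context, episode = question, Rollout(question)
    for _ in range(max_turns):
        text = generate(context)
        call = parse_tool_call(text)
        if call is None:                          # no tool call: treat as the final answer
            episode.turns.append((text, None))
            break
        result = tools[call["name"]](**call.get("args", {}))
        episode.turns.append((text, result))
        context += f"{text}\n<result>{result}</result>\n"
    # Outcome-only reward: no supervised labels for tool choice or reasoning steps.
    final_text = episode.turns[-1][0] if episode.turns else ""
    episode.reward = float(gold_answer.lower() in final_text.lower())
    return episode
```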
Introducing ReSearch: Learning to Reason with Search for LLMs via Reinforcement Learning. An open-source project that combines RL and RAG for LLMs! Like Deepseek-R1-Zero and Deep Research, we start with pretrained models and use RL to empower them with the
5
55
348
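For the ReSearch announcement above, a hedged sketch of the RL side: several rollouts are sampled per question (each free to issue search calls mid-reasoning), scored only on the final answer, and the rewards are normalized within the group, GRPO-style. The reward function and the grouping scheme are assumptions, not the project's actual training code.

```python
# Hedged sketch: outcome rewards plus group-normalized advantages for sampled rollouts.
import statistics

def exact_match_reward(final_answer: str, gold: str) -> float:
    """1.0 if the rollout's final answer matches the gold answer, else 0.0."""
    return float(final_answer.strip().lower() == gold.strip().lower())

def group_advantages(rewards):
    """Normalize rewards within a group of rollouts for the same question."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0
    return [(r - mean) / std for r in rewards]

# Usage: sample several rollouts per question (each may call search mid-reasoning),
# reward only the final answer, and weight each rollout's log-probs by its advantage.
rewards = [exact_match_reward(a, "paris") for a in ["Paris", "London", "Paris", "Rome"]]
print(group_advantages(rewards))   # higher weight for rollouts that answered correctly
```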
We are thinking about the possibility of synthesizing instruction data for finetuning LLMs. In this #EMNLP2024 Findings work, we utilize the complex graph patterns in KGs to automatically generate a plan for a question, and use the planning data to finetune LLMs. This works well.
How can we improve LLMs' step-wise reasoning and planning ability? Our #EMNLP2024 paper proposes a framework that, echoing O1's multi-step reasoning, enhances LLMs by leveraging knowledge graphs (KGs) to synthesize step-by-step instructions. Just as chain-of-thought reasoning
0
0
1
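A toy illustration of the idea in the EMNLP 2024 Findings tweet above: walk a relation path in a small KG and emit a question, a step-by-step plan, and an answer as synthetic instruction data. The KG, the path format, and the templates are made up for the example; the paper's actual pipeline may differ.

```python
# Hedged illustration: turn a multi-hop KG path into a (question, plan, answer) record.
kg = {  # tiny toy KG: (head, relation) -> tail
    ("Alan Turing", "born_in"): "London",
    ("London", "capital_of"): "United Kingdom",
}

def synthesize_plan(start, relations):
    """Follow a relation path from `start` and record one plan step per hop."""
    steps, entity = [], start
    for i, rel in enumerate(relations, 1):
        nxt = kg[(entity, rel)]
        steps.append(f"Step {i}: find the entity linked to '{entity}' by '{rel}' -> {nxt}")
        entity = nxt
    question = f"Which entity do you reach from '{start}' via {' then '.join(relations)}?"
    return {"question": question, "plan": steps, "answer": entity}

print(synthesize_plan("Alan Turing", ["born_in", "capital_of"]))
```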
I like this work very much. We are trying to explore realistic settings for automatic knowledge graph completion. We also tried to use LLMs for the Triple Set Prediction (TSP) task. Empirical results show that TSP is not an easy task for LLMs. See
Start from Zero: Triple Set Prediction for Automatic Knowledge Graph Completion (KGC): In our #TKDE paper, we redefine the KGC task by introducing "Triple Set Level" completion. Unlike traditional methods that predict missing elements at the single-triple level, our approach
0
1
3
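A hedged sketch of the Triple Set Prediction setting from the TKDE work above: the input is only the observed triples, and the output is a set of new triples rather than answers to single-triple queries. The rule-like heuristic and the toy KG are assumptions used purely to show the interface.

```python
# Hedged sketch of the TSP interface: observed triples in, a predicted triple *set* out.
observed = {("a", "works_at", "lab1"), ("a", "coauthor", "b")}

def coauthors(triples, e):
    """Entities linked to `e` by the (symmetric) coauthor relation."""
    return {t for h, r, t in triples if r == "coauthor" and h == e} | \
           {h for h, r, t in triples if r == "coauthor" and t == e}

def predict_triple_set(triples):
    """Toy heuristic: coauthors tend to share workplaces, so copy works_at edges."""
    predicted = set()
    for h, r, t in list(triples):
        if r != "works_at":
            continue
        for c in coauthors(triples, h):
            if (c, "works_at", t) not in triples:
                predicted.add((c, "works_at", t))
    return predicted

print(predict_triple_set(observed))   # {('b', 'works_at', 'lab1')}
```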
Please check our work that will be published at NLPCC 2023 for more discussion: MACO: A Modality Adversarial and Contrastive Framework for Modality-missing Multi-modal Knowledge Graph Completion https://t.co/5aNmm81dCu
We often find that some of the modality data are missing in multi-modal KGs. The missing modality information undermines the model's performance during completion. We find that generating the missing modality features and using a cross-modal contrastive loss helps.
0
0
1
We often find that some of the modality data are missing in multi-modal KGs. The missing modality information undermines the model's performance during completion. We find that generating the missing modality features and using a cross-modal contrastive loss helps.
0
0
2
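A rough sketch of the contrastive half of the MACO idea above: generate a feature for a missing modality from the structural embedding and align it with the real features of another modality via an InfoNCE-style loss. The dimensions, temperature, and linear generator are assumptions; the adversarial component of MACO is not shown.

```python
# Hedged sketch: cross-modal contrastive alignment of generated vs. real modality features.
import torch
import torch.nn.functional as F

def cross_modal_contrastive(gen_feats, real_feats, temperature=0.07):
    """gen_feats, real_feats: (batch, dim); matching rows are positive pairs."""
    gen = F.normalize(gen_feats, dim=-1)
    real = F.normalize(real_feats, dim=-1)
    logits = gen @ real.t() / temperature            # pairwise similarities
    targets = torch.arange(gen.size(0))              # i-th generated matches i-th real
    return F.cross_entropy(logits, targets)

generator = torch.nn.Linear(64, 128)                 # structural emb -> visual-like feature
struct_emb = torch.randn(8, 64)                      # entities whose images are missing
real_visual = torch.randn(8, 128)                    # real visual features of paired entities
loss = cross_modal_contrastive(generator(struct_emb), real_visual)
loss.backward()                                      # gradient flows into the generator
```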
#IJCAI2023: our comprehensive survey paper on "Knowledge Extrapolation", the capability of handling unseen entities or new relations in KGs. "Generalizing to Unseen Elements: A Survey on Knowledge Extrapolation for Knowledge Graphs" https://t.co/t96E445Ooo
0
1
3
Looking forward to meeting you at #IJCAI2023, and welcome to our tutorial - the 2nd edition of the K-ZSL tutorial. Check below for details.
#IJCAI2023 #Tutorials The website of the 2nd edition of our K-ZSL tutorial (Knowledge-aware Zero-shot Learning) is here: https://t.co/xe2bGx6qG6. Presenters: @GengYuxia @ZhuoCs me @wencolani @jpansw. #KnowledgeGraph #ZeroShotLearning. Looking forward to meeting you!
0
1
3
Do you want to know how to pre-train a knowledge graph model on a KG and apply it to other tasks supported by different KGs in a uniform way? Check our work KGTransformer, accepted by #TheWebConf 2023. https://t.co/ohVLlCAQ0z
arxiv.org
Knowledge graphs (KG) are essential background knowledge providers in many tasks. When designing models for KG-related tasks, one of the key tasks is to devise the Knowledge Representation and...
0
0
1
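A generic sketch of the "uniform" pre-training idea behind the KGTransformer tweet above, not the model itself: serialize subgraph triples into one token sequence and encode them with a shared Transformer, so the same pre-trained encoder can be prompted with triples from different KGs and tasks. The tokenization, the sizes, and the (omitted) task heads are assumptions.

```python
# Hedged sketch: a shared Transformer encoder over serialized triple sequences.
import torch
import torch.nn as nn

class TripleSequenceEncoder(nn.Module):
    def __init__(self, vocab_size, dim=128, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, token_ids):                 # (batch, seq_len) of triple tokens
        return self.encoder(self.embed(token_ids))

# Usage: flatten a few (h, r, t) triples into one token sequence and encode them;
# a task-specific head (classification, link prediction, QA) would read the outputs.
encoder = TripleSequenceEncoder(vocab_size=1000)
tokens = torch.randint(0, 1000, (2, 12))          # 2 subgraphs, 4 triples each (h r t ...)
print(encoder(tokens).shape)                      # torch.Size([2, 12, 128])
```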
Our work "Entity-Agnostic Representation Learning for Parameter-Efficient Knowledge Graph Embedding" accepted by #AAAI23 is available online ;)
1
0
0
github.com
[Paper][AAAI2023] Analogical Inference Enhanced Knowledge Graph Embedding - zjukg/AnKGE
Our paper titled "Analogical Inference Enhanced Knowledge Graph Embedding", accepted by #AAAI23, is available online. In this work, we propose AnKGE, an enhanced KGE framework that enables KGEs with analogical inference capability. Check our paper: https://t.co/H019HFIqV8
0
0
1
We developed #NeuralKG, a toolkit for diverse representation learning of knowledge graphs. It includes conventional KGEs, GNN-based KGEs, and rule-based KGEs. Yesterday we added SE-GNN, a recently proposed GNN-based KGE method. Check it on GitHub.
github.com
[Tool] For Knowledge Graph Representation Learning - zjukg/NeuralKG
0
0
1
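As a concrete example of the "conventional KGE" family that NeuralKG covers, here is a minimal TransE-style scorer. This is not NeuralKG's API; it only illustrates the h + r ≈ t scoring idea behind such models.

```python
# Hedged sketch of a conventional KGE: TransE scores triples by translation distance.
import torch
import torch.nn as nn

class TransE(nn.Module):
    def __init__(self, n_entities, n_relations, dim=100):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)

    def score(self, h, r, t):
        """Lower distance means a more plausible triple."""
        return (self.ent(h) + self.rel(r) - self.ent(t)).norm(p=1, dim=-1)

model = TransE(n_entities=5, n_relations=2)
h, r, t = torch.tensor([0]), torch.tensor([1]), torch.tensor([3])
print(model.score(h, r, t))   # a margin-based or cross-entropy loss would train this
```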
Our paper titled "Analogical Inference Enhanced Knowledge Graph Embedding", accepted by #AAAI23, is available online. In this work, we propose AnKGE, an enhanced KGE framework that enables KGEs with analogical inference capability. Check our paper: https://t.co/H019HFIqV8
0
0
2
Our KG-based ZSL work "Disentangled Ontology Embedding for Zero-shot" https://t.co/ZpfpXHnggk accepted by KDD'22, by @GengYuxia @ChenJiaoyan1 @wencolani @ZhuoChe56641253 @jpansw @ChenHuajun etc. #KnowledgeGraph @kdd_news #KDD2022
1
3
13
~2 days left to submit an idea for the hybrid 21st #iswc_conf #iswc2022 Workshops & Tutorials! This could be either sharing a new technology or having great minds come together for intense scientific exchange on a specific topic in the field! Conference Website: https://t.co/v5ntpVAPqU
0
5
6
Join us and share your research with the community through the track that best fits your work! Joint CfP: Resource Track, In-Use Track, and Research Track papers. #iswc_conf #iswc_2022 Conference Website: https://t.co/v5ntpVAPqU
0
14
15
A new survey and perspective paper on "Knowledge Graph Reasoning with Logics and Embeddings: Survey and Perspective", by @wencolani @ChenJiaoyan1 @jpansw @ChenHuajun etc. https://t.co/iWNzkk1niY
#KnowledgeGraph #NeuralSymbolic #AI #DeepLearning #Reasoning
3
10
31