Pere-Lluís Huguet Cabot
@PereLluisHC
Followers
290
Following
578
Media
10
Statuses
257
Marie-Curie PhD working at @SapienzaNLP, prev. @Babelscape and @knowgraphs. Working on Information Extraction and Italian LLM, projects: REBEL, MinervaLLM, ...
Rome, Italy
Joined January 2021
I am very happy to present the multilingual expansion of REBEL at #ACL2023. This work has been a year-long effort to try and provide datasets and models that enable high-quality multilingual Relation Extraction on par with English.
📢Check out our new #ACL2023 paper! "REDFM: a Multilingual and Filtered Relation Extraction Dataset" 📄 https://t.co/RpLhoZiSGx
@PereLluisHC @SimoneTedeschi_ @NgongaAxel and @RNavigli
#ACL2023NLP #NLProc #RelationExtraction #MultilingualNLP @knowgraphs 🧵1/7
2
9
32
Automatic knowledge graph construction can be slow and expensive. Also, I find there's a lack of resources on how to build something principled (do you just stuff text into an LLM prompt?). That's why I love this blog by @tb_tomaz, which not only outlines the step-by-step process but also uses Relik, a
Learn how to build a knowledge graph without relying on expensive LLMs! 🧠💡 @tb_tomaz from @neo4j shows you how to use Relik, a framework for fast and lightweight information extraction models, to create knowledge graphs. You'll learn: ➡️ How to set up a pipeline for entity
5
64
301
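The pipeline the blog describes boils down to extracting (head, relation, tail) triples from text and assembling them into a graph. As a minimal, library-free sketch of that assembly step (the function and the toy triples are illustrative stand-ins, not Relik's actual API):

```python
from collections import defaultdict

def build_graph(triples):
    """Group (head, relation, tail) triples into an adjacency map.

    The triples here are hypothetical output of an upstream
    extraction step such as Relik's entity/relation models.
    """
    graph = defaultdict(list)
    for head, relation, tail in triples:
        graph[head].append((relation, tail))
    return dict(graph)

triples = [
    ("Rome", "capital_of", "Italy"),
    ("Rome", "located_in", "Lazio"),
    ("Lazio", "region_of", "Italy"),
]
kg = build_graph(triples)
print(kg["Rome"])  # [('capital_of', 'Italy'), ('located_in', 'Lazio')]
```

In practice the adjacency map would be loaded into a graph store such as Neo4j, but the grouping logic is the same.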
I'm in Bangkok for #ACL2024NLP 🇹🇭 Looking forward to presenting our work "Dissecting Biases in Relation Extraction: A Cross-dataset analysis on People's Gender and Origin" @genderbiasnlp Joint work w/ @marcostranisci @PereLluisHC @RNavigli 📄 https://t.co/JrG9svJugT
#NLProc 1/2
1
7
36
Check out our new work!
👀 Exciting News! 👀 Happy to announce our latest research paper, “Retrieve, Read and LinK: Fast and Accurate Entity Linking and Relation Extraction on an Academic Budget”, will be presented at #ACL2024! 🚀 Try it out! https://t.co/xJzHlNfEzA Thread below 👇
0
2
16
👀 Exciting News! 👀 Happy to announce our latest research paper, “Retrieve, Read and LinK: Fast and Accurate Entity Linking and Relation Extraction on an Academic Budget”, will be presented at #ACL2024! 🚀 Try it out! https://t.co/xJzHlNfEzA Thread below 👇
2
12
29
These headlines are 3 days apart. See, they can name the attacker without casting doubt.
328
30K
95K
ARR needs your help! We received 5800+ submissions for the June cycle, but with our current capacity, we can handle only half of these submissions. We can't start the review process without significant help. (1/2)
12
47
84
We are (finally) releasing the 🍷 FineWeb technical report! In it, we detail and explain every processing decision we took, and we also introduce our newest dataset: 📚 FineWeb-Edu, a (web only) subset of FW filtered for high educational content. Link: https://t.co/MRsc8Q5K9q
38
309
1K
The new interpretability paper from Anthropic is totally based. Feels like analyzing an alien life form. If you only read one 90-min-read paper today, it has to be this one https://t.co/9yecogVqJf
29
279
2K
GPU-Poor no more: super excited to officially release ZeroGPU in beta today. Congrats @victormustar & team for the release! In the past few months, the open-source AI community has been thriving. Not only Meta but also Apple, NVIDIA, Bytedance, Snowflake, Databricks, Microsoft,
67
226
1K
Just interviewed by @alessiojacona @Agenzia_Ansa to talk about #LLMs and the @SapienzaNLP effort to create the first #Italian pre-trained #LLMs, the #Minerva family. Team: @edoardo_barba, @ConiaSimone, @perelluisHC, @AndrewWyn1, @RiccardoRicOrl & me
2
8
20
Exciting strides in text summarization with LLMs 🚀but verifying their factual accuracy is still an open challenge 🤔 We introduce FENICE, a factuality-oriented metric for summarization with a strong focus on interpretability🔍 https://t.co/jjEI6lbxzG
#NLProc #LLMs #Factuality
2
10
20
🤯 Think adding nonsense to RAG systems is madness? Our new paper says otherwise! We found that including random documents boosts accuracy by over 30%, challenging old paradigms and showing the complexity of integrating retrieval w/ language generation. #RAGSystems #surprisingresults
4
10
56
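The setup behind this finding can be illustrated with a toy retrieval step that pads the top-k documents with randomly sampled distractors before they reach the generator. All names below are hypothetical, a sketch of the idea rather than the paper's actual experimental code:

```python
import random

def retrieve(query, corpus, k):
    """Toy keyword-overlap retriever (illustrative only)."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def build_context(query, corpus, k=2, n_random=1, seed=0):
    """Pad retrieved documents with random distractors, mirroring the
    paper's finding that unrelated documents can improve accuracy."""
    retrieved = retrieve(query, corpus, k)
    pool = [d for d in corpus if d not in retrieved]
    rng = random.Random(seed)
    distractors = rng.sample(pool, min(n_random, len(pool)))
    return retrieved + distractors

corpus = [
    "Rome is the capital of Italy",
    "Minerva is an Italian language model",
    "Bananas are rich in potassium",
    "The Colosseum is in Rome",
]
ctx = build_context("capital of Italy", corpus)
print(ctx)  # top-2 retrieved docs followed by one random distractor
```

The returned context would then be concatenated into the LLM prompt; the surprising part is that the distractor slot helps rather than hurts.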
New Anthropic Paper: Sleeper Agents. We trained LLMs to act secretly malicious. We found that, despite our best efforts at alignment training, deception still slipped through. https://t.co/mIl4aStR1F
119
554
3K
It's been extremely disappointing compared to the experience at ACL venues. For our paper, not a single reviewer acknowledged our responses. Did reviewers receive any reminders to engage, @iclr_conf?
0
0
7
I am seriously concerned about the quality and tone of many @iclr_conf reviews I have read in both my reviewer and author batches. I see - Reviews likely written by a language model - Dismissive remarks - Judgemental comments - Impolite tone - ... https://t.co/bbPGdxkZe5
4
7
79
If you are looking for a PhD position on knowledge graphs and machine learning, this might be an opportunity for you 👇
#Job: Open PhD position on #MachineLearning with multiple representations on knowledge graphs. Apply here: https://t.co/xUigu8amXD. Please retweet. #openscience #knowledgegraph @unipb
0
4
9