UKRI CDT in Natural Language Processing

@Edin_CDT_NLP

Followers: 1K · Following: 312 · Media: 18 · Statuses: 277

We bring together researchers in NLP, speech, linguistics, cognitive science, and design informatics from across the University of Edinburgh.

Edinburgh
Joined October 2019
@Edin_CDT_NLP
UKRI CDT in Natural Language Processing
4 months
🥳🌟 Bonus photo grab on their way to collect well-deserved degrees! Congrats from all at @Edin_CDT_NLP to Drs Conklin, Winther, Carter, Hosking and Lindemann, as well as, in absentia, Drs Chi and Rubavicius 🎓🙌🥳 @InfAtEd
1 reply · 0 reposts · 8 likes
@Amrkeleg
عمرو قلج Amr Keleg
5 months
TL;DR: We kindly invite you to participate in our survey, running until 9 July, on Arabizi (sometimes referred to as Arabish or Franco-Arabic): https://t.co/yJ90lFvxMQ @Amrkeleg Ahmed Amine @taha_yssne Imane Guellil @Chadi_Helwe @nedjmaou (1/3) 🧵
4 replies · 13 reposts · 17 likes
@Edin_CDT_NLP
UKRI CDT in Natural Language Processing
5 months
📢 CDT NLP @InfAtEd student Eddie Ungless has created a survey on ethics resources. Anyone in the UK who does research with LLMs, in academia or industry, whether building models or using them as tools, is eligible. It takes <10 mins, with a chance to win a £100 voucher. Survey:
0 replies · 0 reposts · 1 like
@Edin_CDT_NLP
UKRI CDT in Natural Language Processing
11 months
🏆🥳🙌 Congratulations to (l-r) Dr Burchell @very_laurie, Dr Cardenas and Dr Moghe @nikita_moghe on their recent graduation from the UKRI CDT in NLP @InfAtEd! All the very best in your respective careers! 🌠🏆
1 reply · 7 reposts · 31 likes
@tomhosking
Tom Hosking
11 months
This paper is finally published in TACL! Go check it out: https://t.co/x0pgkOnuvB
direct.mit.edu
Abstract. We propose a method for unsupervised abstractive opinion summarization, that combines the attributability and scalability of extractive approaches with the coherence and fluency of Large...
@tomhosking
Tom Hosking
1 year
📣 New paper! 📣 Hierarchical Indexing for Retrieval-Augmented Opinion Summarization (to appear in TACL) Discrete latent variable models are scalable and attributable, but LLMs are wayyy more fluent/coherent. Can we get the best of both worlds? (yes) 👀 https://t.co/kK5OhqTwlw
1 reply · 3 reposts · 20 likes
@black_in_ai
Black in AI
1 year
If you have an MS or PhD in the #AI field, share your expertise, insights, and guidance with the next generation of AI leaders by volunteering with our Emerging Leaders in AI program! 🔗Register here: https://t.co/e8g09xvlrK Questions? Email academic@blackinai.org
0 replies · 14 reposts · 34 likes
@StephanieDroop
Stephanie Droop
1 year
I'm trying to get a feel for what sectors or companies cognitive and computational psychology grads work in after their PhD. If you are or know someone in that category, please could we have a half-hour chat?
0 replies · 3 reposts · 3 likes
@WiMLworkshop
WiML
1 year
Ready to boost your ML/AI career or share your expertise? Join our Women in Machine Learning Mentoring Program! 🤖✨ Sign up now! Get guidance from experts and advance your career. 👩‍🏫 Mentors: Empower the next generation of ML/AI researchers. 🔗 Apply by Sept 15: Register here
0 replies · 11 reposts · 26 likes
@Edin_CDT_NLP
UKRI CDT in Natural Language Processing
1 year
🏆 CONGRATULATIONS @Amrkeleg et al.!! 🙌 @InfAtEd
@Amrkeleg
عمرو قلج Amr Keleg
1 year
After having a great time presenting our work, receiving an outstanding paper award made it even better. Special thanks to Sharon Goldwater for all her efforts and help, and to @Walid_Magdy for his continuous mentorship! 🎉🎉 📜 https://t.co/bR04ynyseb #ACL2024NLP @Edin_CDT_NLP
0 replies · 0 reposts · 10 likes
@tomhosking
Tom Hosking
1 year
This way, we get the best of both worlds! The encoder/indexer can focus on learning a high-quality index, and the LLM deals with the generation. HIRO learns a higher-quality embedding space than previous work, AND generates higher-quality summaries! See the paper for many more experiments.
1 reply · 1 repost · 3 likes
@tomhosking
Tom Hosking
1 year
We propose HIRO - our method learns a discrete hierarchical index over sentences, uses the index to 'retrieve' clusters of sentences containing related and popular opinions, and passes these clusters to an LLM to do the generation work.
1 reply · 1 repost · 5 likes
@tomhosking
Tom Hosking
1 year
📣 New paper! 📣 Hierarchical Indexing for Retrieval-Augmented Opinion Summarization (to appear in TACL) Discrete latent variable models are scalable and attributable, but LLMs are wayyy more fluent/coherent. Can we get the best of both worlds? (yes) 👀 https://t.co/kK5OhqTwlw
arxiv.org
We propose a method for unsupervised abstractive opinion summarization, that combines the attributability and scalability of extractive approaches with the coherence and fluency of Large Language...
2 replies · 10 reposts · 51 likes
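The HIRO thread above describes a three-stage pipeline: learn a discrete hierarchical index over review sentences, retrieve clusters of related and popular opinions from that index, and hand the clusters to an LLM for generation. Below is a minimal sketch of that flow under toy assumptions; every name in it (encode_path, llm_generate, the hash-based bucketing standing in for the learned latent space) is hypothetical, not the paper's actual code.

```python
from collections import defaultdict

# Hypothetical sketch only: encode_path, build_index, retrieve_popular_clusters
# and llm_generate are illustrative stand-ins, not the paper's code or API.

def encode_path(sentence, depth=2):
    """Map a sentence to a discrete hierarchical path.
    HIRO learns this encoder; here a hash of the leading word stands in."""
    h = abs(hash(sentence.lower().split()[0]))
    return tuple((h >> (4 * i)) % 8 for i in range(depth))

def build_index(sentences):
    """Group review sentences by their path: the 'hierarchical index'."""
    index = defaultdict(list)
    for s in sentences:
        index[encode_path(s)].append(s)
    return index

def retrieve_popular_clusters(index, k=2):
    """Largest clusters as a proxy for 'related and popular opinions'."""
    return sorted(index.values(), key=len, reverse=True)[:k]

def llm_generate(cluster):
    """Stand-in for the LLM generation step; a real system would prompt an
    LLM to abstractively summarise the retrieved cluster."""
    return f"{len(cluster)} reviewers said: " + " / ".join(cluster)

reviews = [
    "Battery life is great.",
    "Battery lasts all day.",
    "Battery easily survives a workday.",
    "Screen is too dim outdoors.",
    "Screen brightness disappoints outside.",
]
for cluster in retrieve_popular_clusters(build_index(reviews)):
    print(llm_generate(cluster))
```

Even in this toy form, the division of labour the thread argues for is visible: the index decides *what* to say (which opinion clusters are popular enough to surface), while the generator only decides *how* to say it.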
@uililo1
Oli Danyi Liu
1 year
Happy to share that our paper won the @cogsci_soc computational modeling award for perception and action! #CogSci2024 @Edin_CDT_NLP
@uililo1
Oli Danyi Liu
1 year
Paper with @larryniven4, Naomi Feldman, and Sharon Goldwater to appear in CogSci 2024: "A predictive learning model can simulate temporal dynamics and context effects found in neural representations of continuous speech" https://t.co/Nz4y1Rhrsh (1/7)
4 replies · 8 reposts · 55 likes
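The paper's central claim is mechanistic: a model trained only to predict upcoming input develops context-dependent internal representations resembling neural responses to speech. Below is a toy illustration of that training objective under invented assumptions: a linear next-frame predictor on a synthetic signal, not the paper's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(2000)
signal = np.sin(0.05 * t) + 0.1 * rng.standard_normal(t.size)  # toy "speech"

context = 10           # frames of context available to the predictor
W = np.zeros(context)  # weights of a linear next-frame predictor
lr = 0.01

for _ in range(5000):
    i = rng.integers(context, signal.size)
    x = signal[i - context:i]      # recent past
    err = W @ x - signal[i]        # prediction error on the next frame
    W -= lr * err * x              # gradient step on squared error

# The learned W is a temporal filter: its response to a context window is the
# kind of context-sensitive representation the paper compares to neural data.
print("predicted next frame:", W @ signal[-context:])
```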
@Edin_CDT_NLP
UKRI CDT in Natural Language Processing
1 year
The calm before the storm: the launch of the Joint Conference between the UKRI CDTs in NLP & @sltcdt! Thanks to @colemanhaley22, Nick Ferguson + team (NLP & SLT) for organising the two-day event - and WELCOME to all SLT students/staff! @InfAtEd
0 replies · 5 reposts · 16 likes
@Guillemram
Guillem Ramírez
2 years
Using GPT-4 but the calls are expensive? Distilling your past queries into a student model may help. Introducing: 'Cache & Distil: Optimising API Calls to Large Language Models'
Tweet card summary image
arxiv.org
Large-scale deployment of generative AI tools often depends on costly API calls to a Large Language Model (LLM) to fulfil user queries. To curtail the frequency of these calls, one can employ a...
1 reply · 19 reposts · 38 likes
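The tweet's premise, cutting API spend by distilling past queries into a student model, suggests a simple control loop. The sketch below is a hypothetical illustration under toy assumptions: call_teacher_api, Student, and the confidence gate are invented stand-ins, and the paper itself studies which caching and distillation policies work best.

```python
# Hypothetical sketch of the cache-and-distil idea, not the paper's actual
# implementation: serve a query from an exact-match cache or a confident
# student model when possible, call the costly teacher API otherwise, and
# keep (query, answer) pairs to retrain the student.

cache = {}
pending_pairs = []

def call_teacher_api(query):
    """Stand-in for an expensive LLM API call (e.g. GPT-4)."""
    return f"teacher answer to: {query!r}"

class Student:
    """Toy student that only answers queries it was distilled on."""
    def __init__(self):
        self.memory = {}

    def predict(self, query):
        # Returns (answer, confidence); a real system would use a trained
        # model plus an uncertainty estimate here.
        answer = self.memory.get(query)
        return answer, (1.0 if answer is not None else 0.0)

    def fit(self, pairs):
        self.memory.update(pairs)

student = Student()

def answer(query, threshold=0.9):
    if query in cache:                    # 1. exact-match cache hit: free
        return cache[query]
    pred, conf = student.predict(query)   # 2. confident student: cheap
    if conf >= threshold:
        return pred
    result = call_teacher_api(query)      # 3. fall back to the teacher
    cache[query] = result
    pending_pairs.append((query, result))
    if len(pending_pairs) >= 2:           # 4. periodically distil
        student.fit(dict(pending_pairs))
        pending_pairs.clear()
    return result

for q in ["what is NLP?", "define distillation", "what is NLP?"]:
    print(answer(q))
```

The design question the paper poses lives in step 4: how often to retrain the student and when to trust it is a policy choice, and the abstract snippet above indicates the work evaluates such policies rather than fixing one.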
@Edin_CDT_NLP
UKRI CDT in Natural Language Processing
2 years
🗣️ Well done @very_laurie & co, whose research is now being used by Wikimedia to improve their language detection: https://t.co/7Nt9bMEYSa Great outcome! 👏👍 @Edin_CDT_NLP @InfAtEd
0 replies · 0 reposts · 10 likes
@Amrkeleg
عمرو قلج Amr Keleg
2 years
I cannot express how grateful I am for having our paper “ALDi: Quantifying the Arabic Level of Dialectness of Text” accepted to #EMNLP2023! Special thanks to my supervisors (@Walid_Magdy and Sharon Goldwater) at the University of Edinburgh @EdinburghNLP @Edin_CDT_NLP 🧵 (1/6)
9 replies · 10 reposts · 120 likes
@jparag123
Parag Jain
2 years
😀 Happy to share that work done with my advisor @mlapata on "Conversational Semantic Parsing using Dynamic Context Graphs" has been accepted to the #EMNLP2023 main conference. Code and the updated paper will be released soon. @EdinburghNLP @Edin_CDT_NLP @emnlpmeeting
1 reply · 5 reposts · 55 likes
@Edin_CDT_NLP
UKRI CDT in Natural Language Processing
2 years
Nice one @p_nawrot! 👏
@p_nawrot
Piotr Nawrot
2 years
No Train No Gain: Revisiting Efficient Training Algorithms For Transformer-based Language Models (https://t.co/jHBmI2jlXw) accepted to @NeurIPSConf! I'm very proud of this work :) Big congrats to @jeankaddour, @oscar__key, @PMinervini, and Matt J. Kusner!
0 replies · 1 repost · 2 likes