Fenia Christopoulou (@fenchri)
326 Followers · 625 Following · 19 Media · 110 Statuses
Member of Engineering (Applied Research) @poolsideai · Ex @Huawei · PhD @OfficialUoM @Manchester_NLP · BSc-MSc @ecentua
London, UK · Joined April 2012
✨ Excited to be at @NeurIPSConf in San Diego next week with the team at @poolsideai ⛱️ Booth #913! If you’re building, researching, or thinking about the next wave of AI systems, especially around RL and agentic workflows, I’d love to connect. Ping me if you’re around! ✨
Better late than never: finally, the code is available 🙂 💻 https://t.co/bsgZsbq5Oh
Happy to share our work on "Text2Code Generation with Modality-relative Pre-training" w/ @gcsanity & @glampouras_NLP, accepted at #EACL2024! 🎉 We propose to treat code and natural language as different modalities. 📜 https://t.co/fOYsUARpve 💻coming soon (pending int. review)
Research Scientist (permanent) positions just opened in our NLP team in London - Huawei Noah's Ark Lab! Looking for experienced researchers to help us tackle some interesting (and persistent) questions :) Take a look and apply if interested: https://t.co/Q4JajEMFLQ
Delighted to present our #naacl2024 paper, #HumanRankEval: Automatic Evaluation of #LMs as Conversational Assistants with @glampouras_NLP and @iiacobacNLP providing fast, reliable and private evaluation of instruction-tuned #LLMs. A truly new eval paradigm, no #gpt4-as-a-judge :)
Check out our latest work!
🚀 Excited to share our new pre-print: "Human-like Episodic Memory for Infinite Context LLMs"! We introduce EM-LLM, a novel approach integrating cognitive science insights into LLMs for vastly extended context processing: https://t.co/oTqlQwJ7qV What we did: · 📊 We treat LLMs'
So happy to be back to MCR for this talk! Thanks again for the invitation!
@chenghua_lin @csmcr @UoMSciEng After a short break, Dr Fenia Christopoulou (@fenchri), Research Scientist at @Huawei Noah’s Ark Lab, presents on Natural and Programming Language Models! 🦾 #ADSAI2024 @nactem_unimcr
Looking to grow our NLP team in London - Huawei Noah's Ark Lab! We have Research Scientist (permanent) and Engineer (contractor) positions to conduct academic and applied research in NLP and ML. Details below: https://t.co/dDxE97BgRl
https://t.co/LGN9snl3yb
huaweiuk.teamtailor.com
About Huawei Research and Development UK Limited Founded in 1987, Huawei is a leading global provider of information and communications technology (ICT) infrastructure and smart devices...
If you are in Malta 🇲🇹 (hi!👋), come join me today at the Generation session (Marie Louise 2 - Radisson Blu Hotel) 14:00 - 15:30 local time, where I will present our work "Text-to-Code Generation with Modality-relative Pre-training"! 🎉
Happy to share our work on "Text2Code Generation with Modality-relative Pre-training" w/ @gcsanity & @glampouras_NLP, accepted at #EACL2024! 🎉 We propose to treat code and natural language as different modalities. 📜 https://t.co/fOYsUARpve 💻coming soon (pending int. review)
If you are attending @eaclmeeting, join the #UncertaiNLP workshop! I will give my keynote discussing uncertainty quantification & evaluation and —spoiler alert— conformal prediction. If you are interested in uncertainty (and if not, maybe you should be!), #UncertaiNLP is the place to be!
Join us this Friday at the #UncertaiNLP workshop First keynote will be given by Chrysoula Zerva (IST, Lisbon) on "Uncertainty in NLP: Quantification, interpretation and evaluation" Location: Bastion 2 room of the Corinthia Program: https://t.co/P3ewwXfjkF
Many thanks to all the wonderful reviewers, area chairs & senior area chairs that support our reviewing process. Here's a special shout out to the *best reviewers* for October cycle. https://t.co/wbQoMireaV
@ReviewAcl #NLProc
By analysing embeddings of code-related words, we find (as expected) that in the "code" space similar operations cluster together, while in the "NL" space they are further apart, reflecting their natural-language semantics (e.g. open vs. close).
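The analysis above can be sketched as a cosine-similarity comparison between the two modality spaces. This is a toy illustration with made-up 2-D vectors (the real analysis uses the model's learned embeddings, which are not reproduced here):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical learned embeddings (toy values) for "open" and "close"
# in each modality space after modality-relative training.
code_space = {"open": np.array([1.0, 0.1]), "close": np.array([0.9, 0.2])}
nl_space   = {"open": np.array([1.0, 0.0]), "close": np.array([-0.8, 0.6])}

# In the code space, paired operations stay close together...
sim_code = cosine(code_space["open"], code_space["close"])
# ...while in the NL space their opposing semantics push them apart.
sim_nl = cosine(nl_space["open"], nl_space["close"])
assert sim_code > sim_nl
```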
This is realised via two strategies: partial separation (separating only the embeddings of programming-language keywords) and full separation (separating all embeddings).
We posit a simple idea: code keywords have strictly defined semantics, hence they need to be treated as a different *modality* from natural language. We start from a pre-trained CodeLM and associate a distinct embedding with each token depending on its modality, focusing on Python.
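A minimal sketch of the modality-relative embedding idea, using a toy vocabulary and NumPy arrays (the vocabulary, dimensions, and `lookup` helper are illustrative; the paper's actual implementation is not shown here). Each token gets a separate embedding row per modality, initialised from the same pre-trained vector so the two copies can diverge during continued pre-training:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary; "for" and "open" occur both in Python code and in docstrings.
vocab = ["for", "open", "close", "the", "loop"]
dim = 8

# Stand-in for pre-trained CodeLM embeddings.
pretrained = {tok: rng.normal(size=dim) for tok in vocab}

# Full separation: every token gets one embedding row per modality,
# both initialised from the shared pre-trained vector.
emb = {
    "code": {tok: pretrained[tok].copy() for tok in vocab},
    "nl":   {tok: pretrained[tok].copy() for tok in vocab},
}

def lookup(token, modality):
    """Return the modality-relative embedding for a token."""
    return emb[modality][token]

# Partial separation would duplicate rows only for PL keywords
# ("for", "open", ...), keeping a single shared row for everything else.
v_code = lookup("for", "code")
v_nl = lookup("for", "nl")
# Before any further training the two copies are identical;
# training on modality-tagged data lets them specialise.
assert np.allclose(v_code, v_nl)
```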
Happy to share our work on "Text2Code Generation with Modality-relative Pre-training" w/ @gcsanity & @glampouras_NLP, accepted at #EACL2024! 🎉 We propose to treat code and natural language as different modalities. 📜 https://t.co/fOYsUARpve 💻coming soon (pending int. review)
Very happy to be co-organizing this workshop and shared task with the fine folks at @TCD and @AdaptCentre! Please help us spread the word about our research track and shared task: Paper deadline: Dec 18th ARR commitment deadline: Jan 17th Shared task deadline: Jan 20th
Pleased to share that I will be co-organising the SCI-CHAT workshop with the theme "Simulation of Conversational Intelligence in Chat" at EACL 2024. More details at: https://t.co/QdWXBVdfkh
#EACL #AI #ML #NLP
There is a beautiful story that just happened in AI, so let me share it for a lighter-tone weekend post among all the doom stories in our AI field this week. It's a story of people on three continents building and sharing, in the open, a new small, efficient, and state-of-the-art AI