
Chenghua Lin
@chenghua_lin
587 Followers · 2K Following · 39 Media · 298 Statuses
Professor @OfficialUoM | Chair of @siggen_acl Board | affiliated with @manchester_nlp | working on #NLProc, Natural Language Generation
Joined March 2012
Thanks for sharing our paper! Human evaluators respond differently to the same guidelines, so why assume a single evaluation prompt works for all LLMs? We tackle this by automatically generating highly effective, model-specific evaluation prompts using inversion learning. Check it out.
A new method for robust and efficient LLM-based NLG evaluation: an inversion learning approach that learns effective reverse mappings from model outputs back to their input instructions, enabling the automatic generation of highly effective, model-specific evaluation prompts.
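Not the paper's actual method — just a toy Python illustration of the inversion idea: given an observed model output, pick the candidate instruction whose forward pass best reproduces it. `forward_model` here is a hypothetical deterministic stub standing in for an LLM.

```python
# Toy sketch of "inversion": recover the instruction that best explains an
# observed model output. Illustrative only; the real method trains a
# reverse model rather than searching over candidates.
from difflib import SequenceMatcher

def forward_model(instruction: str, text: str) -> str:
    # Hypothetical stand-in for an LLM evaluator: returns the instruction's
    # leading verb plus the word count of the text as a "judgement".
    verb = instruction.split()[0].lower()
    return f"{verb}:{len(text.split())}"

def invert(output: str, candidates: list[str], text: str) -> str:
    # Select the candidate instruction whose forward pass most closely
    # matches the observed output (string similarity as a crude proxy).
    def score(instr: str) -> float:
        return SequenceMatcher(None, forward_model(instr, text), output).ratio()
    return max(candidates, key=score)

candidates = [
    "Rate the fluency of the text",
    "Summarise the text",
    "Translate the text",
]
sample = "A short sample sentence."
observed = forward_model("Rate the fluency of the text", sample)
best = invert(observed, candidates, sample)
```

The search-over-candidates framing is only a sketch of the reverse mapping the paper learns end-to-end.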
Check out our SciProdLLM workshop at #AACL2025 #AI4Science.
🚨 Call for Papers: SciProdLLM 2025 explores human-centered use of LLMs in science, from idea generation to peer review. Submit your work on LLM-assisted research workflows, evaluation, and co-authorship. Deadline: Sept 29. #AI4Science #AACL2025 @aaclmeeting
RT @Manchester_NLP: Delighted to share the #ACL2025 and TACL papers (10 in total) from the @manchester_nlp group! Come chat with our staff….
Check out Tyler's work on tongue twister generation, published as a long paper in the Computational Linguistics Journal.
Happy to announce our journal paper on tongue twisters, Train and Constrain (TwistList 2.0), has now been officially published in @CompLingJournal! (Thanks to @chenghua_lin and Chen Tang) @sltcdt #nlp #nlproc #nlg.
🎶 Don’t miss our Workshop on LLMs for Music & Audio (LLM4Music) at #ISMIR2025 in Korea 🇰🇷! Check out the submission details below 👇
🎶📢 Excited to announce the 1st Workshop on LLMs for Music & Audio (LLM4Music) at #ISMIR2025! 📍 KAIST, Daejeon, Korea 🗓️ Sept 26, 2025 🧠 Exploring LLMs for music, audio, & multimodal creativity 📝 Submit by Aug 10 🔗 Info: #AI #MusicTech #LLM4MA
Another proud moment: Shun (@SWangMB) has successfully defended his thesis “Interpretable Computational Metaphor Processing”! He produced an impressive body of work in *ACL during his PhD. Big thanks to co-supervisor Po Yang, and to panel members Rob Gaizauskas and Scott Piao🍻🥳
RT @hanhua_hong: [1/n] Can we generate highly effective, model-specific prompts via inversion learning? Delighted to introduce our new pape….
RT @Manchester_NLP: If you are attending #NAACL2025 in Albuquerque, check out the papers from @manchester_nlp !🤠🍻🍹
RT @siweiwu7: [1/n] Delighted to share our new work "COIG-P: A High-Quality and Large-Scale Chinese Preference Dataset for Alignment with H….
Very proud that @TomasGoldsack passed his PhD viva with minor corrections today! 🎉 His PhD focuses on generating lay summaries of complex content for non-expert audiences. Many thanks to @carolscarton (co-supervisor) and to the panel members @dianamaynard and @stuart_e_middle!
Check out our work on ContrastScore: a new contrastive metric that evaluates generated text by leveraging structured disagreement between two language models of different capacities, achieving not only more accurate and robust evaluation but also greater efficiency.
1/n Delighted to share the release of our new preprint "ContrastScore: Towards Higher Quality, Less Biased, More Efficient Evaluation Metrics with Contrastive Evaluation". 📄 Paper: 💻 Code:
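Not the paper's exact formulation — a minimal Python sketch of contrastive evaluation between two models of different capacities, assuming each model exposes per-token probabilities for the candidate text. The difference in their log-likelihoods serves as the evaluation signal.

```python
import math

def sequence_logprob(token_probs: list[float]) -> float:
    # Log-likelihood of a candidate text under a model, given its
    # per-token probabilities.
    return sum(math.log(p) for p in token_probs)

def contrast_score(large_probs: list[float], small_probs: list[float]) -> float:
    # Contrastive signal: how much more the stronger model prefers the
    # text than the weaker one. Disagreement between the two models,
    # rather than a single model's raw likelihood, drives the score.
    return sequence_logprob(large_probs) - sequence_logprob(small_probs)

# Large model confident, small model not: strong positive signal.
good = contrast_score([0.9, 0.8], [0.4, 0.3])
# Both models agree: no contrastive signal.
flat = contrast_score([0.5, 0.4], [0.5, 0.4])
```

The `large_probs` / `small_probs` inputs are hypothetical placeholders; ContrastScore's actual scoring may weight or structure the disagreement differently.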
An absolute pleasure to host Prof. Neil Lawrence (@lawrennd) for a fascinating (and mind-bending) departmental seminar on “Information Engines: Exploring Connections Between Intelligence and Thermodynamics” — where entropy meets intelligence
RT @EhudReiter: A preprint of my book Natural Language Generation is now available on Arxiv (. Use the PDF, the HTM….
RT @SheffieldNLP: 🚀 Reminder! 🚀. The deadline for the Postdoc in Uncertainty Quantification for Foundation Models at @sheffieldNLP with @Na….
With over 80 participating teams in the 1st and 2nd editions of our shared task, we are now introducing an additional *multi-modal* lay summarisation task in the 3rd edition for generating lay radiology reports. Check it out!
Introducing the 3rd edition of BioLaySumm shared task hosted at the BioNLP Workshop @ #ACL2025NLP ! BioLaySumm is a shared task that focuses on generating easy-to-understand lay summaries for complex biomedical texts. Building on the success of the first
NLG coming to Vietnam and Southeast Asia for the very first time – really looking forward to the event 🎉🇻🇳.
Very excited to announce that #INLG2025 will be in Hanoi, Vietnam🇻🇳 from Oct 29 to Nov 2, 2025, co-organized by Vietnam National University (VNU), Vietnam Advanced Institute of Mathematics (VAIM), and the Association for Vietnamese Language and Speech Processing (VLSP).
RT @bohao_yang: 1/n Excited to announce our new paper "Does Table Source Matter? Improving Scientific Multimodal Table Understanding and Re….