
Miltos Allamanis 🇪🇺
@miltos1
1K Followers · 2K Following · 6 Media · 240 Statuses
Researching deep learning for generating and understanding programs. Research Scientist @GoogleAI. Also at @[email protected]. (Opinions are my own.)
Joined March 2008
RT @RandomlyWalking: Our team has been working hard to harness the power of AI to make software more secure ✨🔐. Today we are excited to sha…
RT @christodorescu: Super exciting work on systematic testing of coding LLMs, with @AshishHoodaCS, @miltos1, Aaron Wilson (Google), and Kas….
openreview.net
Large Language Models' success in text generation has also made them better at code generation and coding tasks. While a lot of work has demonstrated their remarkable performance on tasks such as...
RT @AnsongNi: Excited to share our work at @GoogleDeepMind! We propose Naturalized Execution Tuning (NExT), a self-training method that dr…
RT @pengchengyin: A fundamental skill of human developers is to mentally simulate and reason about code execution in natural language. Can….
RT @XiaoyuL47181564: Our work “AdaptivePaste: Intelligent Copy-Paste in IDE” will be presented in @FSEconf at 2pm. Kudos to @asvyatko @milt….
RT @BigCodeProject: Introducing the BigCode Evaluation Harness for Code LLMs: Inspired by the lm-evaluation-harne…
RT @RandomlyWalking: Honoured to receive this award for Most Influential Paper (10-year) for #MSR2023! Hard to believe it's been a decade…
This is an interesting result! But does "filtering-for-stars" tell us something about the quality/quantity of the included code, or about the benchmark (HumanEval & co.)? (Probably something about both.)
In addition to the standard near-deduplication and heuristics pipeline, we ran 4 filtering experiments: GitHub stars, tokenizer fertility, comment-to-code ratio, and more near-deduplication. Filtering for GitHub stars hurts performance, while comments and near-dedup help!
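As a concrete illustration of what a comment-to-code-ratio filter can look like, here is a minimal Python sketch. It is an assumption of how such a filter might be implemented, not the pipeline from the quoted work; `MIN_COMMENT_RATIO` and `keep_file` are hypothetical names and values.

```python
import io
import tokenize

def comment_to_code_ratio(source: str) -> float:
    """Fraction of comment tokens among substantive tokens in a Python file."""
    comments = code = 0
    try:
        for tok in tokenize.generate_tokens(io.StringIO(source).readline):
            if tok.type == tokenize.COMMENT:
                comments += 1
            elif tok.type not in (tokenize.NL, tokenize.NEWLINE, tokenize.INDENT,
                                  tokenize.DEDENT, tokenize.ENDMARKER):
                code += 1
    except (tokenize.TokenizeError, SyntaxError):
        return 0.0  # treat unparsable files as comment-free
    return comments / max(code, 1)

# Illustrative threshold (an assumption, not a value from the paper):
MIN_COMMENT_RATIO = 0.01

def keep_file(source: str) -> bool:
    """Filter predicate: keep files with at least some natural-language comments."""
    return comment_to_code_ratio(source) >= MIN_COMMENT_RATIO
```

One design choice in this sketch: unparsable files score 0, so a threshold filter drops them together with comment-free code.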
RT @DishaShrivasta9: Submissions are now open at:
openreview.net
Welcome to the OpenReview homepage for ICLR 2023 Workshop DL4C
RT @DishaShrivasta9: I'm on the job market for industrial research and postdoc positions! My research focuses on developing deep learning…
🐶CoRGi #KDD2022. Often, we forget that graph representations are (lossy) projections of a domain into a graph. In our paper (@jyscardioid et al.), we present a simple yet effective way to incorporate rich node content into GNN message passing. 📄
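As a rough sketch of that idea, here is a toy numpy implementation of content-aware message passing. This is not CoRGi's actual architecture; all names, shapes, and the attention scheme are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def content_aware_messages(h, content, edges, W):
    """One round of message passing in which each message attends over the
    sender's content-token embeddings rather than using only its node state.

    h:       [num_nodes, d] array of node states
    content: list of [n_i, d] arrays, one per node (e.g. embedded text tokens)
    edges:   iterable of (src, dst) pairs
    W:       [d, d] message transform
    """
    msgs = np.zeros_like(h)
    for src, dst in edges:
        tokens = content[src]              # [n_src, d] sender's content
        attn = softmax(tokens @ h[dst])    # receiver's state as the query
        context = attn @ tokens            # [d] content summary for this edge
        msgs[dst] += (h[src] + context) @ W
    return msgs

# Tiny usage example with random data (purely illustrative):
rng = np.random.default_rng(0)
d, n = 8, 3
h = rng.normal(size=(n, d))
content = [rng.normal(size=(rng.integers(2, 5), d)) for _ in range(n)]
edges = [(0, 1), (1, 2), (2, 0)]
W = rng.normal(size=(d, d)) / np.sqrt(d)
h_new = h + content_aware_messages(h, content, edges, W)  # residual update
```

Each message augments the sender's (lossy) node state with an attention-weighted summary of its content tokens, which is the general flavor of bringing rich node content into message passing.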
RT @tscholak: Hi there! I will host the Deep Learning for Code panel at the @DL4Code workshop at @iclr_conf on Friday at 1pm EDT. If you ha….
RT @DL4Code: Don't miss talks from our outstanding lineup of speakers @miltos1 @jacobandreas @gneubig David Choi @MillionInt @xinyun_chen_….
dl4c.github.io
We are pleased to announce that after three successful editions at ICLR'22, '23, and '25, the 4th Deep Learning for Code (DL4C) workshop **Deep Learning For Code in the Agentic Era** is coming to...
RT @eaftandilian: 👋 We're #hiring for GitHub Copilot! We're looking for managers and individual contributors for the IDE and Model Improvem….