Linus Ericsson

@sillinuss

Followers: 144
Following: 88
Media: 5
Statuses: 37

Postdoc researcher at the University of Edinburgh

Edinburgh, Scotland
Joined October 2013
@sillinuss
Linus Ericsson
2 years
Check out our new work on NAS search spaces! We have created einspace, a new expressive search space that unifies many architectural families, including convnets, transformers and mlp-only architectures. paper: https://t.co/OINqiaYo8y project page: https://t.co/hEvDjIKhsf
@sillinuss
Linus Ericsson
2 years
Thanks to Elliot, Steven and Antreas in particular for forming the early stages of the idea and Shay for help formulating the CFG!
@sillinuss
Linus Ericsson
2 years
This work was done at the University of Edinburgh with collaborators Miguel Espinosa, @CYang51746374, @AntreasAntonio, @AmosStorkey, Shay B. Cohen, @steve_mcdonagh and Elliot J. Crowley. @SchoolOfEng_UoE , @InfAtEd
@sillinuss
Linus Ericsson
2 years
In our experiments we find new cool architectures as well as improvements on existing ones across a range of diverse datasets from the Unseen NAS suite. Here is an example of the wild (and high-performing) things einspace can generate. code: https://t.co/L91ikzuPn4
@sillinuss
Linus Ericsson
2 years
Our space is based on a parameterised probabilistic context-free grammar. Through its recursively defined production rules, it's able to generate architectures that are arbitrarily wide and deep, containing branching and routing operations in addition to parameterised layers.
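The recursive production rules described above can be illustrated with a toy probabilistic context-free grammar. This is a minimal sketch, not the actual einspace grammar: the nonterminals, terminal operators, productions and probabilities below are all invented for illustration, but the mechanism — sampling productions by probability and recursing until only terminals remain — is the same idea, including recursion for arbitrary depth and a branching rule for routing.

```python
import random

# Toy probabilistic CFG over architecture descriptions (illustrative only;
# the real einspace grammar and its probabilities are far richer).
# Each nonterminal maps to a list of (production, probability) pairs.
GRAMMAR = {
    "NET": [(["MODULE"], 0.4),
            (["MODULE", "NET"], 0.6)],  # recursion -> arbitrarily deep nets
    "MODULE": [(["conv3x3"], 0.3),
               (["attention"], 0.2),
               (["mlp"], 0.2),
               # branching/routing: two sub-networks in parallel
               (["branch(", "NET", "|", "NET", ")"], 0.3)],
}

def sample(symbol="NET", max_depth=6, depth=0):
    """Expand `symbol` by sampling productions according to their probabilities."""
    if symbol not in GRAMMAR:       # terminal: a concrete layer or routing token
        return [symbol]
    if depth >= max_depth:          # bound the recursion for this sketch
        return ["mlp"]
    rules, probs = zip(*GRAMMAR[symbol])
    rule = random.choices(rules, weights=probs, k=1)[0]
    tokens = []
    for s in rule:
        tokens.extend(sample(s, max_depth, depth + 1))
    return tokens

random.seed(0)
print(" ".join(sample()))
```

Because the `NET` rule can expand into `MODULE NET`, sampled architectures can grow to any width and depth, while the `branch(...)` production mixes routing structure in with parameterised layers — the property the tweet highlights.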
@RamanDutt4
Raman Dutt
2 years
🚨 MemControl: Mitigating Memorization in Medical Diffusion Models via Automated Parameter Selection A new strategy to mitigate memorization in Diffusion models Arxiv: https://t.co/EIGrSOk8DL Work done with @SnchzPedro_ @OBohdal @STsaftaris @tmh31 @BioMedAI_CDT 🧵👇
@automl_conf
AutoML_conf
2 years
I'm happy to announce the first competition of the #AutoML conference 2024 ( https://t.co/WNvxJORDVX). Neural architecture search #NAS is one of the most well-known fields of AutoML and has enabled fantastic achievements in the last few years.
@RamanDutt4
Raman Dutt
2 years
🚨FairTune: Optimizing PEFT for Fairness in Medical Image Analysis A new framework to finetune your large vision models that improves downstream fairness. Accepted in #ICLR2024 ✨ With: @OBohdal @STsaftaris @tmh31 CC: @vivnat @alvarezvalle @fepegar_ @BoWang87 @BioMedAI_CDT
@tmh31
Timothy Hospedales
2 years
Excited to have been part of DemoFusion, bringing UHD generation to SDXL on your desktop with no training! With @RuoyiDu @yizhe_song @DL_Chang Project: https://t.co/ygOnvdSJtN, paper: https://t.co/OR77mOvHjD #GenerativeAI
@tomsherborne
Tom Sherborne
2 years
🚨New TACL paper 🚨 Can we use explicit latent variable alignment for cross-lingual transfer? Minotaur uses optimal transport for explicit alignment across languages. w/ @tomhosking, Mirella Lapata. To be presented @emnlpmeeting + @mrl2023 in-person 🇸🇬 https://t.co/PdoDPEIP97
@CianEastwood
Cian Eastwood
2 years
📢New work on Self-Supervised Representation Learning📢 🎯Rather than using data augmentations to induce invariance, we use them to separate/disentangle. ➡️Improves performance* on downstream tasks, since one task's "style" may be another's "content". https://t.co/MIOH1QcHYX
@sillinuss
Linus Ericsson
2 years
Is domain adaptation ready for the use of automatic machine learning methods? I’m presenting our paper “Better Practices for Domain Adaptation” at @automl_conf in poster sessions today 16:00 and tomorrow 10:45, and giving a best paper talk today at 17:30! #AutoML2023
@sighellan
Sigrid Passano Hellan
2 years
Finding yourself repeatedly training your ML model because you’ve collected more data? Not sure what to do with the hyperparameters? We investigated for you! Check out the paper https://t.co/D4AOSdK1g4 with the wonderful Huibin Shen, @FrancoisAubet, David Salinas and @kleiaaro
@sighellan
Sigrid Passano Hellan
2 years
Interested in Bayesian optimisation and how it can be applied against climate change? Check out our new short survey. We look at material discovery, wind farm layout planning, optimal renewable control and environmental monitoring. For each we suggest a benchmark to get started.
@RamanDutt4
Raman Dutt
3 years
🚨Parameter Efficient Fine-Tuning has been well researched for NLP, vision, and cross-modal tasks. So why should MedAI be left behind? Presenting the first evaluation on PEFT for medical AI - https://t.co/tYIh7DLNN7 16 PEFT methods, 5 datasets including a text-to-image task 🔥
@sighellan
Sigrid Passano Hellan
3 years
Just one week till our @NeurIPSConf social on ML and climate change! We’ll have icebreaker bingo, roundtable discussions and a friendly atmosphere.