Liam

@liamhparker

Followers
129
Following
41
Media
10
Statuses
42

PhD student in theoretical physics @UCBerkeley supported by NSF GRFP and Researcher @PolymathicAI. Previously @Princeton.

New York, NY
Joined July 2020
@liamhparker
Liam
9 months
RT @MilesCranmer: 🧵 Could this be the ImageNet moment for scientific AI? Today with @PolymathicAI and others we're releasing two massive…
0
87
0
@liamhparker
Liam
11 months
RT @cosmo_shirley: Our internship program at Polymathic is open for opportunities from now through fall 2025! I believe our program provi…
0
45
0
@liamhparker
Liam
1 year
RT @cosmo_shirley: You heard all about AI accelerating simulations (maybe from me?), but do you know… How can AI tell you what is in the…
0
52
0
@liamhparker
Liam
1 year
RT @ChanghoonHahn: Excited to announce that our latest #SimBIG collaboration research has just been published in @NatureAstronomy 🔭✨! #Astr…
0
7
0
@liamhparker
Liam
1 year
Check out our recent work on simulation-based inference in galaxy clustering!
@NatureAstronomy
Nature Astronomy
1 year
By extracting non-Gaussian cosmological information on galaxy clustering at non-linear scales, a framework for cosmic inference (SimBIG) provides precise constraints for testing cosmological models. @ChanghoonHahn @cosmo_shirley @DavidSpergel et al.:
0
0
10
@liamhparker
Liam
1 year
RT @SiavashGolkar: SOTA models often use bidirectional transformers for non-NLP tasks, but did you know causal transformers can outperform t…
0
11
0
@liamhparker
Liam
1 year
RT @MilesCranmer: Some exciting @PolymathicAI news. We're expanding!! New Research Software Engineer positions opening in Cambridge UK,…
0
29
0
@liamhparker
Liam
1 year
RT @oharub: 🎉 Excited to introduce Gibbs Diffusion (GDiff), a new Bayesian blind denoising method with applications in image denoising and…
0
29
0
@liamhparker
Liam
1 year
RT @YulingYao: We have been using neural posterior/simulation-based inference (SBI) for scientific computing. There was one hole: you run 10…
0
7
0
@liamhparker
Liam
1 year
9/ This is joint work with my teammates @PolymathicAI, especially with Francois Lanusse, @SiavashGolkar, @LeopoldoSarra, @MilesCranmer, @cosmo_shirley, and many others!
1
0
4
@liamhparker
Liam
1 year
8/ The road for these advancements was paved by a lot of exciting work done in SSL for galaxies, especially by @GeorgeStein, @peter_melchior, and @mike_walmsley_.
1
0
4
@liamhparker
Liam
1 year
7/ Ultimately, AstroCLIP’s cross-modal contrastive pre-training produces a high-quality foundation model for galaxies, capable of performing downstream tasks without fine-tuning. We make available the model and code, enabling the community to build higher-level applications.
1
0
2
@liamhparker
Liam
1 year
6/ Morphology classification is another key task for understanding galaxy formation and evolution. AstroCLIP embeddings are informative of morphology and outperform embeddings produced by previous single-modal self-supervised models for galaxies.
1
0
4
@liamhparker
Liam
1 year
5/ AstroCLIP isn't just about redshift estimation. It also effectively embeds information on key galaxy properties like Star Formation Rate, Metallicity, Galaxy Age, and Stellar Mass using simple regression tools. Once again, k-NN outperforms dedicated supervised models.
1
0
4
@liamhparker
Liam
1 year
4/ We evaluated AstroCLIP's performance on photometric redshift estimation, a key inference task in astrophysics. Unlike traditional methods requiring specialized CNNs, AstroCLIP's embeddings are representative enough of the data that even k-NN achieves superior results.
1
0
6
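The k-NN regression described in tweets 4/ and 5/ can be sketched in a few lines. This is a hypothetical illustration, not the AstroCLIP code: `train_emb`, `train_z`, and `knn_redshift` are made-up names, and the random arrays stand in for precomputed embeddings and known redshifts.

```python
import numpy as np

def knn_redshift(train_emb, train_z, query_emb, k=5):
    """Predict a redshift for each query embedding by averaging the
    redshifts of its k nearest training embeddings (Euclidean distance)."""
    # pairwise distances, shape (n_query, n_train)
    d = np.linalg.norm(query_emb[:, None, :] - train_emb[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]  # indices of the k nearest neighbours
    return train_z[idx].mean(axis=1)    # average their redshifts

# toy demo with random stand-in embeddings
rng = np.random.default_rng(0)
train_emb = rng.normal(size=(100, 8))
train_z = rng.uniform(0.0, 1.0, size=100)
pred = knn_redshift(train_emb, train_z, train_emb[:3], k=1)
# with k=1, a training point's nearest neighbour is itself
print(np.allclose(pred, train_z[:3]))  # True
```

The point of the tweet is that no fitting happens at all here: if the embeddings are informative enough, even this lookup-and-average baseline is competitive with dedicated supervised CNNs.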
@liamhparker
Liam
1 year
3/ Our updated embedding scheme still aligns galaxy representations across modalities based on shared semantics, providing well-aligned retrievals. Check out some examples of retrieval below, or go to the web app linked above.
1
1
5
@liamhparker
Liam
1 year
2/ The newest AstroCLIP model benefits from a vision transformer backbone trained at scale (300M param.) on the DINOv2 framework, and can now match or outperform dedicated, supervised deep learning models on many downstream tasks without any additional finetuning or training.
1
0
4
@liamhparker
Liam
1 year
1/ AstroCLIP embeds and aligns both galaxy images and optical spectra into a single, physically-meaningful embedding space. AstroCLIP embeddings are sufficiently informative that even simple algorithms like k-NN can be used for a variety of accurate downstream applications.
1
1
3
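The cross-modal alignment in tweet 1/ is a CLIP-style contrastive objective. Below is a minimal numpy sketch of a symmetric InfoNCE loss over paired image/spectrum embeddings; it is an assumed illustration of the general technique, not the actual AstroCLIP training code, and `clip_loss` and its `temperature` default are made up for the example.

```python
import numpy as np

def clip_loss(img_emb, spec_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings:
    matched image/spectrum pairs (the diagonal) are pulled together,
    all other pairings in the batch are pushed apart."""
    # L2-normalise so dot products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    spec = spec_emb / np.linalg.norm(spec_emb, axis=1, keepdims=True)
    logits = img @ spec.T / temperature  # (batch, batch) similarity matrix
    # cross-entropy with the true pair (diagonal) as target, in both directions
    log_sm_rows = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_sm_cols = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    n = len(logits)
    return -(np.trace(log_sm_rows) + np.trace(log_sm_cols)) / (2 * n)

# sanity check: correctly paired embeddings score a lower loss than mispaired ones
rng = np.random.default_rng(0)
imgs = rng.normal(size=(16, 8))
print(clip_loss(imgs, imgs) < clip_loss(imgs, imgs[::-1]))  # True
```

Minimising this loss is what makes the shared embedding space "physically meaningful": a galaxy's image and its spectrum land near each other, which is why simple algorithms like k-NN work downstream.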