Max Callaghan Profile
Max Callaghan

@MaxCallaghan5

Followers: 327 · Following: 258 · Media: 28 · Statuses: 184

Postdoctoral researcher @PIK_Climate on natural language processing and climate science.

Joined February 2020
@MaxCallaghan5
Max Callaghan
5 years
Our work using machine learning to map out a "topography" of climate change research, with Jan Minx and @piersforster.
@MaxCallaghan5
Max Callaghan
5 months
RT @PIK_Climate: 🆕 PIK-led research work in @ClimateActionSN generates the map of research on #ClimatePolicy from 85,000 individual studies….
@MaxCallaghan5
Max Callaghan
6 months
Incidentally, if you are interested in working with us on how we can responsibly use ML to assist evidence synthesis, we have an open position at PIK. Today is the last day the position is open, but please get in touch ASAP if you need extra time to apply.
@MaxCallaghan5
Max Callaghan
6 months
There is a lot more to do on improving and evaluating stopping criteria. We set out a blueprint for some of this in the paper, but it requires lots of engagement and further work, some of which is already happening - e.g. in DESTINY @wellcometrust.
@MaxCallaghan5
Max Callaghan
6 months
To help users navigate this landscape, we argue that organisations like @cochranecollab and @CampbellReviews need to update their methodological guidance to help users distinguish between well-justified and ill-justified stopping criteria 12/N.
@MaxCallaghan5
Max Callaghan
6 months
We also argue that software providers (@Covidence @EPPIReviewer @asreviewlab @rayyanapp @evidencepartner @PICO_Portal) need to provide better guidance on how their ML-prioritisation tools can be used responsibly (i.e. with appropriate stopping criteria). 11/N.
@MaxCallaghan5
Max Callaghan
6 months
In the paper, we argue that we should prefer the former type of criteria to the latter. I don't think this should be controversial, but again and again when I have argued this, I have met resistance. If you disagree, please tell me why you think we don't need statistics here! 10/N.
@MaxCallaghan5
Max Callaghan
6 months
Other stopping criteria do not do this, but rely on heuristics, like stopping after 50/100/200 consecutive irrelevant records. 9/N.
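For illustration, a heuristic rule of this kind amounts to a few lines of code and makes no statistical statement about what might still be missed. A hypothetical sketch (the function name and default threshold are illustrative assumptions, not any specific tool's rule):

```python
# Hypothetical sketch of a consecutive-irrelevant heuristic, not any specific tool's rule.
def consecutive_irrelevant_rule(labels, threshold=100):
    """labels: human judgements (1 = relevant, 0 = irrelevant) in screening order."""
    run = 0
    for label in labels:
        run = run + 1 if label == 0 else 0  # count the current run of irrelevant records
        if run >= threshold:
            return True   # the heuristic says stop here
    return False          # keep screening
```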
@MaxCallaghan5
Max Callaghan
6 months
Some stopping criteria make transparent assumptions and use appropriate statistics to communicate the risk of missing relevant studies (like the one we developed 4 years ago; other promising alternatives are also available :)) 8/N.
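As a simplified illustration of what a criterion like this can look like (a sketch in the spirit of the approach, not the exact published method): after the prioritised phase, screen a random sample of the remaining records and use a hypergeometric test to bound the risk that recall is still below a target. The function name, parameters and numbers below are assumptions made for illustration.

```python
# Hypothetical sketch of a statistical stopping criterion: bound the risk that
# recall is still below a target after screening a random sample of the remainder.
from math import floor
from scipy.stats import hypergeom

def p_missing_too_many(r_found, k_in_sample, n_sample, n_remaining, target_recall=0.95):
    """p-value for the null hypothesis 'recall < target_recall'.

    r_found      relevant records found during the prioritised phase
    k_in_sample  relevant records found in a random sample of the remainder
    n_sample     size of that random sample
    n_remaining  unscreened records before the sample was drawn
    """
    # Smallest number of still-missing relevant records consistent with the null.
    min_missing = floor(r_found * (1 - target_recall) / target_recall) + 1
    # Probability of seeing k_in_sample or fewer relevant records in the sample
    # if at least min_missing relevant records remained unscreened.
    return hypergeom.cdf(k_in_sample, n_remaining, min_missing, n_sample)

# Illustrative numbers: 400 relevant found, 1 relevant in a random sample of
# 1,000 out of 20,000 remaining records.
p = p_missing_too_many(r_found=400, k_in_sample=1, n_sample=1000, n_remaining=20000)
if p < 0.05:
    print(f"p = {p:.3f}: stop; recall >= 95% at roughly 95% confidence")
else:
    print(f"p = {p:.3f}: keep screening")
```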
@MaxCallaghan5
Max Callaghan
6 months
This is unrelated to how fancy our model is. Whenever we use an ML-generated prediction, we need ways to manage and communicate the uncertainty that comes with relying on that prediction. This is a *necessary condition* for the *responsible* use of AI/ML 7/N.
@MaxCallaghan5
Max Callaghan
6 months
Stopping criteria offer ways to *estimate* an appropriate time to stop screening, managing the risk of missing relevant studies while hopefully minimising the time spent screening irrelevant studies. We can only ever estimate this, because we don't have all the relevant info 6/N.
@MaxCallaghan5
Max Callaghan
6 months
This means we can stop screening before we have seen all the potentially relevant documents. But to do this, and actually save some work, we need to know when to stop. This is where stopping criteria come in 😎 5/N.
@MaxCallaghan5
Max Callaghan
6 months
These products employ ML-prioritised (or, if we want to sound fancy, AI-prioritised) screening: we screen some records by hand, use these to train a model to predict the relevance of the remaining records, screen those by hand in descending order of predicted relevance, then retrain 4/N
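Roughly what that loop looks like: a minimal sketch assuming a scikit-learn TF-IDF + logistic-regression model, a `records` list of title/abstract strings, and a hypothetical `label_by_hand` function standing in for the human screener. It is not the implementation of any particular tool.

```python
# Hypothetical sketch of ML-prioritised screening, not any specific tool's implementation.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def prioritised_screening(records, label_by_hand, seed_size=200, batch_size=100):
    X = TfidfVectorizer(stop_words="english").fit_transform(records)
    labelled = {}  # record index -> human label (1 = relevant, 0 = irrelevant)

    # 1. Screen an initial random sample by hand.
    rng = np.random.default_rng(0)
    for idx in rng.choice(len(records), size=seed_size, replace=False):
        labelled[int(idx)] = label_by_hand(int(idx))

    while len(labelled) < len(records):
        # 2. Train a model on everything screened so far (needs both classes present).
        seen = sorted(labelled)
        clf = LogisticRegression(max_iter=1000).fit(X[seen], [labelled[i] for i in seen])

        # 3. Rank the unscreened records by predicted relevance and screen the top batch.
        remaining = [i for i in range(len(records)) if i not in labelled]
        scores = clf.predict_proba(X[remaining])[:, 1]
        for idx in np.argsort(-scores)[:batch_size]:
            labelled[remaining[idx]] = label_by_hand(remaining[idx])
        # 4. Retrain on the next pass; a stopping criterion (see the rest of the
        #    thread) decides when to break out instead of screening everything.
    return labelled
```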
@MaxCallaghan5
Max Callaghan
6 months
There is also lots of great software that lets scientists benefit from AI/ML for screening without needing to program: @Covidence @EPPIReviewer @asreviewlab @rayyanapp @evidencepartner @PICO_Portal. 3/N.
@MaxCallaghan5
Max Callaghan
6 months
There has been so much work done on ML for screening in systematic reviews, and the vast majority of this work applies fancier and fancier models (the latest example being LLMs) to promise ever greater work savings 2/N.
@MaxCallaghan5
Max Callaghan
6 months
While I was away on parental leave, our paper was published on the urgent need for well-justified stopping criteria when using ML to speed up screening in systematic reviews: 1/N
@MaxCallaghan5
Max Callaghan
9 months
RT @ESRIDublin: Today, we have published a new research bulletin titled 'The impact of planning and regulatory delays for energy infrastruc….
@MaxCallaghan5
Max Callaghan
11 months
RT @PeteOlusoga: This isn't going where you think it is. 🧵. One of the most annoying things about children is that they want you to stop wha….
@MaxCallaghan5
Max Callaghan
1 year
Note that these are results from the @guardian this morning with 647 seats declared.
@MaxCallaghan5
Max Callaghan
1 year
Obvious caveats about the differences within parties categorised here as Left of Center, but these are all parties that are clear(ish) in their commitment to net zero. Suggests to me limited gains from a continued Tory focus on net zero costs and car culture wars.
@MaxCallaghan5
Max Callaghan
1 year
Looking at overall vote share rather than constituencies, these left of center parties picked up nearly 60% of the vote, compared to just under 40% for Cons+Reform.