TRAILS
@trails_ai
Followers
601
Following
340
Media
78
Statuses
352
The NSF-NIST Institute for Trustworthy AI in Law & Society focuses on broad participation in AI, deep technical research, and informed governance of AI systems.
Joined May 2023
What does it take to advance AI literacy? Our experts—Virginia Byrne (@VirginiaLByrne) from @MorganStateU, David Broniatowski (@Broniatowski) from @gwuengineering, Hal Daumé III (@haldaume3) from @UofMaryland, and Brandeis Marshall (@csdoctorsister) of @DataedX_—explain how
6
8
8
A clever initiative from researchers in @UMDscience and @UMIACS, where humans team up with AI technology in quizbowl competitions to improve collaboration between humans and AI. #AIMaryland
today.umd.edu
UMD Researchers Revamp Quizbowl Competition to Gauge Trust and Collaboration Between People and Machines
1
6
9
In this week’s edition of #TrustTRAILS, Susan Ariel Aaronson (@AaronsonSusan) explains the important relationship between data and AI, and the need for more accurate, complete and representative datasets that are used in AI-infused systems. Aaronson is a research professor
0
3
3
“If you look at who is building AI technology in the world today, it tends to be a very narrow slice of the world’s population," says Hal Daumé III (@haldaume3), a professor of computer science @UofMaryland and director of @trails_ai. In this edition of #TrustTRAILS, Daumé
0
3
5
✈️ GW Engineering is helping NASA shape the future of flight. Prof. Peng Wei & our students built an AI-driven safety system, tested on drones, to keep skies safe as new tech like drones & air taxis take off. 🚀 #GWEngineering #NASA Read more:
engineering.gwu.edu
MAE Professor Peng Wei led the GW-NASA System-Wide Safety (SWS) collaboration to develop an in-time, learning-based aviation safety management system (ILASMS).
0
1
1
A team of @UofMaryland researchers led by Jordan Boyd-Graber (@boydgraber) is using a revamped #quizbowlformat to explore how #AI and humans can best work together. Their goal? Improve the evaluation of AI-infused question and answering systems by comparing how models and human
0
1
8
How can we best promote trust in AI? And how can we make sure that people are aware of the risks so that they can make informed decisions? In this week’s edition of #TrustTRAILS, faculty experts David Broniatowski (@Broniatowski), Susan Aaronson (@AaronsonSusan), Hal Daumé III
0
1
2
How do people make sense of what AI is doing, and how does that impact their trust of AI? In this week’s edition of #TrustTRAILS, David Broniatowski (@Broniatowski), Hal Daumé III (@haldaume3) and Brandeis Marshall (@csdoctorsister) explain how TRAILS is focused on developing,
0
1
4
Our new guardian model lets you create LLM guardrails using natural text. This little 8B model efficiently checks in real time whether chatbots comply with bespoke moderation policies. It's not often that academics beats industry models, but DynaGuard stacks up well!
Guardrails with custom polices are hard for models trained on safety and harm-related datasets. But what if you trained a guardian model on arbitrary rules? Introducing DynaGuard, a guardian model for custom policies: https://t.co/oPWOZstRUQ
1
8
32
In this week’s edition of #TrustTRAILS, Virginia Byrne (@VirginiaLByrne) from @MorganStateU is examining AI in the lives of youth, teachers, parents and all types of educators. She discusses both the positive and potential negative aspects of AI technologies currently in use.
0
1
1
In a @marylandmatters article discussing how Maryland schools must adapt to AI, TRAILS' Jing Liu (@DrJingLiu) emphasizes the need for fast, evidence-based research to guide policies and ensure students use AI responsibly. Read more: https://t.co/KeSNVBkRhU
0
0
0
What is AI really? Hal Daumé III and David Broniatowski from the Institute for Trustworthy AI in Law & Society (TRAILS) explain the different types of AI, the current challenges with those systems, and how TRAILS is addressing them #TrustTRAILS
0
3
2
TRAILS' Katie Shilton was interviewed for a GBH news article on how educators are leaning in on the use of AI in the classroom. In her own classes, reports @KirkCarapezza, Shilton asks students to use AI in ways that align with their values and goals. AI tutors, for example,
0
1
2
Thrilled to share that our paper, “Gaming Tool Preferences in Agentic LLMs” was accepted to EMNLP 2025: https://t.co/NAwmzNPpNa Tools make agentic AI powerful, but today many models choose them based on descriptions: Add a single assertive cue to a tool description, e.g., “This
0
7
24
How good (or bad) is GPT-5 — and does it matter for you? I’ve been seeing a lot of posts lately debating the quality of GPT-5’s responses. I tried a few of the examples people mentioned. Here’s one from my own experiment (screenshot attached): I asked GPT-5 to solve a simple
7
7
35
Morgan's Momentum cannot be stopped! As the nation's third-largest HBCU progresses toward becoming a Carnegie-classified very high research (R1) university, the @BaltimoreBanner reports on how the National Treasure is getting there. Read the article:
thebanner.com
In 2020, Morgan State brought in just $17.2 million in research funding and graduated 71 doctoral students.
1
36
82