Stephen MacNeil
@Stephen_MacNeil
Followers: 399 · Following: 398 · Media: 11 · Statuses: 282
Assistant Professor @TempleCIS | YDC @WorldDesignOrg | Organizer @Design4SD | #HCI & #edu & #NLP | Tackling Philly's hardest problems via community-driven design
Philadelphia
Joined June 2010
It’s an incredible feeling to have one of my first advisees (and good friend) come to visit my lab. The students were inspired and I was so proud. Today was a great day 🤩
An amazing day! Kicked it off at @PennHCI meeting in the morning, then headed over to @TempleHCI to give a talk in the afternoon. Big thanks to @drewmikehead and @Stephen_MacNeil for having me!!
How are computing students shifting their help-seeking strategies in response to LLMs and generative AI tools (ChatGPT and GitHub Copilot)? Our upcoming #ACE2024 paper shares insights from surveys and interviews with computing students. https://t.co/uGuepOPKMu
SIGCSE discussions are heating up: LLMs threaten traditional assessment. To address this concern, some educators have adopted visual programming problems. In our new paper, GPT-4V solved 96.7% of visually represented problems. Is it time for #ungrading yet? https://t.co/AZckGQyuRO
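For readers wondering what that evaluation looks like in practice: each visually represented problem is rendered as an image, passed to a vision-capable model, and the returned code is checked against the problem's tests. Below is a minimal, hypothetical sketch of that loop, assuming the OpenAI Python SDK; the model name, prompt wording, and grading step are placeholders, not the paper's actual setup.

```python
# Hypothetical sketch: ask a vision-capable model to solve an image-based
# programming problem. Model name, prompt, and grading are placeholders.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def solve_visual_problem(image_path: str) -> str:
    """Send a screenshot of a visually represented problem to the model."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed vision-capable model; the paper used GPT-4V
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Solve the programming problem shown in this image. "
                         "Return only the final code."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# A study would then run the returned code against the problem's test cases
# to decide whether the answer counts as correct.
```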
Stop chatting with a black box! Swing by our #uist2023 demo about making LLM memory management more explainable and interactive!
Stop chatting to a black box! We developed Memory Sandbox: see, control, and steer how the conversational agent “sees” the conversation. Check out our demo at #UIST23. Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents https://t.co/Lv5sKEzs6r
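The idea, roughly, is to expose the conversation history as memory objects that users can inspect, edit, and hide before they are assembled into the model's context. The sketch below is a minimal illustration of that pattern, not the Memory Sandbox implementation; the class and method names are invented here.

```python
# Minimal illustration of sandbox-style memory management for a chat agent.
# Class and method names are invented for this sketch; this is not the
# Memory Sandbox codebase.
from dataclasses import dataclass, field

@dataclass
class MemoryObject:
    role: str              # "user" or "assistant"
    text: str
    visible: bool = True   # the user can toggle what the agent "sees"

@dataclass
class MemorySandbox:
    memories: list[MemoryObject] = field(default_factory=list)

    def add(self, role: str, text: str) -> None:
        self.memories.append(MemoryObject(role, text))

    def hide(self, index: int) -> None:
        """Keep the memory on screen but exclude it from the agent's context."""
        self.memories[index].visible = False

    def edit(self, index: int, new_text: str) -> None:
        """Let the user rewrite how a past turn is remembered."""
        self.memories[index].text = new_text

    def visible_context(self) -> list[dict]:
        """Assemble only the visible memories into the prompt for the model."""
        return [{"role": m.role, "content": m.text}
                for m in self.memories if m.visible]

# Usage: the user sees every MemoryObject, hides off-topic turns, and the
# next model call receives visible_context() instead of a hidden history.
sandbox = MemorySandbox()
sandbox.add("user", "Plan a trip to Kyoto.")
sandbox.add("assistant", "Sure! How many days will you stay?")
sandbox.add("user", "Actually, ignore that. Help me debug my parser.")
sandbox.hide(0)
sandbox.hide(1)
print(sandbox.visible_context())
```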
Interested in large language models and their potential applications in computing education? I’m looking for a (fully-funded) PhD student to study how novices use LLMs and how LLMs can be used to create educational resources @CSAalto. Find out more & apply
📢 My first co-authored demo paper explores giving users the ability to oversee and control how large language models, like #ChatGPT, perceive and retain conversational context. Fewer breakdowns, more steering. 📄 Read the paper here:
arxiv.org
Had the privilege of working with my wonderful colleague (and former student) Jason Ding on a paper that was awarded an honorable mention at C&C. Check it out, it is worth a read!
Wrapped up my paper presentation at @acm_cc just in time to catch my shuttle to work :) A truly enriching experience! If you are curious about the potential role of Large Language Models in creativity, do check out our video presentation -> https://t.co/wT4w1LTURy 😁
Our grandmother is trying to leave Ukraine. Please share info and resources about traveling from Odesa to Chișinău in DMs 🙏
To be clear, I did try to mitigate this bias as much as possible in my review, and it was a great paper.
I just included a positionality statement in my review! I’ve never seen such a statement in a review, but I thought it was important to be transparent about a bias I have against a specific design method. Would this be a useful criterion to require in future reviews? 🤔
It feels like we're at a critical moment for AI and civil society. There's a real possibility that the last 5+ years of (hard fought albeit still inadequate) improvements in responsible AI release practices will be obliterated.
Paid study opportunity: We are looking to chat with folks who share personal photos on online social media platforms. The focus of the interview will be on privacy-enhancing and anti-surveillance technologies for such photos. More here: https://t.co/QbD2hiCnb4 RTs appreciated!
We’re looking for summer undergrad research interns! Our REU Site on Pervasive Computing for Smart Health, Safety, & Well-being offers a 9-week research experience at Temple University in Philadelphia. Students receive $6,500 + housing. Apply by 1/31: https://t.co/ccE5o9A8To.
I am thrilled that our proposal to investigate AI-enabled communication devices for non-verbal speakers was funded! All 16 teams have such inspiring projects, it should make for an exciting NSF Convergence Accelerator cohort.
.@NSF is investing $12 million for 16 teams to develop use-inspired solutions to enhance the quality of life and employment opportunities for persons with disabilities. Details: https://t.co/Gt3R1eyZKh
What an honor to have our work featured by @DataSciNews! LLMs will likely have a big impact on education and our team is committed to ensuring that the impact is a positive one.
"Experiences from Using Code Explanations Generated by Large Language Models in a Web Software Development E-Book" Stephen MacNeil (@Stephen_MacNeil) and co-authors share their experiences generating multiple code explanation types using LLMs and integrating them 9/18
I am looking for PhD students for fall '23. If you are interested in the topic of content moderation or online harassment, apply to work with me at @RutgersCommInfo.
We are hiring at Tableau Research! Looking for early/fresh grads to come join our team! Areas include applied ML/NLP/AI, and HCI in the space of visual analytics and data visualization. https://t.co/iNnP029v2c
#Tableau #research
Along with a team of undergraduate researchers, we were able to generate 6 distinct explanation types: 1) line-by-line explanations, 2) high-level summaries, 3) common misconceptions, 4) analogies, 5) bug fixes with explanations, and 6) lists of the relevant programming concepts.
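In rough terms, each explanation type is just a different prompt template wrapped around the same code snippet. The sketch below illustrates that pattern, assuming the OpenAI Python SDK; the prompt wording and model are placeholders, not the prompts used in the e-book.

```python
# Illustrative sketch: one prompt template per explanation type, all wrapped
# around the same code snippet. Prompts and model name are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EXPLANATION_PROMPTS = {
    "line_by_line": "Explain the following code line by line.",
    "high_level_summary": "Give a short high-level summary of what this code does.",
    "common_misconceptions": "List misconceptions a novice might have about this code.",
    "analogy": "Explain this code using an everyday analogy.",
    "fix_and_explain_bugs": "Find any bugs in this code, fix them, and explain the fixes.",
    "relevant_concepts": "List the programming concepts needed to understand this code.",
}

def explain(code: str, kind: str) -> str:
    """Generate one explanation type for a given code snippet."""
    prompt = f"{EXPLANATION_PROMPTS[kind]}\n\n{code}"
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model, not the one used in the study
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

snippet = "for i in range(1, 11):\n    print(i * i)"
for kind in EXPLANATION_PROMPTS:
    print(f"--- {kind} ---")
    print(explain(snippet, kind))
```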