ARC_Guide Profile Banner
Starc Institute Profile
Starc Institute

@ARC_Guide

Followers
54
Following
172
Media
5
Statuses
128

Starc Institute: A boundaryless academy. Follow us for REAL discussion of papers (no hype) and technical blogs and more. https://t.co/mu3ZxzqLnW

Joined June 2025
Don't wanna be here? Send us removal request.
@ARC_Guide
Starc Institute
19 hours
Today’s highlight of STARC paper discussion: DRIP: Defending Prompt Injection via De-instruction Training and Residual Fusion Model Architecture. https://t.co/pm6GHC1syS The paper proposes a new training-time defense that aims to teach large language models to separate
Tweet card summary image
arxiv.org
Large language models (LLMs) have demonstrated impressive instruction-following capabilities. However, these capabilities also expose models to prompt injection attacks, where maliciously crafted...
0
0
3
@ARC_Guide
Starc Institute
2 days
Today's highlight of STARC paper discussion: Are Language Models Sufficient Reasoners? A Perspective from Logic Programming https://t.co/5V4pE4hTEr The paper examines reasoning efficiency through a precise lens: distinguishing necessary from unnecessary deduction. Current
Tweet card summary image
arxiv.org
Modern language models (LMs) exhibit strong deductive reasoning capabilities, yet standard evaluations emphasize correctness while overlooking a key aspect of human-like reasoning: efficiency. In...
0
0
5
@jiqizhixin
机器之心 JIQIZHIXIN
5 days
Sakana AI is building artificial life and they can evolve! Petri Dish Neural Cellular Automata (PD-NCA) let multiple NCA agents learn and adapt during simulation, not just after training. Each cell updates its own parameters via gradient descent, turning morphogenesis into a
10
32
200
@ARC_Guide
Starc Institute
5 days
From STARC discussion: e1: Learning Adaptive Control of Reasoning Effort https://t.co/KWZpvb0bJl The work explores how to let users dynamically trade off reasoning depth against compute cost. Instead of forcing a fixed token budget, the authors introduce 🚀Adaptive Effort
Tweet card summary image
arxiv.org
Increasing the thinking budget of AI models can significantly improve accuracy, but not all questions warrant the same amount of reasoning. Users may prefer to allocate different amounts of...
0
0
3
@ARC_Guide
Starc Institute
6 days
From STARC discussion group: Two recent papers tackle the same challenge from different angles—how to make language agents actually learn from experience rather than restart every task from scratch. 🗞️Contextual Experience Replay for Self-Improvement of Language Agents🗞️
0
0
3
@ARC_Guide
Starc Institute
7 days
From this STARC Discussion of papers: DICE: Dynamic In-Context Example Selection in LLM Agents The paper explores why large language models struggle with unstable in-context learning—why a good prompt works wonders one moment and collapses the next. The key claim is that
0
0
2
@ARC_Guide
Starc Institute
8 days
From this week’s discussion group: PIShield: Detecting Prompt Injection Attacks via Intrinsic LLM Features https://t.co/pKxc6kMuj9 The paper proposes a lightweight detector for prompt injections based on residual stream vectors inside large language models. Instead of
Tweet card summary image
arxiv.org
LLM-integrated applications are vulnerable to prompt injection attacks, where an attacker contaminates the input to inject malicious prompts, causing the LLM to follow the attacker's intent...
0
0
2
@ARC_Guide
Starc Institute
8 days
From this week’s discussion group: PromptLocate: Localizing Prompt Injection Attacks https://t.co/Mh9Rp9LqCJ from @Yuqi_Jia7 Yupei Liu, Zedian Shao, @jinyuan_jia @NeilGong 📕The paper introduces a structured pipeline for isolating malicious segments inside long prompts. The
0
0
4
@ARC_Guide
Starc Institute
8 days
We live in an age where research moves fast, but reflection falls behind. Projects multiply. Metrics rise. But meaning fades. STARC was born as an act of resistance. We are building a research commons—a place where conversations are real, where papers are read before they are
0
0
5
@ARC_Guide
Starc Institute
9 days
Scales++: Compute‑Efficient Evaluation Subset Selection with Cognitive Scales Embeddings https://t.co/IHrE1keP0i ⚙️ A fresh approach to evaluating large language models (LLMs) and other AI systems. Their premise: rather than evaluate on a full, large benchmark (costly
Tweet card summary image
arxiv.org
The prohibitive cost of evaluating large language models (LLMs) on comprehensive benchmarks necessitates the creation of small yet representative data subsets (i.e., tiny benchmarks) that enable...
0
0
3
@ARC_Guide
Starc Institute
10 days
LLMs Process Lists With General Filter Heads https://t.co/2eT4k0VY7Z 🧠 The paper explores how large language models (LLMs) handle the classic “list‑processing” task: given a list of items and a predicate, the model must select the relevant subset. What they discover is
Tweet card summary image
arxiv.org
We investigate the mechanisms underlying a range of list-processing tasks in LLMs, and we find that LLMs have learned to encode a compact, causal representation of a general filtering operation...
1
0
0
@ARC_Guide
Starc Institute
10 days
@lisabdunlap Totally, they found a formula to gain attention, and the quality/contents of the paper do not matter as much as attention grabbing factors of that paper. I believe the academia will eventually become clever enough to these games, so I start mine. (I’m just starting, feedbacks are
0
1
4
@oprydai
Mustafa
11 days
you can predict your future just by looking at your daily routine. people overcomplicate “manifestation,” “luck,” or “fate.” but the truth is; your future isn’t hidden. it’s coded in your habits, running like a deterministic algorithm. if you want to see where you’ll be in 5
19
144
839
@ARC_Guide
Starc Institute
11 days
@pfau Maybe, but there are still many disciplined scholars in AI, the sad part is that these scientists are usually so disciplined that they don’t try as hard to gain visibility as those mediocre ones (often happen to be social media gurus)
0
1
3
@ARC_Guide
Starc Institute
11 days
The Oversight Game: Learning to Cooperatively Balance an AI Agent’s Safety and Autonomy https://t.co/3FaGUFlldf 🤝a novel formulation of the human‑AI interaction problem as a two‑player Markov Game: the AI agent chooses whether to act or ask for oversight, the human
Tweet card summary image
arxiv.org
As increasingly capable agents are deployed, a central safety question is how to retain meaningful human control without modifying the underlying system. We study a minimal control interface where...
0
0
1
@ScholarshipfPhd
Scholarship for PhD
11 days
The real PhD experience: Excitement → Self-doubt → Breakthrough → Imposter syndrome → Confidence → Rejection → Existential crisis → Persistence → Repeat That graph isn't showing progress over time. It's showing your emotional state over the course of a single week.
5
80
381
@ARC_Guide
Starc Institute
11 days
Same for academia, some publications are products, some are projects.
@Rapahelz
Raph. H.
11 days
@zebulgar New iPhones are products while old iPhones were projects. A product is the incarnation of a "frozen system of ideas". A project is the incarnation of a "living system of ideas".
0
0
2
@ARC_Guide
Starc Institute
12 days
LLMs Process Lists With General Filter Heads https://t.co/2eT4k0VY7Z 🧠 A fascinating probe into how large language models (LLMs) handle list‑processing tasks: The authors identify what they call “filter heads”: a handful of attention heads that learn to encode a
Tweet card summary image
arxiv.org
We investigate the mechanisms underlying a range of list-processing tasks in LLMs, and we find that LLMs have learned to encode a compact, causal representation of a general filtering operation...
0
0
2
@ARC_Guide
Starc Institute
12 days
@ErnestRyu Agreed, and probably not just math. LLMs will be used widely for almost everything. In fifty years, LLMs will probably be just normalized as how we view calculators, or even bikes, today.
1
1
16
@ARC_Guide
Starc Institute
13 days
TheraMind: A Strategic and Adaptive Agent for Longitudinal Psychological Counseling https://t.co/zIuI1JWlhQ 🧠 Many therapy-as-chatbot demos impress in single sessions, then stumble across weeks—where real counseling lives. TheraMind proposes a dual‑loop agent that explicitly
Tweet card summary image
arxiv.org
Large language models (LLMs) in psychological counseling have attracted increasing attention. However, existing approaches often lack emotional understanding, adaptive strategies, and the use of...
0
0
2