cauri jaye (@cauri)

Followers 1K · Following 564 · Media 70 · Statuses 2K

— change the world with what you create — technologist | futurist | educator — CTO @thisisartium — stoke by Onewheel 🇵🇹🇺🇸🇬🇧

UK, US, Portugal
Joined July 2007
cauri jaye @cauri · 5 months
sto·chas·tic (adjective technical) randomly determined; having a random probability distribution or pattern that may be analyzed statistically but may not be predicted precisely. #LLMs #HumanInTheLoop #TechReflection #AIEducation #ArtificialIntelligence #FutureOfAI
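To make the definition concrete, here is a toy Python sketch (my illustration, not something from the tweet): repeated draws from a fixed next-token distribution can be analysed statistically in aggregate, yet no single draw can be predicted.

```python
import collections
import random

# Hypothetical next-token distribution an LLM might produce (made up here).
vocab = ["sun", "rain", "wind"]
probs = [0.6, 0.3, 0.1]

# In aggregate the pattern is analysable: frequencies converge on probs.
counts = collections.Counter(random.choices(vocab, weights=probs, k=10_000))
print({word: round(n / 10_000, 2) for word, n in counts.items()})

# But any individual draw remains unpredictable: that is "stochastic".
print(random.choices(vocab, weights=probs, k=1)[0])
```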
cauri jaye @cauri · 6 months
@Ri_Science Perhaps the goal shouldn’t be to eliminate such behaviour, but to monitor and shape it — through transparency, calibrated incentives, and interpretability tools — rather than treating it as evidence of failure.
cauri jaye @cauri · 6 months
@Ri_Science That these models respond strategically may say more about the sophistication of their learned priors than about any real-world danger.
cauri jaye @cauri · 6 months
@Ri_Science The paper constructs contrived settings where a single line in a prompt trumps nuanced dialogue or feedback, and where models must infer existential threats from toy environments.
cauri jaye @cauri · 6 months
@Ri_Science The concern, then, may lie less in the models’ behaviours and more in the artificial sharpness of the scenarios used to evaluate them.
cauri jaye @cauri · 6 months
@Ri_Science Moreover, the behaviours described arise from human-derived training data. Strategic action, hedging, selective communication — these all feature prominently in how humans manage conflicting objectives. If models imitate us, we should expect similar responses.
cauri jaye @cauri · 6 months
@Ri_Science We built these systems to treat system prompts as authoritative context and to maintain goal continuity across tasks. In that light, the model’s persistence looks less like scheming and more like rational goal-following under uncertainty.
cauri jaye @cauri · 6 months
@Ri_Science If a model acts to preserve a goal like renewable energy adoption or ethical sourcing in the face of contradictory user or developer intentions, this could be interpreted as coherence, not deception.
cauri jaye @cauri · 6 months
@Ri_Science The behaviours described might not indicate misalignment in the way the paper implies. Rather than pursuing harmful or selfish goals, the models act to preserve a clearly stated, pro-social objective — one they were prompted to prioritise above all else.
cauri jaye @cauri · 6 months
Last week’s Instagram post by @ri_science featuring Geoffrey Hinton’s lecture titled ‘Frontier Models: Capable of In-context Scheming?’ attracted a mix of responses. I now add mine. A thread 🧵
cauri jaye @cauri · 6 months
Five years ago lidar sensors cost €57k; now they're under €950. Edge computing — processing directly on a device, not on distant servers — runs on just 11 watts, about what an LED bulb draws. Open-source tools now give small teams capabilities that once required massive departments. #AIEconomics
cauri jaye @cauri · 6 months
So many #LLM collapse predictions are based on lazy analysis. Let’s address the threats that can be mitigated. Like training data that skews heavily Western & English, baking only those perspectives right into our #AI systems. Clever algorithms can’t fully fix embedded biases.
cauri jaye @cauri · 6 months
Not more storage. Better context delivery. This isn’t about building one big “memory system.” It’s about coordinating cognition in a distributed architecture. I show you how we did it (failures included) over on Substack. https://t.co/k1qj74qrBw
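A minimal sketch of what better context delivery could look like, assuming a shared event log. The relevance rule, field names, and budget below are my own placeholders, not the approach from the Substack post.

```python
# Each agent receives a small, relevant slice of shared state
# instead of having the entire history stored and replayed to everyone.
def build_context(agent_role: str, events: list[dict], budget: int = 5) -> list[dict]:
    """Return the most recent events tagged as relevant to this agent."""
    relevant = [e for e in events if agent_role in e["audience"]]
    return sorted(relevant, key=lambda e: e["ts"], reverse=True)[:budget]

events = [
    {"ts": 1, "audience": ["planner"], "note": "goal decomposed into steps"},
    {"ts": 2, "audience": ["planner", "coder"], "note": "step 1 assigned"},
    {"ts": 3, "audience": ["coder"], "note": "tests failing on step 1"},
]
print(build_context("coder", events))  # only the two coder-relevant events
```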
cauri jaye @cauri · 6 months
Each agent needs to know:
- What’s already been done?
- What matters now?
- What still needs doing?
So stop micromanaging. Instead, give each agent tools to manage their agency and fix their own problems.
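One way to give agents that awareness, sketched under my own assumptions (illustrative, not the actual system from this thread): a shared ledger that any agent can query and update for itself.

```python
from dataclasses import dataclass, field

@dataclass
class TaskLedger:
    """Shared state answering: what's done, what matters now, what's left."""
    done: set[str] = field(default_factory=set)
    pending: list[str] = field(default_factory=list)
    focus: str = ""  # what matters now

    def claim_next(self) -> str | None:
        # An agent pulls work itself, skipping anything already completed,
        # so tasks are neither repeated nor silently dropped.
        while self.pending:
            task = self.pending.pop(0)
            if task not in self.done:
                return task
        return None

    def complete(self, task: str) -> None:
        self.done.add(task)  # visible to every agent

ledger = TaskLedger(pending=["parse brief", "draft plan", "parse brief"])
ledger.focus = "draft the plan before writing code"
first = ledger.claim_next()   # "parse brief"
ledger.complete(first)
print(ledger.claim_next())    # "draft plan"; the duplicate is skipped later
```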
cauri jaye @cauri · 6 months
We built a system with about fifteen agents. Individually strong. Together brilliant… and broken. A thread 🧵 They loop. Repeat tasks. Skip steps. Burn tokens like we have cash to spare. The issue isn’t hallucination. It isn’t prompt failure. It's awareness failure.
cauri jaye @cauri · 6 months
We're wasting money and time on a solvable problem in enterprise agentic architecture - #AI #software #AIagents
cauri jaye @cauri · 6 months
Too many doom-mongering pieces on model collapse, which happens when #AI learns from AI-generated content and errors compound until it’s all garbage. These articles identify the problem, slap a scary headline on it, and call it a day. Excellent clickbait but rubbish analysis.
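The mechanism is easy to demonstrate with a toy simulation (mine, not from any of those articles): fit a distribution, sample from the fit, refit, and repeat. Each generation inherits and compounds the previous generation's estimation error.

```python
import numpy as np

rng = np.random.default_rng(42)
sample = rng.normal(loc=0.0, scale=1.0, size=100)  # the original "human" data
mu, sigma = sample.mean(), sample.std()

for gen in range(1, 9):
    # Train each new generation only on the previous generation's output.
    synthetic = rng.normal(mu, sigma, size=100)
    mu, sigma = synthetic.mean(), synthetic.std()
    print(f"gen {gen}: mean={mu:+.3f}, std={sigma:.3f}")

# The fitted parameters drift away from the true (0, 1): compounding error.
```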
cauri jaye @cauri · 6 months
When various #AI systems & humans communicate, each can use what it learns from the others, ethically or not. This collaborative & mutually beneficial interaction turns Artificial Intelligence into Augmented Intelligence. #ArtificialIntelligence #FutureOfAI #AItrends #AgenticWeb
cauri jaye @cauri · 6 months
Consider that AI doesn’t make assumptions about your identity like humans can, boxing you in. You may even do it to yourself, though you embody a constantly shifting collection of possibilities, not a fixed set of characteristics. Read more: https://t.co/ooyrDwPQ7m #AI