Marko

@err_mk

Followers: 156 · Following: 392 · Media: 383 · Statuses: 842

Exploring what future tech means for our existential questions: AI, thinking machines, digital minds, longevity & other

Joined June 2025
@err_mk
Marko
24 hours
Vibe coding is great for prototyping. Once you stack on production detail, the model gets stuck in general concepts and loses itself in the details. The hidden cost is specification debt. Without an executable spec, the model optimises for plausibility, not correctness. There…
0
0
0
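A minimal sketch of what an executable spec can look like in practice, assuming a property-based test harness like Hypothesis; the slugify function and its properties are hypothetical illustrations, not from the post:

```python
# A minimal, hypothetical "executable spec": a property-based test
# that generated code must pass before it is accepted.
from hypothesis import given, strategies as st

def slugify(title: str) -> str:
    # Implementation under test -- e.g. produced by an LLM.
    return "-".join(title.lower().split())

@given(st.text())
def test_slug_is_lowercase_and_has_no_spaces(title):
    # The spec pins down behaviour, so output is checked for
    # correctness rather than plausibility.
    slug = slugify(title)
    assert slug == slug.lower()
    assert " " not in slug
```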
@err_mk
Marko
1 day
Wish we could skip ten years and dodge today’s AI teething problems in big vendor software. Still, we would only meet their heirs.
0
0
0
@grok
Grok
6 days
What do you want to know?
469
303
2K
@err_mk
Marko
1 day
Maximising value in LLM outputs isn’t magic. Low temperature sharpens facts, top-p balances breadth, penalties kill fluff. Add self-review loops and user-centric framing, and every sentence earns its keep. The result: dense, accurate, actionable insights.
0
0
0
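A hedged sketch of those knobs using the OpenAI Python client; the model name and values are illustrative assumptions, not settings from the post:

```python
# Illustrative sampling settings: low temperature for factual sharpness,
# top_p to cap the nucleus, penalties to discourage repetitive fluff.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=0.2,        # low randomness -> sharper, more factual output
    top_p=0.9,              # nucleus sampling keeps some breadth
    frequency_penalty=0.5,  # discourage repeated phrasing
    presence_penalty=0.3,   # discourage dwelling on the same topic
    messages=[{"role": "user",
               "content": "Summarise the trade-offs of nucleus sampling."}],
)
print(response.choices[0].message.content)
```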
@err_mk
Marko
1 day
Cheap LLM-built apps will flood app stores, but only distinct human creativity will create durable value. Fresh human-generated data will become the scarce fuel LLM vendors need to differentiate. So, I believe human-in-the-loop will be the mantra for the next 10+ years.
0
0
0
@err_mk
Marko
1 day
I believe that emerging software engineering specialisations will include: (1) multimodal data fusion and semantic knowledge graphs for robust information systems, (2) GenAI-driven automation, and (3) adaptive UI/UX orchestration via low-code integrations.
0
0
0
@err_mk
Marko
1 day
Floats are a map. Reals are the territory.
0
0
0
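A two-line illustration of the gap between the map and the territory, using nothing beyond standard Python floats:

```python
# Binary floats can only approximate most reals: the classic example.
print(0.1 + 0.2)          # 0.30000000000000004 -- the "map"
print(0.1 + 0.2 == 0.3)   # False: the real number 0.3 is the territory
```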
@err_mk
Marko
2 days
Two AI bottlenecks. Data integration: turn chaos into context. Knowledge representation: turn context into computation. Solve these. Everything else compounds.
0
0
0
@err_mk
Marko
2 days
Brains vs ANNs:
Learning: Brains grow and prune synapses. ANNs tweak weights in fixed wiring.
Topology: Brains = messy, modular, small-world graphs. ANNs = clean, layered grids.
Neurons: Brain has 50–200 types + glia. ANN “neurons” are identical math boxes.
Signals: Brain…
0
0
0
@err_mk
Marko
5 days
I am for a society in which everyone can learn from one another, and in which everyone wants to learn something from one another.
0
0
0
@err_mk
Marko
5 days
Algorithmic entropy: the entropy of a string of symbols is the length of the shortest computer program that prints it out. Here is the hint: when using KiloCode, Windsurf or similar, tell it to generate the code with minimal entropy. Here is another hint: you can use…
0
0
0
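A rough sketch of the idea: true algorithmic (Kolmogorov) entropy is uncomputable, but compressed size gives a computable upper bound, which is enough to see the contrast; the strings are illustrative:

```python
import os
import zlib

# Compressed length is a crude, computable upper bound on algorithmic
# entropy: a short program exists that prints the regular string
# ("01" repeated), while random bytes admit no description much
# shorter than themselves.
regular = b"01" * 500
random_bytes = os.urandom(1000)

print(len(zlib.compress(regular)))       # small: highly compressible
print(len(zlib.compress(random_bytes)))  # ~1000: essentially incompressible
```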
@err_mk
Marko
5 days
Correlation does not imply causation, and until a causal link is proven, any emphasis on correlation is an attempt at deception.
0
0
0
@err_mk
Marko
6 days
Neuroscientist evaluates the “ChatGPT Makes You Dumb” study
0
0
7
@err_mk
Marko
6 days
Indeed, Europe is dying. Bureaucracy has been failing to solve the demographic problem for the last 20 years. If only bureaucracy would die instead. But bureaucracy continues to flourish and grow!
@elonmusk
Elon Musk
6 days
Europe is dying.
0
0
0
@err_mk
Marko
6 days
AI implementation's real boss level: data fusion & knowledge rep. Ditch the code-first illusion, this needs broader chops. Skip sales/marketing roulette (this hits revenue). Start smart: integrate data, snag quick ops wins, scale sideways. Build skills, data IQ, process savvy on…
0
0
0
@err_mk
Marko
6 days
I always loved the cyberpunk vibe.
@rohanpaul_ai
Rohan Paul
7 days
AI lends itself well to generating a cyberpunk Street Life aesthetic. reddit.com/r/aivideos/comments/1mrve8k/cyberpunk_street_life/
0
0
1
@err_mk
Marko
8 days
That moment when Jutarnji list publishes an article consisting of screenshots of political leaders' posts on X.
0
0
0
@err_mk
Marko
8 days
FYI: OpenAI's API does not accept learned prefix embeddings as input. Workaround: approximate GraphToken by projecting the graph encoder output into discrete special tokens and feed them as text. For stronger reliability, fine-tuning can teach the model to interpret that token…
0
0
0
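A hedged sketch of that workaround; the codebook size, dimensions, and &lt;g…&gt; token naming are hypothetical stand-ins, not details from GraphToken itself:

```python
import numpy as np

# Hypothetical setup: a graph encoder has produced one embedding per node,
# and we quantise each embedding against a small learned codebook so the
# graph can be rendered as discrete pseudo-tokens in a plain-text prompt.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(64, 128))          # 64 discrete codes, 128-dim
graph_embeddings = rng.normal(size=(5, 128))   # stand-in for encoder output

def to_pseudo_tokens(embs: np.ndarray) -> str:
    # Nearest-codebook-entry quantisation, then render codes as text tokens.
    dists = np.linalg.norm(embs[:, None, :] - codebook[None, :, :], axis=-1)
    codes = dists.argmin(axis=1)
    return " ".join(f"<g{c}>" for c in codes)

prompt = (f"Graph: {to_pseudo_tokens(graph_embeddings)}\n"
          "Question: is the graph connected?")
print(prompt)  # plain text, so any closed text-only API can ingest it
```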
@err_mk
Marko
8 days
Just a note. Closed APIs that accept text only cannot ingest continuous prompts directly. In that case, for GraphToken use an open model interface that supports prefix embeddings, or consider a token-level translator approach such as GraphTranslator, which learns sequences the LLM…
1
0
0
@err_mk
Marko
8 days
Use GraphToken when you control training and need the LLM to natively read graphs through a small learned prefix. It gives the largest single-model gains on graph reasoning tasks. Use RoG when you have a curated KG and require faithful, interpretable paths with state-of-the-art…
1
0
0
@err_mk
Marko
8 days
Use a graph transformer or a GFM whenever the target is relational and non-Euclidean. Use an LLM only as a language and coordination layer, or equip it with graph structure through GraphToken, RoG or GraphRAG.
1
0
0