enise👩🏻‍💻

@enisebytes

Followers
82
Following
866
Media
21
Statuses
100

coding, tech, data science, life, explainable ai | en,tr | https://t.co/MtJiFDtf2q | "A good decision is based on knowledge and not on numbers."

Germany
Joined April 2025
@enisebytes
enise👩🏻‍💻
11 days
let's give ai access to gov resources to help solve world hunger/poverty/immigration problems
0
0
11
@enisebytes
enise👩🏻‍💻
13 hours
That's how I manage even my own life
@yunusemreyaln3
yunus emre yalın
1 day
@a_erdem4 let me ask something, what exactly do data scientists or people in any data-related job do? Do you just arrange tables and interpret them morning to night, is that it?
1
0
2
@grok
Grok
6 days
What do you want to know?
516
322
2K
@enisebytes
enise👩🏻‍💻
1 day
RT @carolyndiary: I was reading an article about corruption and power, and it says that men who gain power show a greater tendency toward harassment. M…
0
6
0
@enisebytes
enise👩🏻‍💻
1 day
RT @benfurkankilic: Sadly Evre has passed away; condolences to their family and loved ones. Every time their tweets came up on my feed, a smile…
0
4
0
@enisebytes
enise👩🏻‍💻
2 days
So when we open up a model, we shouldn’t expect clean, one-to-one mappings. Features are compressed together. Understanding and separating them is actually key to mechanistic interpretability.
1
0
5
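One concrete way to read "separating" features is sparse coding: assume a dictionary of feature directions and recover which few of them produced a given activation vector. A minimal numpy sketch (the dictionary, sizes, and the ISTA solver here are my own illustration, not anything from the thread):

```python
import numpy as np

# Hypothetical setup: d-dimensional activations mix n > d sparse "features"
# through a dictionary D whose columns are unit-norm feature directions.
rng = np.random.default_rng(0)
d, n = 64, 256
D = rng.normal(size=(d, n))
D /= np.linalg.norm(D, axis=0)

true_codes = np.zeros(n)
active = rng.choice(n, size=4, replace=False)
true_codes[active] = rng.uniform(0.5, 1.5, size=4)
activation = D @ true_codes                  # one mixed activation vector

# Separate the mixture with ISTA (iterative soft-thresholding), a basic
# sparse-coding solver for: argmin_c 0.5*||activation - D c||^2 + lam*||c||_1
lam = 0.05
step = 1.0 / np.linalg.norm(D, 2) ** 2       # 1 / Lipschitz constant of the gradient
codes = np.zeros(n)
for _ in range(2000):
    codes -= step * (D.T @ (D @ codes - activation))
    codes = np.sign(codes) * np.maximum(np.abs(codes) - step * lam, 0.0)

# The recovered support should largely match the features that were mixed in.
print("true feature indices:     ", np.sort(active))
print("recovered feature indices:", np.nonzero(np.abs(codes) > 0.1)[0])
```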
@enisebytes
enise👩🏻‍💻
2 days
Using toy ReLU networks, Elhage et al. (2022) showed phase changes: sometimes a feature is ignored, sometimes it gets its own neuron, sometimes it's stored in superposition. The overlap works, but it makes networks harder to interpret.
1
0
5
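A minimal sketch of that toy setup as described in the tweet (the sizes, sparsity level, and training details are my assumptions, not the paper's exact configuration): sparse features are squeezed through a small bottleneck and read out with a ReLU, and inspecting W after training shows which features get ignored, get their own direction, or end up sharing space:

```python
import torch

# Sketch of the toy setup: n sparse features, an m-dimensional bottleneck
# (m < n), and a ReLU readout x_hat = relu(W^T W x + b). All sizes below
# are assumptions chosen for the illustration.
torch.manual_seed(0)
n_features, m_hidden, p_off = 20, 5, 0.95    # each feature is inactive 95% of the time

W = torch.nn.Parameter(0.1 * torch.randn(m_hidden, n_features))
b = torch.nn.Parameter(torch.zeros(n_features))
opt = torch.optim.Adam([W, b], lr=1e-2)

for _ in range(5000):
    mask = (torch.rand(1024, n_features) > p_off).float()
    x = mask * torch.rand(1024, n_features)  # sparse inputs in [0, 1]
    x_hat = torch.relu(x @ W.T @ W + b)      # squeeze through the bottleneck, then ReLU
    loss = ((x - x_hat) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Per-feature embedding norms: ~0 means the feature is ignored; a sizable norm
# with little overlap means a dedicated direction; sizable norms that overlap
# mean the feature is stored in superposition.
Wd = W.detach()
norms = Wd.norm(dim=0)
gram = (Wd.T @ Wd).fill_diagonal_(0.0)
print("embedding norms:", norms)
print("largest overlap with another feature:", gram.abs().max(dim=1).values)
```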
@enisebytes
enise👩🏻‍💻
2 days
So features don’t get their own neat neuron (as they would in a hypothetical disentangled model); they overlap and share space, with the network simulating a larger sparse model by storing features in almost-orthogonal directions. This happens because models try to store more features than they have neurons for.
1
0
5
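The "almost-orthogonal directions" point is easy to check numerically: a d-dimensional space holds only d exactly orthogonal directions, but far more nearly orthogonal ones, which is what lets n > d features fit. A quick numpy illustration (the dimensions are arbitrary choices of mine):

```python
import numpy as np

# In d dimensions only d directions can be exactly orthogonal, but many more
# can be nearly orthogonal. Here: 1000 random unit directions in 100 dimensions.
rng = np.random.default_rng(0)
d, n = 100, 1000
V = rng.normal(size=(n, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)

cos = V @ V.T                                # pairwise cosine similarities
np.fill_diagonal(cos, 0.0)
print(f"max |cosine| between distinct directions: {np.abs(cos).max():.3f}")
print(f"mean |cosine|: {np.abs(cos).mean():.3f}")
# Interference between features is small but nonzero, which is exactly the
# trade-off the thread describes: more features fit, at the cost of overlap.
```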
@enisebytes
enise👩🏻‍💻
2 days
In neural networks, a single neuron is generally responsible for more than just one thing. Instead of “this neuron = cat ear,” we often see one neuron mixing many different features. This is called the superposition hypothesis.
1
0
8
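A toy numerical illustration of the hypothesis (the feature labels and weights are invented for the example): one neuron whose weight vector has mass on several unrelated features, so it fires for all of them rather than encoding a single concept:

```python
import numpy as np

# One "polysemantic" neuron: its weight vector puts mass on several unrelated
# features, so it activates for all of them instead of encoding one concept.
features = {"cat_ear": 0, "car_wheel": 1, "clock_face": 2, "tree_bark": 3}
neuron_w = np.array([0.9, 0.7, 0.6, 0.0])    # responds to three different features

def neuron_activation(active_feature: str) -> float:
    x = np.zeros(len(features))
    x[features[active_feature]] = 1.0        # one-hot: this feature is present
    return float(np.maximum(neuron_w @ x, 0.0))  # simple ReLU neuron

for name in features:
    print(f"{name:10s} -> activation {neuron_activation(name):.1f}")
# The same unit lights up for cat ears, car wheels and clock faces,
# so "this neuron = cat ear" does not hold.
```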
@enisebytes
enise👩🏻‍💻
3 days
RT @dorotheagibi: greetings, since so many names have been exposed, a … where you can add the twitter, instagram, or news sources you've seen…
0
236
0
@enisebytes
enise👩🏻‍💻
3 days
RT @aayseaktag: what perpetrator-apologism looks like: - "come on, should we completely cast him out just because he made a mistake?" - "I've known him for years, he would never do such a thing…"
0
22
0
@enisebytes
enise👩🏻‍💻
3 days
Different angles, technical vs. philosophical, but both highlight the same tension. AI may look like it thinks, but whether it ever truly can remains an open question.
1
0
6
@enisebytes
enise👩🏻‍💻
3 days
In my note on physicalism, I argued that AI is bound by determinism: programs and data shape every outcome. There is no free will, so AI cannot develop a mind in the dualistic sense.
1
0
5
@enisebytes
enise👩🏻‍💻
3 days
Apple looks at reasoning models and shows that they succeed only up to a point. When problems get harder, the thinking illusion breaks down and the models just stop trying.
1
0
5
@enisebytes
enise👩🏻‍💻
3 days
Reading Apple’s paper The Illusion of Thinking reminded me of a short piece I once wrote on AI and physicalism. Both ask the same question in different ways: can AI really think?
1
0
6
@enisebytes
enise👩🏻‍💻
3 days
RT @fluchtperioden: THIS IS A THREAD EXPOSING A HARASSER. This tattoo artist is located in Antalya. Two years ago a female friend of ours got a tattoo done here…
0
805
0
@enisebytes
enise👩🏻‍💻
4 days
RT @enisebytes: KAN uses splines for every connection, removing millions of weight values. Nodes only sum inputs. This design makes the net…
0
1
0
@enisebytes
enise👩🏻‍💻
4 days
Since the first model, versions like KAN 2.0, PRKAN, AF-KAN, and seqKAN have pushed the idea into science, time series, and more. Still early days, but KAN may become the bridge between black-box AI and interpretable learning, aka explainable AI! #ExplainableAI
0
0
5
@enisebytes
enise👩🏻‍💻
4 days
KAN uses splines for every connection, removing millions of weight values. Nodes only sum inputs. This design makes the network more transparent and easier to interpret.
1
1
7
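A minimal numpy sketch of that idea (an illustration of the concept, not the official pykan implementation): every edge carries its own learnable 1-D curve, approximated here by linear interpolation on a fixed grid, and each node only sums its incoming edge outputs:

```python
import numpy as np

# Every edge (input j -> output i) has its own learnable 1-D function phi_ij,
# represented here by its values on a fixed grid and evaluated by linear
# interpolation. Nodes have no weight matrix; they only sum edge outputs.
rng = np.random.default_rng(0)

class KANLayerSketch:
    def __init__(self, in_dim, out_dim, grid_points=8):
        self.grid = np.linspace(-1.0, 1.0, grid_points)
        self.values = rng.normal(scale=0.1, size=(out_dim, in_dim, grid_points))

    def forward(self, x):
        # out_i = sum_j phi_ij(x_j)
        out_dim, in_dim, _ = self.values.shape
        out = np.zeros(out_dim)
        for i in range(out_dim):
            for j in range(in_dim):
                out[i] += np.interp(x[j], self.grid, self.values[i, j])
        return out

layer = KANLayerSketch(in_dim=4, out_dim=3)
print(layer.forward(np.array([0.2, -0.5, 0.9, 0.0])))
# Interpreting such a layer means plotting each edge's learned curve,
# rather than reading millions of scalar weights.
```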
@enisebytes
enise👩🏻‍💻
4 days
Spline functions are like small polynomials stitched together to form a smooth curve. Instead of one big equation, the model learns many small curves that connect smoothly.
1
0
5
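A small scipy illustration of that picture (the knot values here are arbitrary samples I chose): CubicSpline fits one cubic polynomial per interval and forces the pieces to agree in value, slope, and curvature where they meet:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# CubicSpline fits one cubic polynomial per interval between the knots and
# makes the pieces agree in value, slope, and curvature where they join.
x_knots = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_knots = np.array([0.0, 0.8, 0.9, 0.1, -0.8])   # roughly samples of sin(x)

spline = CubicSpline(x_knots, y_knots)

# spline.c holds the coefficients: one column of 4 cubic coefficients per
# segment, i.e. literally many small curves stitched together.
print("coefficient array shape (degree+1, n_segments):", spline.c.shape)
print("value and slope at the join point x = 2:", spline(2.0), spline(2.0, 1))
```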