Techni-Calli
@Iwillleavenow
Followers 14K · Following 30K · Media 4K · Statuses 37K
AI/Human Rights Lead and Global Privacy Counsel at @EPICprivacy, privacy advocate, nerd. Personal opinions, terrible jokes. illleavenow on butterfly site.
Joined February 2015
I want to create a repository of some of the more relevant threads I've created over the years all in one easier-to-find spot, so here you go. Thread-ception. A thread of threads. For all your privacy, law, tech, and terrible movie needs.
Or just get a goddamn slow-cooker, they're cheap as hell and won't try to mine your data or sell you "cooking but with bullshit."
Introducing Replicator: fully automated smart private chef. No more hustle cooking. No more overspending on food delivery. No more eating that same salad bowl and burger every day. Nutritious, home-cooked, delicious food. COMMENT below to get on our beta tester waitlist!
I mean, it's actively making my life and the world worse but you and the other AI-pushers have given us no option to get rid of it. Unless you're saying we should all get rid of Microsoft, which is a bold and fascinating stance.
Here’s my litmus test: is AI improving your day-to-day life? Is it actually helping you to create, connect, feel joy, chase ambition? If not, what’s the point?
"It's measurement, not causation" We have evidence that a massive number of ChatGPT users are experiencing dangerous effects. For any other product, we would pause or put in protections until we're sure the product isn't hurting people. AI does not get a pass here.
"Adults should be able to make their own decisions" Yes. Adults should be able to make their own FULLY INFORMED decisions. How many do you think fully understand how AI works and what the risks are? And you missed that many victims of AI-linked suicide aren't adults. They're teens.
(This is the same company that, after years of claiming AI would help cure cancer, is now making AI porn. There is no altruistic mission; they just want your money.)
The company's own measurements show the thing is actively bad for its users, but OpenAI is still pushing for FEWER regulations on AI. They do not care about hurting people.
According to THEIR OWN ESTIMATES, around 560,000 people PER WEEK exchange messages with ChatGPT indicating mania or psychosis and 2.4 MILLION MORE express suicidal ideation or prioritize ChatGPT over loved ones, school, or work. PULL. YOUR. PRODUCT. https://t.co/u3uqoqVfBA
wired.com
OpenAI says hundreds of thousands of ChatGPT users may show signs of manic or psychotic crisis every week.
(Yes, I am aware that the problem is with my simple midwestern tourist brain and New Yorkers navigate this easily all the time, I just love a good juxtaposition.)
Absolutely incredible that NYC's streets are laid out in a simple grid that any child could figure out and NYC's public transportation is a complex riddle that can only be solved with blood runes and a blessing from the fickle Alley God* *I think it is a pigeon but can't be sure
(Spoiler alert, tho, EPIC's putting out some major reports v soon, get ready for some light* reading) *Several hundred pages
That feeling when you're putting out three major reports and attending a multi-day communications training across the country all in one week:
Do you think @Microsoft has been paying attention to how many searches for "how to remove Copilot" have come up since they attached that monstrosity to the toolbar?
Btw, if you haven't read Empire of AI, I highly recommend you do, it is bracingly enlightening about exactly who is running OpenAI and what their priorities are.
I almost have to respect the grifter hustle of a man who raises billions for a product claiming it will cure cancer and somehow keeps those billions when it turns out it just does things we already do but worse and through theft. Almost.
"Don't hate the player, hate the game." Sorry to hear you're unable to multitask your hatred, guess I'm just built different.
Earlier this year, after receiving emails from people in the throes of AI psychosis, I began to keep in touch with one man as he journeyed to recovery. I'm grateful to James that he was willing to go on record. He wanted people to know: humanlike chatbots are dangerous.
NEW: People are developing antisocial and obsessive behavior after using AI — some have even taken their own lives. One journalist started getting emails from people in mental health crisis after using AI. She dug in, and found companies putting profits over users' lives.