Very very excited to share that
@e_salvaggio
and my collaborative art piece, called "Because of You," has been accepted to the
@CVPR
2024 AI Art Gallery!
A short thread:
This is kind of mean. Professors, don’t be mean. These are kids. Lord knows how naive and excited I was about random different papers that sparked research interest in me when I was an undergrad. Are kids not meant to have that naïveté at that age anymore?
I don’t mind when prospective PhD applicants email me wanting to talk about work I did 5 years ago. As long as they don’t mind my enthusiasm about instead following up on their work at the middle school science fair.
Maybe I don't understand American culture but is it normal for your friend to invite you for dinner a week in advance out to their multimillion-dollar new home in the suburbs, get pizza and then charge you on Venmo for it?
#AskingForAFriend
I emailed Dr. Alan Mislove way back in 2017 about his test-of-time paper from 2004 and he invited me for an internship. Can’t even imagine how hurt I would be if he had responded like that ^
I’m just saying, none of {ICML, NeurIPS, ICLR, AAAI} have been held in India. A lot of the Indian representation you see at these conferences is Indian students studying at western institutions, but there’s a lot of untapped potential if you remove onerous visa requirements.
I had to travel 26 hours and spend $2000+ to join
#ICLR2023
in Rwanda.
But people in Africa have to do this every time a conference is held in US.
What happens when we make it easier to participate?
1530% higher registrations from Africa.
This is important and must continue.
One of my really good friends is an actual medical doctor, and he thought LLM-based medical bots must be accurate because they must have some form of internal decision tree explaining why a particular diagnosis was given. Can we take a second to better scope claims please
@katriaraden
Vastly better and more accessible healthcare at cheaper prices enabled through foundation models. More individualized and thus better education through private AI tutors. And yes, extremely low cost entertainment (books, movies, art) for all.
Not sure why you'd assume otherwise.
People saying “but humans create content after being inspired by others” have never had to undergo the humiliation of filing DMCA takedown requests to sketchy websites openly posting your painstakingly created work, with or without some sort of watermark removal.
This is not only flat-out untrue, but tech companies are also firing ethics, human rights, and compliance teams at an alarming speed, so even if the first comment were true (it’s not), we as a consumer body would still need protections
WATCH: Former Google CEO
@ericschmidt
tells
#MTP
Reports the companies developing AI should be the ones to establish industry guardrails — not policy makers.
“There’s no way a non-industry person can understand what’s possible.”
Microsoft's FATE team was the last big respected place for Fairness research, and even that has been hit by capitalism-induced layoffs, at the same time as Microsoft is pouring money into ads showing off "the new Bing".
Generative AI without attribution is theft. No ifs and buts about it. Even if the citation list is a million lines long it should be provided. Anything less is theft.
There’s a special irony…indeed, cruelty born of indifference, that the book that
@susie_alegre
discovered was being plagiarized via ChatGPT was “Freedom to Think.” A great reminder that Freedom requires context. Who’s being freed, & who’s being imprisoned?
🙏🏼
@vanessathorpe
Successfully defended my PhD today. To my advisor
@bowlinearl
, thank you for the last four years. I had so much fun! Thanks also to my panel (
@AlinaMOprea
,
@tinaeliassi
and
@KLdivergence
), mentors, friends, and family 🫶. I’d not be here without you all. Dreams do come true ❤️
I find it very annoying when media gives exposure to people far removed from Responsible AI research just because it’s a hot topic and not actual emerging scholars who are getting a PhD in the field. Perhaps I’ll be asked to opine on the Oscars?
This is probably not a big deal for most of the people I follow on here but I just wanted to share that I have finally crossed 100 citations and I am really happy about it 🥹
Apparently some people take issue with the word “kids”. Fair, I meant junior scholars who look up to you and are bright-eyed and curious and have the kind of enthusiasm that mentors should positively encourage instead of swatting them away like flies.
(Screenshotted to avoid trolls) yesterday I chatted with a doctor at Harvard repeatedly pushing back against the use of generative models in anything that is interactive with a patient, because of lack of citations and chances of misinformation. This is wild.
@FAccTConference
chairs especially: please read this. Our lab works in Algorithmic auditing and every third paper review (FAccT or elsewhere) is like "so you found problems, but please also suggest actionable solutions". This culture needs to change.
This idea that you can't highlight problems without offering a solution is pervasive, harmful, and false.
Efforts to accurately identify, analyze, & understand risks & harms are valuable. And most difficult problems are not going to be solved in a single paper.
I remember the day when I got my PhD, Dr. Rumman Chowdhury personally called me from Vienna to congratulate me since she couldn’t attend my defense and said “you’re a Dr. now, use that title with pride, they don’t just hand them out to people and it was a long journey”.
In the wake of FAccT decisions, I’ve seen a few tweets similar to “If <insert famous researcher here>’s paper didn’t get in, who the hell is getting in” or “weird new direction for FAccT if <famous researcher> is being excluded” and I just want to say that this is problematic.
The more I read LLM papers (and not hype articles), the more I am starting to believe that AI-related existential crises are 1. A hoax to keep development closed source so companies can profit off of models 2. A cash grab for people who are good at marketing but bad at CS 🤷🏽‍♂️
Indiscriminate, uncontrolled piracy is why I stopped making android apps as a way of supporting myself. That’s what I used to do for pocket money in college, then I switched fields because of all the theft. I’ve never been a professional artist but I empathize re: GenAI theft.
Like this audacity is not coming from anyone who ever made anything to sell to others as a form of livelihood and saw it being pirated or copied. It’s coming from people who have always been on the other side of the line, consuming content indiscriminately almost as a right.
I tried to code a new project at my job using ChatGPT as a sort of personal experiment, and everything it generated had insidious bugs, to the point that I finally deleted large portions and rewrote it myself, ending up spending more time overall.
[spoiler] The new season of black mirror is 💀💀💀
Using TOS enabled surveillance to steal people’s info and then use generative AI to recreate them for a TV show in real time? It’s like they looked at all of the stuff FAccT people discuss and made the worst possible outcome. 🤯
It’s a bittersweet feeling. Today is my last day at
@adept_id
. It was the first big-boy job I’ve had after graduating, and I couldn’t have asked for better coworkers or a more fantastic, socially oriented mission to use technology for good!
Why is this person on a TIME editorial. News media, please do better. Interview actual researchers. For starters, go to the proceedings of FAccT, AIES, EAAMO, etc online, see the list of papers, click on the list of authors, and reach out to them. It’s not that hard.
It is so disturbing that papers like Sparks, basically long-form ads for closed-source products masquerading as research papers, will rack up thousands of citations because companies have money and PR, while independent research labs are held to much higher scientific standards.
Seeing my family after more than two years and eating my mom's food (that I am thankfully able to smell and taste) made me so emotionally overwhelmed that I ugly cried for a good 10 mins. I know I missed them but dear God 😭
Incredibly grateful that both my family and my partner’s family were able to be physically present at my PhD hooding ceremony at
@Northeastern
@KhouryCollege
. Also ft. my advisor
@bowlinearl
, who was visibly more excited than I was 🥹🥰
I will say, I have been extremely lucky to have had all 3 of my internships, and now my full time job, in exactly what I trained in (Algorithmic Fairness). It feels great to be leveraged for your expertise :)
As promised! My team at
@adept_id
is hiring an MS/PhD Data Science Research Intern for Spring/Summer 2024, specifically with a focus on Fairness and Bias in Machine Learning/AI. The intern will be working on internal research problems with a goal of publishing the findings.
When I first moved to Boston I started adopting an American way of pronouncing words because it just makes life easier here, but over time I have slowly started to revert to my Indian accent. I quite like how I speak and if someone doesn't like it that's their problem :P
QueerInAI is by far the loveliest voluntary organization I have been a part of, and I’m glad to share that our paper got accepted at FAccT 2023! ❤️❤️❤️
After six years of hard work + research + organizing, we could not be prouder to present our
#FAccT2023
paper on Queer in AI as a case study for community-led participatory design in AI! Read on to learn more about our endeavors 🌈🏳️‍⚧️ (1/10)
I want to take a 2 month break just to read papers because there's SO MUCH to read and so little time, and looking at tweet blurbs gives me more anxiety than information tbh
I suspect this is partly why the rebranding from Ethics to Safety has been happening - the waning credibility due to the overwhelming negativity, albeit deserved. LLMs are here to stay, and so are image generation models. We have to accept it and work to limit harms.
There’s one existential risk I’m certain LLMs pose and that’s to the credibility of the field of FAccT / Ethical AI if we keep pushing the snake oil narrative about them.
It continues to annoy me how Bengali is the 6th or 7th most spoken language in the world and yet almost all LLMs produce content in Bengali that is comparable to truly low resource languages. Clearly data availability is not a concern, it's development priorities.
Really annoyed by some ethics people on here tweeting things like “some people in the field are posting as usual, I see you”. Good job on the virtue signaling, some of us are on visas in a foreign country with strict rules and don’t want to get flagged for tweeting politics.
Very excited to share that our proposed CRAFT session: "Towards an India-first Responsible AI research agenda" has been accepted at
@FAccTConference
! To think that this was born out of me shitposting about Bangalore traffic and
@divy93t
asking to meet for coffee :) Details soon!
I care about technology-based societal progress as much as the next guy, but these products are being shipped to people who are not necessarily ML practitioners or researchers, and such unregulated use is an abuse of the faith they put in us to build safe tools.
@okarthik42
@aravindr93
Application and research are two different things; research is far more competitive - at least four-fifths of research papers submitted to conferences are rejected, while code from those same papers would be GitHub hits.
My family visited me for the first time in the US for my graduation/hooding. They just left. My brother gave me this really nice card which I wanted to share. Being a first gen college grad is hard, let alone having to do so alone in another country. Feels nice to be seen 🥹❤️
From Reddit. Google employees having a meltdown because someone leaked the number of parameters -- which tells us not all that much anyway -- is a continuation of an annoying and ultimately anti-collaborative trend that OpenAI started.
The two internships I did with the ML Ethics team at Twitter were not only some of the most interesting research problem-solving I have done in my PhD career, but they also made me feel part of a team that basically feels like a second family at this point.
There are people whose papers got accepted for the first time. Imagine how they might feel when they see such comments. Write to the ACs or GCs, do strong rebuttals, but please don’t make someone else feel unworthy. Just makes the community less accepting imo.
Chicken Rezala (Bengali) is comparable to or better than Tikka Masala, and it is a shame that mainstream Indian restaurants around here don't serve it. Made some recently and it was pretty easy to make and tasted HEAVENLY.
Stock photo vs my attempt ( I didn't have red chillies)
Idk if something changed during the course of my PhD but I used to be super competitive as a freshman and now I am the kind of person who enjoys giving compliments to others way more than getting them. It's all the fun brilliant folks I keep meeting!
Just submitted my Responsible ML special topics course listing at Northeastern to the Dean. I’m stoked to finally lead a class on my own and hope it’s useful for students who take it! ❤️
Now I have to figure out fun project ideas, but that’s the best part.
Currently we verify if ChatGPT is lying by googling...what will happen if BARD becomes public? Will we have to resort to asking human experts? Will untrustworthy AI destroy search engines and bring us back to pre-internet levels of information retrieval?
PSA: Researchers, please consider not using classification datasets (Compas, German credit) for fair ranking papers. There are ranking specific datasets that people (including myself) have painstakingly collected and annotated, that actually have a ranked utility score.
Authors are really out here writing Fair ML papers and then not giving a fuck about reproducibility or actual applications of their algorithms.
Goes to show how our community is filled with SOTA-chasing interlopers who treat fairness as just another optimization problem 😠
Kinda unfair that companies take work freely from academic conferences, make things for profit, and then don't open-source them "due to the competitive landscape". At this rate, if universities don't match tech salaries, open-source innovation will suffer and eventually decline.
CVPR registration is $1080? Someone needs to invent a conference registration rewards credit card because this is a missed opportunity on racking up points more than the 1% default
Having my American boyfriend tell Alexa to play specific Bollywood songs with his accent distortions because Alexa doesn’t understand my CORRECT pronunciations of Hindi words is so embarrassing
@dieworkwear
Omg because of your quote tweet I found out that a guy I dated in 2020 follows this account and I went down a rabbit hole and discovered he’s a libertarian 💀 what an evening
Once again wondering how the "what is to say GPT 5 or 6 won't kill us" crowd managed to amass so much political power. Fear sells, I guess. As a STEM Computer Scientist, I have yet to be convinced that an autoregressive language model will kill us.
Mitigating the risk of extinction from AI should be a global priority.
And Europe should lead the way, building a new global AI framework built on three pillars: guardrails, governance and guiding innovation ↓
I was quoted in the
@guardian
! I spoke to
@AvaSasani
about how AI policy-wise, we should be thinking hard about regulating use cases instead of hindering development. LLMs do not need to be making hiring decisions, but are great to use as doc summarizers.
A continuing pattern of companies first lobbying for less regulation, and then killing internal criticism. Architecture building is an order of magnitude easier than reverse engineering and critical work, and I hope companies will pay the price at some point for these decisions.
I think a lot of my "PhD imposter syndrome" went away after I realized that everyone around me, including senior researchers, were REALLY good at what they do but also had shortcomings in other areas, and that's why they collaborate.
Remind yourself, nobody knows everything! :)
I’m not a NatSec person but instead of worrying about how China publishes more AI papers than the US (why is that bad? Read those papers?) and how talent is getting better in other countries, the US should maybe think about making it easier for good talent to come and stay here🤗
@TaliaRinger
It's so absurd that it's actually funny. I know there are some obvious differences (for instance, in India, if people go to a restaurant for someone's birthday, the person who is being celebrated pays, while here everyone else pays for the birthday person), but this was weird af.
Speaking of bias in image generation though, here’s a quick analysis I did to see what happens when you get stable diffusion to try to generate images of Indians. Turns out it replicates injustices in the India specific domain (caste/skin tone) as well.
I also asked several people if they would be comfortable with a scenario where followup questions were redirected to a chatbot instead of the physician and everyone said no. What reality are these people living in my God. DO NOT USE GENERATIVE MODELS FOR INFORMATION RETRIEVAL.
So I had my very first "so I was reading your paper and I was wondering what you think about this idea" (complete with highlights and margin notes on my paper) from an undergrad in my class and it made me inordinately happy.
#research
#littlethings
Why do we keep attributing this sense of omniscience and neutrality to models when they are just an embodiment of the values and choices of the creator? Absolutely despicable. Hype bros pushing models as image and text search engines should assume responsibility for this mess.
Absolutely delighted to announce that my paper: "Subverting Fair Image Search with Generative Adversarial Perturbations", co-authored with Matthew Jagielski and
@bowlinearl
was accepted to
@FAccTConference
! 🥳 So much work went into this paper I could cry 🥲. Thread soon!
By some confluence of factors I have been invited to the Khoury College commencement both as a PhD receiver and as a college faculty, so this will be me in Summer 2024
ChatGPT and other models are storing prompts from users, so if you share something truly personal about yourself hoping to get a clever response, OpenAI can use it to show you targeted ads or worse. Regulators everywhere should be concerned. h/t
@eddie_monie
Legit question: Why is Biryani not as famous/popular in the west as compared to, say, Tikka Masala? Like, Biryani is clearly better but so few places out here do it well. In most South Asian restaurants, "biryani" is just yellow rice with bland chicken cubes 🥲
What I don’t understand is: why is a certain group of people so completely set on establishing GPT-4 as AGI (it’s not) that they go so far as to write articles about robot rights? Like, we can argue about sentience, but why do you want a tool/device to have rights anyway?