Stefan Savage (parody) -- @savage@infosec.exchange Profile

@TheSavageInMan

Followers 666 · Following 314 · Media 13 · Statuses 1K

@savage@infosec.exchange

Joined July 2015
@TheSavageInMan
Stefan Savage (parody) -- @savage@infosec.exchange
9 months
UCSD CSE is hiring tenure-track faculty in all CS subfields at both the assistant and associate/full ranks. Please apply! Assistant: Associate/Full:
3
41
119
@TheSavageInMan
Stefan Savage (parody) -- @savage@infosec.exchange
1 year
RT @sumanthvrao: Check out our (upcoming) paper @TheWebConf 2024. We test if the loose coupling b/w email filtering services (ex. Proofpoint)….
0
3
0
@TheSavageInMan
Stefan Savage (parody) -- @savage@infosec.exchange
2 years
RT @swsissec: #7: Congratulations 2023-24 SWSIS Scholar @KatherineIz! CSE Master’s student @UCSanDiego researching how to protect users a….
0
4
0
@TheSavageInMan
Stefan Savage (parody) -- @savage@infosec.exchange
2 years
RT @ucsd_cse: Happy 35th birthday CSE! That's right - we're celebrating 35 years of Computer Science and Engineering education at @UCSanDie….
0
7
0
@TheSavageInMan
Stefan Savage (parody) -- @savage@infosec.exchange
2 years
RT @TheSavageInMan: @JAldrichPL @georgemporter I think a challenge is that we don't know how to define plagiarism in formal terms. For hum….
0
1
0
@TheSavageInMan
Stefan Savage (parody) -- @savage@infosec.exchange
2 years
In my cursory thinking about this, I only find two reasonable answers: 1) No use of such tools for text generation in papers, or 2) You can use them, but we treat the whole thing as a quote (i.e., with a citation that says "ChatGPT generated this text; prompt available on request").
2
0
7
@TheSavageInMan
Stefan Savage (parody) -- @savage@infosec.exchange
2 years
Another interpretation is that authors accept responsibility for ensuring that generated text isn't plagiarized. This seems unworkable. How would you know? These systems can substitute words and modify grammar from source text (if a human did that, we'd call it plagiarism).
2
0
7
@TheSavageInMan
Stefan Savage (parody) -- @savage@infosec.exchange
2 years
One interpretation is that the second property must be proven beyond reasonable doubt before you can use such systems. Clearly that means that you can't use any of these existing LLM tools because they can all be coerced into generating text that we would call plagiarism.
1
0
6
@TheSavageInMan
Stefan Savage (parody) -- @savage@infosec.exchange
2 years
One, you can use such tools so long as they are disclosed. Two, these systems must not plagiarize content (it says more, but I think this is the conflict).
1
0
6
@TheSavageInMan
Stefan Savage (parody) -- @savage@infosec.exchange
2 years
ACM has a new draft authorship policy that, among other things, tries to thread the needle on the use of generative AI tools in paper authorship. I don't think this is a needle that can be threaded. The ACM policy has two points that I think are in conflict.
2
2
18
@TheSavageInMan
Stefan Savage (parody) -- @savage@infosec.exchange
2 years
Looking back, I think Eddie doing HotCRP was the last time our community did something that made a clear, consistent and compelling positive difference in the review process (even if only in overhead). I'm sure others will disagree and I'll be happy to be wrong.
3
0
19
@TheSavageInMan
Stefan Savage (parody) -- @savage@infosec.exchange
2 years
Maybe the other stuff helps in some way, but maybe it doesn't. There is no doubt that we have ongoing issues of scale, but it's not clear to me that there are independent issues with the review process that we have a handle on how to fix.
1
1
2
@TheSavageInMan
Stefan Savage (parody) -- @savage@infosec.exchange
2 years
My take continues to be that pretty much all of these PC-based peer review systems are going to mostly agree about the 5-10% "top" papers and about the 20% "bottom" papers, and then will do something significantly more random with the middle 70% (where most of us live).
1
1
15
@TheSavageInMan
Stefan Savage (parody) -- @savage@infosec.exchange
2 years
That said, after doing this for 30 years now, I don't think it's really gotten significantly better than when I started (i.e., more constructive reviews, lower PC overhead, more agreement with outcomes, less complaining from random people in the community about the process, etc.).
2
0
2
@TheSavageInMan
Stefan Savage (parody) -- @savage@infosec.exchange
2 years
I remember feeling that way myself and frankly, it's comforting that people feel that way -- it suggests a deep wellspring of optimism :-).
1
0
2
@TheSavageInMan
Stefan Savage (parody) -- @savage@infosec.exchange
2 years
Every generation believes that it can fix the crap the last generation left it, and every generation believes that things are fixable (i.e., that we can get significantly closer to some unstated Platonic ideal of conference paper reviewing).
1
0
2
@TheSavageInMan
Stefan Savage (parody) -- @savage@infosec.exchange
2 years
People can get very worked up about this stuff, and thus details like how long the rebuttal is allowed to be, and whether such a limit is enforced or not, can suddenly become the issue of the day. Now, I don't blame folks for feeling that way.
1
0
1
@TheSavageInMan
Stefan Savage (parody) -- @savage@infosec.exchange
2 years
different incentives and consensus mechanisms for reviewers, PC meeting summaries, and countless other experiments -- all in service of improving how things work and, hopefully, outcomes.
1
0
1
@TheSavageInMan
Stefan Savage (parody) -- @savage@infosec.exchange
2 years
various external ethical review systems, extremely large and broad PCs, heavy PCs, lite PCs, different kinds of paper rankings, different kinds of review assignments, incentives to provide data, incentives to provide working code, different kinds of conflict checking,
1
0
2