
Aaron J. Snoswell, PhD
@aaronsnoswell
922 Followers · 5K Following · 384 Media · 5K Statuses
Research fellow in computational law @ADMSCentre. Other half of @CSnoswell. Tweets about robotics, machine learning and game development (薛嘉伦).
Brisbane, Australia
Joined June 2009
RT @ConversationEDU: Generative AI models are like auto-complete on steroids. Bots learned to converse by reading text scraped from interne…
theconversation.com
Generative AI models are like auto-complete on steroids. Bots learned to converse by reading text scraped from internet sites – and they’re not always accurate.
RT @RandomSamplePOD: Our latest episode is out! We explore the new reality of #AI & how it's already impacting legal practice & policy. It'…
RT @QUTDataScience: A team from our Centre just submitted a response to the Australian Government's @IndustryGovAu Proposal Paper on introd…
RT @ConversationEDU: The 2024 Physics Nobel laureates developed computer systems that can memorise and learn from patterns in data. Aaron J…
theconversation.com
John Hopfield and Geoffrey Hinton, the 2024 Physics Nobel laureates, developed computer systems that can memorise and learn from patterns in data.
My colleagues Patrik and Jairu: ‘Side job, self-employed, high-paid’: behind the AI slop flooding TikTok and Facebook via @ConversationEDU @qutdmrc.
theconversation.com
In places like India, Vietnam and China, churning out weird AI videos is the latest side hustle for students and stay-at-home mothers.
RT @IR_oldie: Great seeing @aaronsnoswell's article on Model Collapse getting reposted on the ABC. It's a great read, Aaron!
abc.net.au
Generative AI needs tons of data to learn. It also generates new data. So, what happens when AI starts training on AI-made content?
New Conversation article - my 2c on Model Collapse. With the @QUT GenAI lab, @qutdmrc, @QUTDataScience, and @AdmsCentre!
Could artificial intelligence get… well… less intelligent? Plenty of prophets and newsmongers are getting breathless with talk of an impending catastrophic "model collapse". 📝@aaronsnoswell @QUT
To train GPT-3, OpenAI needed over 650 billion English words of text – about 200x more than the entire English Wikipedia. But this required collecting almost 100x more raw data from the internet, up to 98% of which was then filtered and discarded 🤯
arxiv.org
Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic...
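The filtering figures in the tweet above can be sanity-checked with some back-of-envelope arithmetic. This is a minimal sketch using the tweet's own rounded numbers (650 billion words kept, "up to 98%" discarded), not the exact corpus sizes reported in the paper:

```python
# Back-of-envelope check of the filtering figures quoted in the tweet above.
# These are the tweet's rounded numbers, not exact values from the GPT-3 paper.
kept_words = 650e9        # ~650 billion words of text kept for training
discard_rate = 0.98       # "up to 98%" of the raw crawl filtered out

# If 98% of the raw data is discarded, the raw collection must total
# kept / (1 - discard_rate) words.
raw_words = kept_words / (1 - discard_rate)

print(f"Implied raw collection: {raw_words / 1e12:.1f} trillion words")
print(f"Raw-to-kept ratio: {raw_words / kept_words:.0f}x")
```

At a 98% discard rate the raw crawl works out to roughly 50x the kept text; the tweet's "almost 100x" corresponds to a discard rate closer to 99%, so the two rounded figures bracket the same order of magnitude.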
It was a pleasure to attend @QUT's first-ever Faith and Spirituality Research Symposium at the beautiful Gardens Point campus recently.
I'm hiring! Come do an awesome PhD in Value Alignment with me in sunny Brisbane, Australia at the @QUT Generative AI lab, in collaboration with @qutdmrc, @QUTDataScience, and @AdmsCentre!
RT @QUTDataScience: The leaders of our Centre's Responsible #DataScience & #AI program laying out their plans for 2024. But it's not just f…
RT @natolambert: Honestly, the least flashy stuff in the AI news cycle but stuff that is incredibly important. Lots of leading orgs includi…