The job of beneficial AGI is to make a plan, a plan for every human and their every goal.
The job of safe AGI is to ensure the plan is scrutable, debatable and modifiable until it's ready to be executed by someone or something.
Chollet, with a masterful understanding of understanding and intelligence.
Remember, his proposed benchmark ARC came well before the GPT hype and has stood the test of time.
To understand X means you have the ability to act appropriately in response to situations related to X -- for instance, you understand how to make coffee in a kitchen if you can walk into a random kitchen and make coffee.
I am intrigued by Q* too. Other than the name, here are the provided clues. Let's assume they're true.
1. It was able to solve math problems that it hadn't seen before (kind of a breakthrough)
2. It allowed OpenAI to overcome limitations on obtaining high-quality data to train new models
1/n
Intriguing question: can humans theoretically create a machine that can reason, learn & understand like humans, without humans first understanding how they reason, learn & understand themselves?
@ShlomoArgamon
@GaryMarcus
@Grady_Booch
@danieldennett
@jmj
Probably, a social network that prepends this to every post
"I am not sure but I think/feel .."
And perhaps, appends this too.
"Happy to chat & learn better."
Combining the mysterious magic of deep learning with symbolic search (aka neuro-symbolic or hybrid AI models) is something that
@GaryMarcus
,
@Grady_Booch
, and others have predicted for a long time.
Q* & GB's work hints at it as the future of large models.
Here's my conversation with Gary Marcus (
@GaryMarcus
) about the future of AI systems that may achieve common sense reasoning with a hybrid of deep learning and symbolic AI.
@zachtratar
If there is something, here are the clues:
Nov 1: SamA says LLMs are not enough. We need a big breakthrough for AGI.
Nov 6: SamA “everything today will seem quaint next year”
Nov 16: SamA “4 times pushed the frontier of discovery, last one within last couple of weeks”
Nov 17:
@garybasin
rumour is that it's intentionally nerfed for reasons (possibly cost, time, doomerism/regulation, also forcing OpenAI's hand to show some cards soon)
Personal bet: the next AI breakthrough will come from focusing on the right questions, not the right answers.
AI today is mostly focused on predicting/recommending the right product, song, video, search link, social connection, self-driving action, investment decision.
More to come👇
@sterlingcrispin
@peterthiel
Could you help by actually articulating the business or human problem that could be solved with even one of the datasets?
That will help AI founders and techies rally around it.
@fchollet
Nothing at all! It's a great language for data science/AI that we use as our first choice in our product. I would have associated such deep insights with a book named "Deep Deep Learning" vs a practitioner's book.
@PedanticRomantc
I see why you’d feel that way initially. Here, one could 1) choose never to sign up for
@LambdaSchool
or 2) choose to make less than $50k after the program, or 3) if you get lucky, just pay off the $30k. Not sure how that's anything like as atrocious as slavery.
CC
@paulg
@pt
@bresslernation
@mattbilinsky
Isn’t this how?
Target ads (by demo & psychographics) -> UTM params -> for visitors with UTM, add a cookie (& account profile if they sign up) -> retarget/drip campaign for the rest of the visitors.
Repeat for all subset/target segments. Now I've got user data in my system.
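A minimal sketch of the funnel's first hop (the UTM parameter names are the standard ones; the base URL and campaign values are made-up examples): tag the ad link, then read the params back server-side to decide which segment to cookie.

```python
from urllib.parse import urlencode, urlsplit, parse_qs

def utm_url(base, source, medium, campaign):
    """Append standard UTM tracking parameters to an ad landing URL."""
    params = urlencode({"utm_source": source,
                        "utm_medium": medium,
                        "utm_campaign": campaign})
    return f"{base}?{params}"

# Hypothetical campaign targeting a demographic segment
url = utm_url("https://example.com/landing", "facebook", "cpc", "q4_demo_25_34")
print(url)

# Server side: parse the params back out to pick the segment cookie to set
qs = parse_qs(urlsplit(url).query)
segment = qs["utm_campaign"][0]
print(segment)
```

Each segment gets its own `utm_campaign` value, so the retarget/drip list falls out of grouping visitors by that one field.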
Deep Emissions with Deep Learning!
Clearly, there is an efficiency gap between DL and human cognition. There are better models. “Need a Newton of AI.”
@GaryMarcus
@atShruti
“In the short term, it’s a voting machine” with a lot of voters right now (ex-gamblers, newly bored, newly rich, wannabe rich, people with total faith in the printer)
“In the long term, it’s a weighing machine” (Printer may stop after Nov 3)
Super excited about our new research direction for aligning smarter-than-human AI:
We finetune large models to generalize from weak supervision—using small models instead of humans as weak supervisors.
Check out our new paper:
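A toy numerical sketch of the weak-to-strong idea (this setup is entirely my own illustration, not the paper's method): a "weak supervisor" labels training data with 20% random errors, and a simple logistic-regression "student" trained only on those noisy labels still ends up more accurate than its supervisor on the true labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth is a linear rule; the weak supervisor flips 20% of labels.
n, d = 2000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y_true = (X @ w_true > 0).astype(int)

flip = rng.random(n) < 0.2                 # weak supervisor error rate
y_weak = np.where(flip, 1 - y_true, y_true)

# Plain gradient-descent logistic regression, trained on the WEAK labels only
w = np.zeros(d)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))         # sigmoid predictions
    w -= 0.1 * X.T @ (p - y_weak) / n      # mean logistic-loss gradient

pred = (X @ w > 0).astype(int)
weak_acc = (y_weak == y_true).mean()       # ~0.80 by construction
strong_acc = (pred == y_true).mean()       # student beats its supervisor
print(weak_acc, strong_acc)
```

Because the label noise is symmetric, fitting through it recovers roughly the true decision boundary, which is the generalization-beyond-the-supervisor effect the tweet is excited about, in miniature.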
Excellent debate initiated by
@GaryMarcus
What you just did is exactly what GPT-3 cannot do without "understanding", i.e. identify gaps in its own knowledge and:
1) ask questions to fill those gaps, OR
2) say "I screwed up" with any conviction.
It matters, because otherwise you can't rely on it.
Interesting debate on what it means for GPT-3 to have a model of the world and whether that means it actually understands it. And whether any of that matters.
We're starting to see top companies spend the same amount on RLHF as on compute in training ChatGPT-like LLMs.
For example, OpenAI hired >1,000 devs to RLHF their code models.
Crazy, but soon companies will start spending hundreds of millions or billions of dollars on RLHF, just as with compute.
Next-step prediction is beautiful because it encourages, as a model gets extremely good, learning the underlying process that produced that data.
That is, if a model can predict what comes next super well, it must be close to having discovered the "underlying truth" of its data.
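A tiny concrete version of that claim (my own toy example): a bigram "next-character predictor" trained by counting. Once it predicts the next character perfectly, it has, in effect, recovered the simple repeating rule that generated the data.

```python
from collections import Counter, defaultdict

# Data produced by a trivial underlying process: repeat "abc"
text = "abcabcabcabcabc"

# "Training" = counting which character follows which
counts = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    counts[prev][nxt] += 1

def predict_next(ch):
    """Most likely next character given the previous one."""
    return counts[ch].most_common(1)[0][0]

print(predict_next("a"))  # -> "b": the model has recovered the a->b->c->a rule
```

Scaled up by many orders of magnitude, that's the intuition: to drive prediction loss low enough, the model is pushed toward the process behind the data, not just its surface.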
@SparklinPM
@paulg
People want:
Sugar & salt ~ McDs
So they quench thirst for info ~ google
And for hoarding stuff ~ amazon
To Contribute back ~ Microsoft
For group acceptance/validation ~ Fb/LI
Yet being different ~ Apple
Today we officially "throw our hat into the ring" and take on the challenge of bringing ethics, transparency and fairness to probably the most crucial human development, i.e. artificial intelligence. I am proud to join an incredible team in this journey.
Today, we're proud to launch our official website alongside the AskWhai Manifesto: a document that explains why we believe AI isn't serving the needs of small businesses or their customers, and how we're going to fix it. Read it here:
@fchollet
The discourse is about unintended outcomes of intentional objectives. Autonomy is neither on nor off. I believe Adept and Inflection are just two examples of semi-autonomous systems, as are all the robotics and self-driving companies.
@sarah_cone
Funding pullback by NSF, DoD and DARPA for fundamental, long-term technologies over the last 20-30 years.
Overindexing by VCs on safe and familiar bets such as B2B SaaS and social media apps. The US is already behind China on real-time AI, and risks falling behind on chip production.
@schlaf
@jack
Launch a venture fund for startups built on twitter’s social/interest graph.
And secretly fund companies that can improve the social, economic and climate outcomes for everyone
@andrewchen
Just came to Bay Area from Chicago to:
1. Escape winter and get warmer weather/ outdoors for the toddler (trigger)
2. Remote work for my spouse (enabler)
3. Build moon (chasing dream)
Sorry, causality is complex 😊
If VERY fast LLM inference was in your holiday plans, you are very lucky. Multiple new techniques, models and even architectures dropped in the last few days.
Here are some of them.
I think this works. “Organizing the world’s information and making it universally accessible in order to enhance human decision-making 10X and elevate its consciousness.”
@garybasin
Like the Turing Test, AGI is being co-opted away from its core essence. Soon we will realize it's a bad term. Of course there is a lot of "general" and "intelligence" in GPT-4, but it's not close to biological/human intelligence. Multiple flaws in his arguments...
@satyanutella_
Also, he seems to be idealistic about broad outcomes despite quoting the Gita/Krishna. Even the most powerful/willful humans have had way less control over broader outcomes than expected.
@garybasin
That's fair. This is a company that learned by putting billions into self-driving with no return so far.
GenAI may turn out to be a money pit. The ROI from GPT-4 -> 5 is quite speculative, and might be negative for Google given their core business.
Time to step back and see what's the deal. LLMs are probabilistic & compressed generators of their training data. They do NOT see all inputs/outputs (and permutations) for arbitrary math problems -> hopeless. Can we instead trade inference time for accuracy?
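One simple way to trade inference time for accuracy, sketched as a simulation (the 70% solver and the sample counts are made-up numbers, not from any real model): sample several independent answers and take a majority vote.

```python
import random

random.seed(0)

def sample_answer(p_correct=0.7):
    """Simulated solver that answers correctly 70% of the time."""
    return "right" if random.random() < p_correct else "wrong"

def majority_vote(k):
    """Spend k samples of inference and return the majority answer."""
    votes = [sample_answer() for _ in range(k)]
    return max(set(votes), key=votes.count)

trials = 2000
acc1 = sum(majority_vote(1) == "right" for _ in range(trials)) / trials
acc15 = sum(majority_vote(15) == "right" for _ in range(trials)) / trials
print(acc1, acc15)  # 15 samples per question is far more accurate than 1
```

Same model, more inference-time compute, better answers, which is the trade the tweet is asking about.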
New YouTube video: 1hr general-audience introduction to Large Language Models
Based on a 30min talk I gave recently; it tries to be a non-technical intro, covering mental models for LLM inference, training, finetuning, the emerging LLM OS and LLM security.
@Plinz
Social scientists concur: true happiness is the following:
1. Belong to some club(s)
2. Make unique contributions to them
3. Be appreciated by other members for #2