@jordanbpeterson
@elonmusk
LLMs are large language models that just predict the next word, and eventually the whole sentence. Over time, and based on feedback, they can get better at predicting it correctly.
LLMs hallucinate by default; it's just that they need to do it correctly, and often.
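As a rough illustration of the "just predict the next word" idea, here is a toy bigram sketch in Python. This is a deliberately simplified stand-in, not how a real LLM works (LLMs use neural networks over tokens, not word counts), and the corpus is made up:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which
# in a tiny corpus, then predict the most frequent successor.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Pick the word most often seen after `word` in the corpus.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

Note the model will happily "predict" something even when it has no real basis for it, which is the essence of why LLMs hallucinate by default.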
Excuse me,
@elonmusk
: is there any way of modifying Grok so that it does not provide scientific references that do not in fact exist? It is just as prone as ChatGPT to do so (maybe not quite as bad).
I am using it constantly as a research source. Whatever references it…
@VlVEK
@jordanbpeterson
@elonmusk
I’ve had to really train my models and work through the kinks too; one can get there, but it requires constant input, correction/feedback, and trying again. It’s still early in the game. And expanded libraries/source data would help!
@VlVEK
@jordanbpeterson
@elonmusk
Agreed. Just getting mine to stick to the character of Bender from Futurama and reply with some awesome robot sass was quite hard. I had to refine the prompts multiple times to get it to do it properly, and it will still break character in some instances.
@VlVEK
@jordanbpeterson
@elonmusk
Yes, the state of things is so dire that they're starting to call plain old programming "artificial intelligence". LLMs are going nowhere, but they provide some tools we should have had 20 years ago.
TL;DR: crecheAI will be an artificial intelligence, but it will not become sentient.