Meta's AI Boss Says He's DONE With LLMs…
Join my AI Academy – https://www.skool.com/postagiprepardness
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website – https://theaigrid.com/
Links From Today's Video:
https://www.youtube.com/watch?v=eyrDM3A_YFc&t=1810s
Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.
Was there anything I missed?
(For Business Enquiries) contact@theaigrid.com
Music Used
LEMMiNO – Cipher
https://www.youtube.com/watch?v=b0q5PR1xpA0
CC BY-SA 4.0
LEMMiNO – Encounters
https://www.youtube.com/watch?v=xdwWCl_5x2s
#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

@ValueInvestingRevisited
July 31, 2025 at 2:53 pm
These guys are working on ending humanity, and they are passionate about it.. wow!!
@AcidGubba
July 31, 2025 at 2:53 pm
There isn't a single LLM that generates revenue for its company right now. It's like the dot-com bubble. Claude expects to break even in three years, while ChatGPT expects to break even in five, and they say they need to multiply revenue by 12 to reach their goal. It's so funny how people speculate about the future instead of actually looking at the facts. Then maybe people would better understand why LLMs hallucinate, and why it's not a flaw in the AI but a logical consequence of how an LLM works.
@sethatron100
July 31, 2025 at 2:53 pm
Use LISP and you're halfway there
@dantezco
July 31, 2025 at 2:53 pm
Of course not. They must be spending a ton of money/time/resources for no return.
@isaacbernardocaicedocastro4835
July 31, 2025 at 2:53 pm
So am I 😂
@lotyogipityu7992
July 31, 2025 at 2:53 pm
Meta AI fails at translation, the guy is a clown.
@EzDoesntExist
July 31, 2025 at 2:53 pm
"I am excited in things that current tech community may get excited about 5 years from now"
Massive
@jytou
July 31, 2025 at 2:53 pm
LLMs are just trained to predict the next word using statistics over large corpora. End of story. There is no "reasoning", no "understanding", no "logic" embedded in them, whatsoever. Try to make them do something that could be inferred from their training data but that wasn't explicitly in it, and they fail miserably.
For instance, ask them to play shogi. They can spit out the rules of the game in a second. But they are unable to play, because they can't do "logic", "strategy", or even "tactics" unless trained on that particular tactic, and you're lucky if there isn't much variation between your question and what they were trained on. Ask them to encode any number using the major system. They can spit out the rules but are totally unable to make any sense of them or apply them.
That's not intelligence; that's just a very elaborate statistical machine. When anyone mentions AGI when speaking about LLMs, I laugh out loud.
For AGI, we need totally different beasts: systems that model the world and can infer logical rules from experience based on some goal and reward. Those systems have existed for a long time, and we're still quite far from modelling the whole world, but some of them are evolving rapidly and might become usable faster than we guess. At first, we might couple them with LLMs to make them interact with humans, but sooner or later they won't need LLMs at all, as they will learn language by themselves, as they will for everything else. Language is a tough one, hence perhaps this intermediate step with LLMs, but that won't last.
@GoelWCS
July 31, 2025 at 2:53 pm
To put it briefly: LLMs are limited by their data. So there are two paths for AI improvement: build latent spaces ex nihilo (e.g. from rules rather than reinforcement learning) and/or build data following rules.
A third path would be to get rid of tokens…
@adamschneider868
July 31, 2025 at 2:53 pm
I believe we'd have AGI if only it had persistent memory and could learn and adjust its weights and knowledge base day to day.
However, I think Mixture of Experts will be how this plays out.
One expert is the LLM, and it could even be made up of its own mixture of experts.
Then Visual, and Physical.
It probably needs to be sub-divided more than that.
@sdwone
July 31, 2025 at 2:53 pm
We've seen this story before haven't we???
DotCom Bubble
Sub Prime Mortgages
Big Data
Metaverse
NFTs
Crypto
Tulips
And that's just off the top of my head!
LLMs are useful… That's undisputed! But actually intelligent or sentient? You must be joking!!! Time to put this BS hype to bed people… And get your asses back to work!!! Because AGI and them robots won't be coming anytime soon…. If ever!!! 😁
@lmenascojr
July 31, 2025 at 2:53 pm
The thing all these models miss in mastering real-world modeling is the reason living beings master it with ease: the body they were born in. It's loaded with sensors, the majority of which AREN'T visual! Long before we are able to interpret visual cues, we experience inertia, temperature, and substance, and build our "world model" through these sensors as we develop. Trying to do it visually alone is like training a baby born paraplegic who can't even move a muscle or feel them move. Long before we come out of the womb, we are developing a sense of the real world through our developing bodies. AI needs a real physical body with fine enough sensory feedback to let it experience the real world. If you have any doubt about it, walk with your eyes closed, then ask a robot to do the same thing. But by the same reasoning, I believe that at this point LLMs are more than capable of becoming sentient, much like a blind paraplegic could still reason.
@lelabia
July 31, 2025 at 2:53 pm
So TRUE 🙂
@spokoman23
July 31, 2025 at 2:53 pm
Ah, so this is the guy who brings us Skynet… Great
@StrategicCIS
July 31, 2025 at 2:53 pm
What he's saying is that he's not excited about finite word calculators anymore. But I don't get how imagining something that requires his verbal direction and your vocabulary doesn't have anything to do with language.
@HerbieBancock
July 31, 2025 at 2:53 pm
From the people who brought you NFTs.
@Reg-u1r
July 31, 2025 at 2:53 pm
I mean, LeCun didn't invent LLMs, nor did he contribute anything significant to their progress. He's probably correct that they're a dead end as far as practical complex learning goes, but he doesn't have any better ideas on a concrete architecture or methods that can beat transformers at anything either. What exactly does he have to offer?
Also, LeCun's knowledge of neuroscience and cognition seems to stop at the mostly-debunked pop-psych book by an economist that popularized the over-simple, non-predictive, four-humors-style "System 1 and System 2" model of cognition… yeah… even if he ends up being right, it will be for the wrong reasons. He seems way out of his depth.
@yunleung2631
July 31, 2025 at 2:53 pm
the guy who invented CNNs did that?
@seancooper5007
July 31, 2025 at 2:53 pm
Hear, hear
@manoelramon8283
July 31, 2025 at 2:53 pm
I would love to see him saying "I am not interested in Leetcode anymore"
@abdough6549
July 31, 2025 at 2:53 pm
still a French socialist who likes bureaucracy
@c2454
July 31, 2025 at 2:53 pm
Why doesn't he invent something better then?
@Grouiiiiik
July 31, 2025 at 2:53 pm
The comments in this video are the display of the wise man pointing at the moon and the idiot looking at the finger.
And I think Lecun is right, I always argued that the tokenization is the problem when discussing it with friends and colleagues. LLMs are good guessers but one cannot survive only with guessing.
We need models that can sense the real world (touch, flavor, smell, heat, pain, etc.) to build up a representation of the world we humans see. I don't think the video model he talks about will be that great, though, as it will have some similar limitations.
They need physical feedback data to build this "world" he talks about. I'd argue that a model that cannot see and cannot read could be way more useful than LLMs.
Recognizing patterns that lead to specific body transformations (leading to a better or worse outcome for the host) seems way more exciting.
@miramar-103
July 31, 2025 at 2:53 pm
He sees above the current hype… and I agree with him… also with Sir R. Penrose… that AI is 'mis-named'… this is not 'Intelligence', as that requires 'Consciousness'… which silicon chips and non-biological systems have no idea about…
@SCharlesDennicon
July 31, 2025 at 2:53 pm
If a dude behind one of the most successful LLMs said that, I would listen more carefully.
@umalt301
July 31, 2025 at 2:53 pm
"I'm not interested in a Master of Laws (法学修士) anymore.."
A Master of Laws? 😂
@aubreyxengland
July 31, 2025 at 2:53 pm
LLMs are so over.
@KevinHanna
July 31, 2025 at 2:53 pm
He's not concerned that LLMs are unable to predict the physical world. He's saying the physical world is unpredictable and transformers are wasting much of their resources trying to do that. Trying to do the impossible.
@omer1996d2
July 31, 2025 at 2:53 pm
Forget "Meta AI boss". The guy's name pops up constantly in college-level computer vision courses. He is arguably one of the fathers of modern object detection models. His name keeps appearing in major leaps forward in computer vision from the last 20 years, and computer vision is one of the fields truly changed forever by neural networks, as they are far better at extracting low-level features than any human-made solution can be.
@b3ng3b
July 31, 2025 at 2:53 pm
This guy, along with Zuck, is responsible for bringing AI to China. This should be further investigated.
@ivaniliev93
July 31, 2025 at 2:53 pm
more money for Nvda, I guarantee you that
@vishalk9788
July 31, 2025 at 2:53 pm
Calling LLMs intelligence in itself defeats the definition and purpose of AI
@dragonsandfliesandpotatoes
July 31, 2025 at 2:53 pm
All LLMs do is generate fake content and plagiarism. That might be useful for some, but the end result is bad.. very bad
@vorpled
July 31, 2025 at 2:53 pm
Reasoning LLMs can do some form of abstract reasoning about the visual, physical world.
I asked one to recommend particular origami-type folds, similar to Turkish map folds, that I could use to distort an image printed on a sheet of paper while still being able to see the entire image after folding.
It went through a large list of different folds and was able to rule out the ones that would hide parts of the image when folded – without ever having seen, held or folded a piece of paper.
The list it gave me after "thinking" about it was pretty spot on, and even gave tips about particular techniques to use to make sure more of the image remained visible.
@vorpled
July 31, 2025 at 2:53 pm
Whatever it is has to have persistent, long-term memory to be truly useful to individual humans. It needs to remember the needs, choices, biases, judgement, etc. of each individual person it interacts with, so that it works well for each of those people and can achieve both its goals and the goals of the people it interacts with.
@vorpled
July 31, 2025 at 2:53 pm
Sure. But that's like saying "Your bicycle is not going to win the Indy 500". That's not what bicycles are for. Did anyone think LLMs were for reaching AGI?
@Ayvengo21
July 31, 2025 at 2:53 pm
Here is the thing I ask myself: if we have such great progress in AI and all that, why do we still send people into mines instead of robots/drones? How could that be cheaper? Just pumping water and providing ventilation so that people can stay alive underground costs a fortune. Yet all our AI progress affects jobs that don't carry any risk of death, injury, or prolonged health damage, instead of the jobs that actually have all those issues.
@infinitelog
July 31, 2025 at 2:53 pm
12:06 I really want to see those error rates on a Bollywood movie now lol
@JohnBStephensJr
July 31, 2025 at 2:53 pm
And he works for Facebook. Creating sex chat bots.
@PedroZola-k1m
July 31, 2025 at 2:53 pm
I agree with the notion of trying new approaches to training LLMs, but the idea of a training model similar to a human's is a little anthropocentric, in a way. Sure, it'd possibly interact with humans in a way that seems more logical to us, but as we move toward more advanced models, nothing dictates that our learning method remains the best as complexity grows.
@Benoit-l5v
July 31, 2025 at 2:53 pm
Llama is honestly very far from OpenAI and Anthropic in terms of quality, so I would take this with a grain of salt
@dvs6121
July 31, 2025 at 2:53 pm
SERIOUS QUESTION: 6:39 what graphical tool makes that animation?? It's like swimming through a graph…..
@FourOhFour-d5v
July 31, 2025 at 2:53 pm
At what point do we introduce stereoscopic images into training models? There has to be value there.
@kartikpodugu
July 31, 2025 at 2:53 pm
Didn't we observe something called emergent properties in LLMs, where they start performing well at tasks they weren't trained on, and eventually perform zero-shot? Doesn't that mean they are ready for System 2 tasks? There may be better architectures for System 2, but saying LLMs can't do it is, I think, an exaggeration. Also, he keeps saying "text" again and again, and that's not accurate in my view. ViTs are trained on patches of images; they use the transformer architecture, but they don't use text tokens. I think he wants people to stop obsessing over LLMs, and I agree with that, but confining LLMs to text only isn't right either.
@humble_integrity
July 31, 2025 at 2:53 pm
He's also not interested in QPUs
@krustserver
July 31, 2025 at 2:53 pm
There are two groups of people in the comments: people who are software engineers and those who aren't, and it shows 😂
@terentij5002
July 31, 2025 at 2:53 pm
Abstract thinking and language are the foundation of civilization, in contrast to the animal kingdom.
Comments are closed.