
Meta's AI Boss Says He's DONE With LLMs…

TheAIGRID | July 31, 2025



Join my AI Academy – https://www.skool.com/postagiprepardness
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website – https://theaigrid.com/

Links From Today's Video:
https://www.youtube.com/watch?v=eyrDM3A_YFc&t=1810s

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries) contact@theaigrid.com

Music Used

LEMMiNO – Cipher
https://www.youtube.com/watch?v=b0q5PR1xpA0
CC BY-SA 4.0
LEMMiNO – Encounters
https://www.youtube.com/watch?v=xdwWCl_5x2s

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

Written by TheAIGRID

Comments

This post currently has 47 comments.

  1. @AcidGubba

    July 31, 2025 at 2:53 pm

    There isn't a single LLM that generates revenue for the company right now. It's like the dot-com bubble. Claude expects to break even in three years, while ChatGPT expects to break even in five years, and they say they need to multiply revenue by 12 to reach their goal. It's so funny how people speculate about the future instead of actually looking at the facts. Then maybe people would better understand why LLMs hallucinate, and why it's not a flaw in the AI, but a logical consequence of how an LLM works.

  2. @jytou

    July 31, 2025 at 2:53 pm

    LLMs are just trained to predict the next word with statistics on large corpora. End of story. There is no "reasoning", no "understanding", no "logic" embedded in them, whatsoever. Try to make them do something that could be inferred from their training data but that wasn't explicitly in the training data, and they fail miserably.

    For instance, ask them to play shogi. They can spit out the rules of the game in a second. But they are unable to play because they can't do "logic", "strategy" or even "tactics" unless they were trained on that particular tactic and you're lucky enough that there is not much variation between your question and what they were trained on. Ask them to encode any number using the major system. They can spit out the rules but are totally unable to make any sense of them or apply them.

    That’s not intelligence, that’s just a very elaborate statistical machine. When anyone mentions AGI when speaking about LLMs, I’m laughing out loud.

    For AGI, we need totally different beasts: systems that model the world and can infer logical rules from experience based on some goal and reward. Those systems have existed for a long time, and we're still quite far off modelling the whole world, but some of them are evolving rapidly and they might become usable faster than we guess. At first, we might couple them with LLMs to make them interact with humans, but sooner or later they won't need LLMs at all, as they will learn language by themselves just as they will for everything else. Language is a tough one, hence maybe this intermediate step with LLMs, but that won't last.
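    [Editor's note: the "next-word prediction with statistics" claim above can be made concrete with a deliberately toy sketch. Real LLMs learn a neural distribution over subword tokens, not raw bigram counts; this hypothetical example only illustrates the shape of the idea.]

    ```python
    import random
    from collections import defaultdict

    # Toy "statistical next-word prediction": count which word follows
    # which in a tiny corpus, then sample the next word in proportion
    # to those counts. Real LLMs replace the count table with a learned
    # neural network over subword tokens.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_word(prev):
        candidates = counts[prev]
        words = list(candidates)
        weights = [candidates[w] for w in words]
        return random.choices(words, weights=weights)[0]

    print(next_word("the"))  # one of: "cat", "mat", "fish"
    ```

    Nothing in the table "understands" cats or mats; the model only reproduces co-occurrence statistics, which is the commenter's point.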

  3. @GoelWCS

    July 31, 2025 at 2:53 pm

    To put it shortly: LLMs are limited by their data. So there are two paths for AI improvement: build latent spaces ex nihilo (e.g. from rules, rather than reinforcement learning) and/or build data following rules.
    A third path would be to get rid of tokens…

  4. @adamschneider868

    July 31, 2025 at 2:53 pm

    I believe we have AGI, if only it had persistent memory and could learn and adjust its weights and knowledge base day to day.

    However, I think Mixture of Experts will be how this plays out.

    One expert is the LLM, and it could even be made up of its own mixture of experts.
    Then visual, and physical.

    It probably needs to be sub-divided more than that.
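    [Editor's note: the Mixture-of-Experts idea the comment describes can be sketched minimally. In real MoE layers the experts and the gate are learned neural sub-networks; the expert functions and scores below are made-up placeholders for illustration.]

    ```python
    import math

    # Placeholder "experts" standing in for the LLM, visual, and
    # physical sub-models the comment imagines.
    def language_expert(x):
        return f"language({x})"

    def vision_expert(x):
        return f"vision({x})"

    def physical_expert(x):
        return f"physical({x})"

    experts = [language_expert, vision_expert, physical_expert]

    def softmax(scores):
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    def route(x, gate_scores):
        # The gate assigns a weight to each expert; here we dispatch
        # to the single highest-weighted expert (top-1 routing).
        weights = softmax(gate_scores)
        best = max(range(len(experts)), key=lambda i: weights[i])
        return experts[best](x)

    print(route("query", [2.0, 0.5, 0.1]))  # -> "language(query)"
    ```

    In production MoE models the gate scores come from a trained router network, and several experts may be mixed rather than picking just one.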

  5. @sdwone

    July 31, 2025 at 2:53 pm

    We've seen this story before haven't we???

    DotCom Bubble
    Sub Prime Mortgages
    Big Data
    Metaverse
    NFTs
    Crypto
    Tulips

    And that's just off the top of my head!

    LLMs are useful… That's undisputed! But actually intelligent or sentient? You must be joking!!! Time to put this BS hype to bed people… And get your asses back to work!!! Because AGI and them robots won't be coming anytime soon…. If ever!!! 😁

  6. @lmenascojr

    July 31, 2025 at 2:53 pm

    The thing all these models miss in mastering real-world modeling is the reason living beings master it with ease: the body they were born in. It's loaded with sensors, the majority of which AREN'T visual! Long before we are able to interpret visual cues, we experience inertia, temperature and substance, and build our "world model" through these sensors as we develop. Trying to do it visually alone is like training a paraplegic baby, born that way, who can't even move a muscle or feel one move. Long before we come out of the womb, we are developing a sense of the real world through our developing bodies. AI needs a real physical body with fine enough sensory feedback to let it experience the real world. If you have any doubt about it, walk with your eyes closed, then ask a robot to do the same thing. But by the same reasoning, I believe that at this point LLMs are more than capable of becoming sentient, much like a blind paraplegic can still reason.

  7. @StrategicCIS

    July 31, 2025 at 2:53 pm

    What he's saying is that he's not excited about finite word calculators anymore. But I don't get how imagining something that requires his verbal direction and your vocabulary doesn't have anything to do with language.

  8. @Reg-u1r

    July 31, 2025 at 2:53 pm

    I mean, LeCun didn't invent LLMs, nor did he contribute anything significant to their progress. He's probably correct that they're a dead end as far as practical complex learning goes, but he doesn't have any better ideas on a concrete architecture or methods that can beat transformers at anything either. What exactly does he have to offer?

    Also, LeCun's knowledge of neuroscience and cognition seems to stop at the mostly-debunked pop-psych book by an economist that popularized the over-simple, non-predictive 'System 1 and System 2' four-humors-style model of cognition… yeah… even if he ends up being right, it will be for the wrong reasons. He seems way out of his depth.

  9. @Grouiiiiik

    July 31, 2025 at 2:53 pm

    The comments on this video are a display of the wise man pointing at the moon and the idiot looking at the finger.
    And I think LeCun is right; I've always argued that tokenization is the problem when discussing it with friends and colleagues. LLMs are good guessers, but one cannot survive on guessing alone.
    We need models that can sense the real world (touch, flavor, smell, heat, pain, etc.) to build up a representation of the world we humans see. I don't think the video model he talks about will be that great, though, as it will have some similar limitations.
    They need physical feedback data to build this "world" he talks about. I'd argue that a model that cannot see and cannot read could be way more useful than LLMs.
    Recognizing patterns that lead to specific body transformations (leading to a better or worse outcome for the host) seems way more exciting.

  10. @miramar-103

    July 31, 2025 at 2:53 pm

    He sees above the current hype… and I agree with him… also with Sir Roger Penrose… that AI is 'mis-named'… this is not 'Intelligence', as that requires 'Consciousness'… which silicon chips and non-biological systems have no idea about…

  11. @KevinHanna

    July 31, 2025 at 2:53 pm

    He's not concerned that LLMs are unable to predict the physical world. He's saying the physical world is unpredictable, and transformers are wasting much of their resources trying to do that. Trying to do the impossible.

  12. @omer1996d2

    July 31, 2025 at 2:53 pm

    Forget "Meta AI boss". The guy's name pops up constantly in college-level computer vision courses. He is arguably one of the fathers of modern object detection models. His name keeps popping up in major leaps forward in computer vision over the last 20 years, and computer vision is one of the fields that has truly been forever changed by neural networks, as they are way better at extracting low-level features than any human-made solution can be.

  13. @vorpled

    July 31, 2025 at 2:53 pm

    Reasoning LLMs can do some form of abstract reasoning about the visual, physical world.

    I asked one to recommend particular origami-type folds, similar to Turkish map folds, that I could use to distort an image printed on a sheet of paper while still being able to see the entire image after folding.

    It went through a large list of different folds and was able to rule out the ones that would hide parts of the image when folded – without ever having seen, held or folded a piece of paper.

    The list it gave me after "thinking" about it was pretty spot on, and even gave tips about particular techniques to use to make sure more of the image remained visible.

  14. @vorpled

    July 31, 2025 at 2:53 pm

    Whatever it is, it has to have persistent, long-term memory to be truly useful to individual humans. It needs to remember the needs, choices, biases, judgement, etc. of each individual person it interacts with to be a tool that works well for each of those people and is able to achieve both its goals and the goals of the people it interacts with.

  15. @vorpled

    July 31, 2025 at 2:53 pm

    Sure. But that's like saying "Your bicycle is not going to win the Indy 500". That's not what bicycles are for. Did anyone think LLMs were for reaching AGI?

  16. @Ayvengo21

    July 31, 2025 at 2:53 pm

    Here is the thing I keep asking myself: if we have such great progress in AI and all that, why do we still send people into mines instead of robots/drones? How could that be cheaper? Just pumping water and ventilation so people can stay alive underground costs a fortune. Yet all our AI progress affects jobs that don't carry any risk of death, injury or prolonged health damage, instead of the jobs that actually have all those issues.

  17. @PedroZola-k1m

    July 31, 2025 at 2:53 pm

    I agree with the notion of trying new approaches to train LLMs, but isn't the idea of a training model similar to a human's a little anthropocentric? Sure, it would possibly interact with humans in a way that seems more logical to us, but as we move towards more advanced models, nothing dictates that our learning method is the best as the complexity grows.

  18. @kartikpodugu

    July 31, 2025 at 2:53 pm

    Didn't we observe something called emergent properties in LLMs, where they start performing well at tasks they aren't trained on, and eventually perform zero-shot? Doesn't that mean they are ready for System 2 tasks? There may be better architectures for doing System 2, but saying LLMs can't do it is, I think, an exaggeration. Also, he keeps saying text again and again, and that's not true, according to me. ViTs are trained on patches of images; they use the transformer architecture, but they don't use tokens and text. I think he wants people to stop obsessing over LLMs, and I agree with that, but confining LLMs to only text is not right, I think.
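    [Editor's note: the ViT point above — transformers consuming image patches instead of text — can be illustrated with a toy sketch. The dimensions are made up, and a real ViT would linearly project each flattened patch and add position embeddings before the transformer.]

    ```python
    # Sketch of how a Vision Transformer (ViT) turns an image into a
    # sequence of patches, the visual analogue of text tokens.
    def image_to_patches(image, patch_size):
        """Split a 2D image (list of rows) into flattened patches."""
        h, w = len(image), len(image[0])
        patches = []
        for top in range(0, h, patch_size):
            for left in range(0, w, patch_size):
                patch = [image[top + i][left + j]
                         for i in range(patch_size)
                         for j in range(patch_size)]
                patches.append(patch)
        return patches

    # A 4x4 "image" split into 2x2 patches -> 4 patches of 4 values each,
    # which a real ViT would project into embeddings for the transformer.
    image = [[r * 4 + c for c in range(4)] for r in range(4)]
    patches = image_to_patches(image, 2)
    print(len(patches), len(patches[0]))  # 4 patches, 4 values each
    ```

    The transformer architecture itself is agnostic about what the sequence elements are; only the input embedding step differs between text and images.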

Comments are closed.



