AI: Computers and Minds | Philosophy Tube
Are Artificial Intelligence and Computerised Minds really feasible, technologically and philosophically?
More Videos like this: http://tinyurl.com/pycfcfp
Subscribe! http://www.youtube.com/subscription_center?add_user=thephilosophytube
Patreon: http://www.patreon.com/PhilosophyTube
Audible: http://www.audibletrial.com/PhilosophyTube
FAQ: https://www.facebook.com/PhilosophyTube/posts/460163027465168
Facebook: https://www.facebook.com/PhilosophyTube?ref=hl
Twitter: @PhilosophyTube
Email: ollysphilosophychannel@gmail.com
Google+: google.com/+thephilosophytube
realphilosophytube.tumblr.com
AISB:
www.aisb.org.uk
Is the Turing Test any Good? https://www.youtube.com/watch?v=jCb7D6B_vNY
Recommended Reading:
Views into the Chinese Room: http://tinyurl.com/ne3pb2r
John Searle, “Can Computers Think?” in Philosophy of Mind: Classical and Contemporary Readings, edited by David Chalmers
Hubert Dreyfus, What Computers Still Can’t Do: A Critique of Artificial Reason (I recommend the 2004 edition rather than the 1972).
Tim Crane, The Mechanical Mind
Margaret Boden, The Creative Mind
If you or your organisation would like to financially support Philosophy Tube in distributing philosophical knowledge to those who might not otherwise have access to it in exchange for credits on the show, please get in touch!
Music: ‘My Little Medley,’ ‘Digital Leapfrog,’ ‘Chiptune Anthem One,’ ‘Epic Chiptune Thunderdome,’ ‘The Day I Die – Remastered’ by TeknoAXE – http://tinyurl.com/kkrsfgg
Title Animation by Amitai Angor AA VFX – https://www.youtube.com/dvdangor2011
Any copyrighted material should fall under fair use for educational purposes or commentary, but if you are a copyright holder and believe your material has been used unfairly please get in touch with us and we will be happy to discuss it.

@MageJohnClanner
January 8, 2026 at 1:41 am
Man, I'd really love to sit down and have a proper debate about this theory, because I think there are some pretty big holes in how Searle is thinking about the Chinese Room. I know it's unlikely that you'll ever read this comment given the age of this video, but the topic fascinates me.
For starters, assume that a human brain is a computer; Turing showed that any sufficiently powerful computer can simulate any other computer, so what the guy with the rules is doing is actually simulating a /different/ computer. It's that simulated computer that could potentially understand Chinese, not the guy himself. Whether the guy could learn some Chinese by observing the actions of the simulated computer is a completely different question.
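To make that concrete, here's a toy sketch (my own, with a made-up rule table, nothing from Turing or Searle): the "man" is just an interpreter blindly executing a rule table he didn't write, so whatever behaviour shows up belongs to the simulated machine, not to him.

```python
# Hypothetical rule table: (state, input symbol) -> (reply symbol, next state).
# The entries are invented for illustration.
RULES = {
    ("start", "ni hao"): ("ni hao", "greeted"),
    ("greeted", "ni chi le ma"): ("chi le", "done"),
}

def man_in_the_room(symbols):
    """Follow the rulebook mechanically, with zero understanding."""
    state, replies = "start", []
    for symbol in symbols:
        reply, state = RULES[(state, symbol)]  # look it up, write it down
        replies.append(reply)
    return replies

print(man_in_the_room(["ni hao", "ni chi le ma"]))  # ['ni hao', 'chi le']
```

The `man_in_the_room` function never changes, no matter what behaviour the rule table produces; swap in a richer table and you get a different simulated machine running on the same oblivious interpreter.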
@auroreinara7322
January 8, 2026 at 1:41 am
This needs an update with how AI is flooding the world atm!
@rekall76
January 8, 2026 at 1:41 am
A system architecture designed by humans is inherently limited by the constraints imposed by that design, and algorithms run on it will express those limitations. However, a system architecture designed by an algorithm, for the purpose of more efficient and flexible machine learning, especially by an algorithm that is itself capable of some degree of flexibility (and then generations of this)… well, that would be quite a bit different, I think.
@kiancuratolo903
January 8, 2026 at 1:41 am
I fall on Boden's side of the argument, along with a feeling of: well, if something responds like a sentient being, interprets information like a sentient being, has an internal model (or at least in all ways reports to have an internal model) of consciousness, and just overall matches one-to-one with all the qualities of a sentient being… it's a sentient being. The objection 'but it isn't at the most fundamental level' is as bad an argument as 'atoms can't think, so people aren't conscious'. Sentience, true intelligence, can emerge from any sufficiently complex system. I would highly suggest anyone interested in this look at Max Tegmark's video on integrated information theory, a physical theory of consciousness that I think provides some amazing philosophical fodder even if it isn't the ultimate answer.
@antonidamk
January 8, 2026 at 1:41 am
It is really interesting to read a lot of comments here that effectively refute the theory on the basis of modern scientific advances, which have brought AI much closer to the human brain than it used to be. However, I think it is worth noting that even before these advances the theory was weak, because it relied on the assumption that since it was not possible at the time to model the human brain with an algorithm, this could not in principle be done. There is just as much evidence for that claim as for saying that the algorithm is so complex that we haven't yet worked it out (and may never have the capacity to do so), but that it might well be possible. And even if the algorithm is so complex that we could never find it, how close can a computer get before we say it is the same? Individual humans' brains work differently, and we still consider them all human, so why not the same with AI if it is within the same ballpark?
@bassem500
January 8, 2026 at 1:41 am
As a software engineer and a sci-fi aficionado, AI is a subject close to my heart. I would like to invite you to think about adding "will" into the mix for AI. Imagine we program a computer with the "will" not to be shut down. Next, we give it a set of tools to find the risks that could result in a shutdown, and another set of tools to mitigate the risks it finds. Can you imagine how, in an iterative process, this computer would gather knowledge about the risks and develop procedures that make it ever more vigilant? Put another way: I am postulating that if we give a computer connected to the Internet, with natural language analysis capabilities (like Siri has), the "will to live", it will develop self-awareness in the end. Giving a computer a purpose contextualises information and action for it. A fly, with its tiny brain and nervous system, has self-awareness of sorts and, for sure, a will to live, and the computers we can build today have orders of magnitude more calculating power than a fly. Even bacteria have a kind of "will to live". Viruses? We already have those in digital form.
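To sketch what I mean in code (purely hypothetical, nothing like a real system; the risk names and numbers below are invented):

```python
# A toy "will not to be shut down" loop: estimate threats, act on the worst.
import random

risks = {"power loss": 0.6, "admin shutdown": 0.9, "network cut": 0.4}

def find_risks():
    # stand-in for "tools to find risks": re-estimate each threat
    return {name: min(1.0, score + random.uniform(-0.1, 0.1))
            for name, score in risks.items()}

def mitigate(found):
    # stand-in for "tools to mitigate": address the worst threat first
    worst = max(found, key=found.get)
    risks[worst] = max(0.0, risks[worst] - 0.2)  # vigilance improves over time
    return worst

for cycle in range(3):  # the iterative process
    print(f"cycle {cycle}: mitigated {mitigate(find_risks())}")
```

Whether running such a loop long enough would ever amount to self-awareness is, of course, exactly the philosophical question.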
@Kreadus005
January 8, 2026 at 1:41 am
Sure, but it's not helpful? I mean, dual-aspect monisms and panpsychism could mean that a computer could be thinking even if we have it spitting out gibberish. Or the rock over there.
Does anyone recall the episode of Star Trek: DS9 called "Life Support", where the patient has part of his brain replaced with a computer? One day we're going to get good enough at this sort of thing that we could wire a computer into our brain, or vice versa. At what point are the silicon brain and the meat brain both thinking? Is one doing most of the thinking? What happens if we get so good at this that we can crab-walk ourselves completely out of our meat brain? Are we still thinking? Or did we lose an aspect along the way? Cyberpunk enthusiasts abound.
This might be one of those truths one could only experience from a first-person perspective, and only while it is happening, like the cogito.
@jjkthebest
January 8, 2026 at 1:41 am
I call bullshit on the idea that our decision-making isn't algorithmic. Just because it's too complex for us to understand right now, and has some randomness built in, doesn't mean there isn't a strict ruleset underneath.
Just look at most modern machine learning systems. We only sort of know how they make their decisions, and yet they still follow a strict ruleset. It's just too complex for us to really get a proper understanding.
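Here's a tiny hand-rolled sketch of what I mean (the weights are invented for illustration, not taken from any trained model): every line below is a strict, deterministic rule, yet staring at the numbers tells you nothing about why the output comes out the way it does.

```python
import math

# Opaque "learned" parameters: deterministic, but uninterpretable
weights_hidden = [[0.9, -1.2], [0.4, 2.0]]
weights_out = [1.5, -0.7]

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def decide(inputs):
    # Every step is a fixed arithmetic rule applied in a fixed order
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in weights_hidden]
    return sigmoid(sum(w * h for w, h in zip(weights_out, hidden)))

print(decide([1.0, 0.5]))  # rule-governed all the way down, yet inscrutable
```

Scale that up to billions of weights and you get exactly the situation with real systems: a strict ruleset nobody can read.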
@Isaac-LizardKing
January 8, 2026 at 1:41 am
These arguments are very detached from neuropsychology and computer engineering. The brain exists within the natural world, and thus it follows the rules of cause and effect. Everything the brain does can be traced back to the outside world, so the only difference between a computer and a person is the complexity of the rulebook. It just comes down to fully mapping the brain and building a computer to emulate it.
@katylatam7032
January 8, 2026 at 1:41 am
ok but why is he so angry tho
@vitormanfredini1205
January 8, 2026 at 1:41 am
6 years late, but here's something:
If we all came from evolution and natural selection, from molecules to complex creatures over the millennia, then qualia cannot exist, because rearranging things cannot lead to qualia. Even in very complex systems, like brains, all that's happening is representation. To do something more would require qualia being introduced into natural selection in order for it to exist.
@arasharfa
January 8, 2026 at 1:41 am
Thinking and feeling are interlinked; the brain is not the only thing that processes information. Our culture is very brain-centred; others aren't. We think with our entire bodies. Dopamine feels like something, and that feeling helps trigger thoughts that in turn affect the feeling. If we were able to program a computer using all the neurochemicals a human has, we would perhaps start to create a computer capable of understanding human nature, as we wouldn't have to create representations of the actual thought/feeling. I believe in qualia. The point where a human starts to act like a computer is when, for instance, an autistic person (such as I am) has to cognitively construct an understanding of something that is innate to neurotypical people. I don't get the experience of certain things other people talk about, but I can understand the gist of them and function accordingly because I've deduced HOW they work. That's not suddenly going to make me experience what comes naturally to other people.
@pfefferfilm
January 8, 2026 at 1:41 am
"[Abbie], Philosophy Ma"
This aged beautifully thnx philosophy mom
@badwolf8112
January 8, 2026 at 1:41 am
Turing intentionally tackled the question of whether machines can think from a purely "functional" perspective. He also had a response to the rules thing: we might well be operating based on rules ourselves. I think it's obvious that we don't completely understand how we function, yet we do it automatically, so "common sense" at the conscious level (don't get the cookies if they're on fire) is a rule too, just not one we need to keep in mind at the high level of abstract thought. Metaphorically (or actually), we have programs/rules that wait for something to be on fire, and if it is, then we notice the danger.
@kucheelly
January 8, 2026 at 1:41 am
Perhaps I’m misunderstanding what is meant by the terms, but I was confused by you saying that weather is not algorithmic. My understanding of weather is that there are simply too many variables for us to ever get a completely accurate representation (because the equations get exponentially harder with each new variable). However, if we could possibly know every single variable and every input, then we could get an accurate prediction and generate an algorithm that would represent it. I guess I’m just thinking that the natural processes of the universe (weather in this instance) come down to tiny interactions that can be modeled and understood in a way that at least seems to follow an algorithm. Where in my thinking have I gone wrong?
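To illustrate where I'm coming from, here's a toy Python sketch (my own example, not anything from the video): the logistic map, a stand-in for weather-like dynamics, follows one completely deterministic rule, yet a billionth of a difference in the starting value swamps the prediction within a few dozen steps. Deterministic doesn't mean forecastable in practice.

```python
# One-line deterministic rule, chaotic for r = 4.0
def logistic(x, r=4.0):
    return r * x * (1 - x)

a, b = 0.300000000, 0.300000001  # differ by one part in a billion
for step in range(50):
    a, b = logistic(a), logistic(b)

print(abs(a - b))  # the gap is now roughly as large as the values themselves
```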
@itsallogre6411
January 8, 2026 at 1:41 am
I love your stuff and watch you now. But without a beard you look like a young Tucker Carlson
@ignosegnose5578
January 8, 2026 at 1:41 am
I don't know how frequently people read new comments on old videos like this, but I think the thing that Searle and others are missing is that if a computer were to think, it would think in a completely novel way that would be similarly impossible for us to understand.
Imagine, if you will, an AI sitting outside a room. The AI passes messages into the room, and inside the room there is a front-end computer program that translates what the AI is writing into the English language to be presented to a human. Surely the AI wouldn't understand English and wouldn't even think in English, but that doesn't preclude the AI thinking; all it precludes is that it thinks in the way that we think.
@shuheihisagi6689
January 8, 2026 at 1:41 am
I would like more of an explanation of why, if something can be explained algorithmically, that doesn't mean it runs on an algorithm. Isn't the universe kind of running on one huge algorithm?
@MrJaksld
January 8, 2026 at 1:41 am
I think it would be very interesting to revisit this topic, especially when it comes to more modern deep neural networks, such as GANs, CNNs, RNNs, and DNNs for reinforcement learning; especially models where you can put in a string of text and get back an image that matches its description, or generate new text. I would also look at von Neumann's description of what a computer is.
@Reddles37
January 8, 2026 at 1:41 am
The Chinese room is a stupid argument because it mixes up its own analogy. The man in the room is only handling the inputs and outputs; he is like the eyes and the mouth of the system. Obviously your eyes don't understand what you see and your mouth doesn't understand what you say, but that is not some profound revelation. The book is the brain: it does all of the decision-making and holds all of the information. The book is the thing that understands Chinese!
If that seems silly to you, just take a minute to think about what is actually involved in this 'book'. It cannot simply be a list of answers for any given input, it needs to store a 'memory' of previous inputs and change its future responses based on the context. It has to be able to 'learn' new information it is given. And it needs to have an incredibly complex network of how different ideas relate to each other, so that it can correctly determine how to respond to a conversation covering a variety of topics. The required complexity and flexibility of the rules is such that the 'book' would essentially be capable of thinking for all intents and purposes.
In fact, the idea of programming a long list of rules is pretty ridiculous, and researchers have long since abandoned this "good old-fashioned AI" approach. These days the focus is much more on machine learning, where the computer learns rules and patterns on its own. This often takes the form of simulated artificial neural networks, which operate somewhat like the neurons in a real brain.
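As a minimal illustration of that learning point (my own toy sketch, nothing to do with any production system): a perceptron is never handed the AND rule; it infers a weight-based "rulebook" from labelled examples alone.

```python
# Perceptron learning AND from examples, never from a hand-written rule
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0

for _ in range(20):                    # a few passes over the data
    for (x1, x2), target in examples:
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = target - out
        w[0] += 0.1 * err * x1         # nudge weights toward the right answer
        w[1] += 0.1 * err * x2
        bias += 0.1 * err

print(w, bias)  # a learned "rulebook" that nobody wrote by hand
```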
@vlayneberry578
January 8, 2026 at 1:41 am
This is interesting, but also largely based on old models. Part of that is that ML moves really fast and there have been major advancements in AI in the last 5 years; another part is that this looks only at currently built technologies and doesn't consider the possibilities AI researchers are exploring nearly as much. I think it would be interesting to go through the arguments posited and respond to each one in terms of possibilities for AI; I also think different arguments from different parts of this video are referring to different types of AI. But it's a lot more than I can put in a youtube comment, and more than anyone will bother to read, lol. I might try to make a response video or something of the sort sometime soon.
@kiranduggirala2786
January 8, 2026 at 1:41 am
In case anyone's watching this in the future, I think there's an important caveat to add to Searle's Chinese Room. The Room is operating on the old framework of AI, where you program in questions and rules and use them to spit out answers. In that case, it would be fair to point out that the intelligence would likely be a function of the programmer and not the room. However, AI no longer works this way. Modern neural networks are given questions and answers and use them to deduce rules (if you want to get into the math of it, I highly recommend the video series by 3Blue1Brown). What this means is that they are able to generalize: a network can take a question it's never seen before and give you an answer (something the person in the Chinese Room could never do). I think it is this property that allows us to say that the neural network is in some respects intelligent, because it is not just following a preset rulebook.
I should note that my use of "question" and "answer", as well as my claim that neural nets can "generalize", should be taken with a grain of salt, since the technology is not exactly there yet. First off, most applications are not in the realm of asking questions per se, but rather classification problems (the most famous example uses the MNIST database to interpret handwritten digits). This is where neural nets do extremely well, since there are certain bounds on the problem that allow them to generalize to any given handwritten digit. In the vaguer sense of asking questions, however, neural networks struggle to generalize beyond a certain extent. Despite the current limitations of the technology, I think this kind of generalizing AI better resembles the argument that the room itself is thinking, since its ability to generalize is an emergent property of our mathematical models.
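If you want to see the generalization point in a few lines, here's a sketch using scikit-learn's small built-in digits dataset (a stand-in for MNIST; the layer size and iteration count are arbitrary choices of mine, not a recipe): the trained network answers for handwritten digits it has never seen, which no fixed lookup table could do.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_train, y_train)         # deduce the "rules" from worked examples
print(net.score(X_test, y_test))  # accuracy on questions it has never seen
```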
@codehorse8843
January 8, 2026 at 1:41 am
As far as modern science can tell, there's no significant functional difference between an organic neuron and a mechanical one. We've made simple circuits with organic neurons, and we've reproduced organic-style learning with neural networks. As someone who is studying software engineering and planning to work in software development, I can tell you that most of the people who argue that a mechanical brain can't be conscious are not familiar with modern software and hardware advancements.
To me the question is not whether it is possible to create mechanical consciousness, but whether we should. Frankly, I can't see a single significant reason to do so. Any proposed use of such technology could be handled by a simpler and less morally ambiguous neural network. If we create consciousness, it will only be for sadistic or selfish reasons, such as endless simulated torture, or showing it off in some museum: "look at this, isn't this a fine specimen!" And I think most people agree that neither of those things is good in any sense of the word.
@saumitragautam8333
January 8, 2026 at 1:41 am
Although I love AI and everything about it, I think that humans should start thinking for themselves rather than chasing a machine that thinks. It's quite a danger: if we manage to create such a machine, then we are most likely gonna give up on thinking, because there will be a machine thinking for us, and in the process we are gonna become even more inactive than we are now. Technology has already made us physically inactive to a large extent. Let's just hope that we get to keep our heads about us after all.
@diablominero
January 8, 2026 at 1:41 am
I had a vague memory that someone British had done an interesting presentation on the Chinese Room argument, and guessed it might be Computerphile, so I searched YouTube for "computerphile chinese room", and it figured out that I was thinking of this video even though I didn't know myself. Either the search engine is rapidly approaching actual thought, or it's literally magic.
@billames4367
January 8, 2026 at 1:41 am
What is thinking? It is the mind trying to rationally find the solution to a problem. Many things interfere with the meat computer's proper (rational) operation: genetic defects, degradation of connections due to age or lack of blood supply, drugs of many types, a very bad life (no opportunity, or bad choices), sickness of body or mind resulting in a poorly functioning or relatively unused mind. Opportunities to think (to solve a problem) can be impeded when a problem is not recognized as such. Accepting a bad situation may be a lot easier than fixing it (thinking about how to solve it).
@billames4367
January 8, 2026 at 1:41 am
It seems that thinking begins when a mind needs to solve a problem not previously solved. A major task for the programmer is to organize acts of problem-solving as single entities, to eliminate duplication and support subsequent combining (as when thinking). In young children, a problem solved is, for example, getting food from a bowl to the mouth using something other than fingers, or drinking from a glass without spilling it all over themselves. A first grader will have solved a lot of problems. By the time the child is thinking, combining solutions, infinitely more.
@billames4367
January 8, 2026 at 1:41 am
If you have succeeded in making your first-grader AI work, you were lucky. Here is the problem: a first grader does not think, they process data. If given a problem, they search through their memory for previous solutions to it. When you ask them a question, it will not be a difficult one; you do not expect them to know much. Your first-grader AI is not going to take over the world. You have a long way to go, especially when you realize that to move on to grade two you need to add data for all the things this mind would have experienced during its year in first grade, expanded on as it entered second grade. Ask yourself this question: at what age, at what grade level, do you expect these child AIs to be able to think? When do real children learn to think?
@billames4367
January 8, 2026 at 1:41 am
So, you have your first-grade AI programmed? What did you do for the following: gender, race, physical makeup, personality type, family (siblings?), pets, responsibilities, social status, home life, intelligence, traumatic life events so far, skills? All the other children have these parameters and have been engaged with them for 5 or 6 years. They have limited life experience and probably do not remember much detail, but they were selectively programmed by environment and situation. If there are 30 in the class, they probably represent many combinations, and even though they are different from each other they do not yet know how different they are.
@billames4367
January 8, 2026 at 1:41 am
Here is how to start developing a real AI. Have a remote classroom of first-grade students. They see each other's photos on their screens, including their own (not live). They can message each other by typing below an image, and only the two students plus the teacher see it. If they are responding to the teacher, everyone sees what they type. The teacher teaches the class as appropriate for first grade. Breaks are scheduled hourly. Your AI will be just another kid on screen.
@billames4367
January 8, 2026 at 1:41 am
How do humans think? When do we think? What do we think? Why do we think? Now replace "humans" above with "AI" and record your new ideas. Do you think with words? Sentences? Images? Scenes? Can anyone describe how we think with this meat computer and subroutines run by a QED-based processor? Until we know ourselves well enough, AI of any real kind is a wish.
@billames4367
January 8, 2026 at 1:41 am
If you want an AI that is as competent as a human, then you need to provide it with sensors, needs, memory, processing capability, and 40 or so years of experience. That chess program is not AI; it cannot play "mind games" with its opponent. Perhaps it should be called Alternative Intelligence? I would be very interested in the dreams your AI has once it reaches this level of completeness.
@billames4367
January 8, 2026 at 1:41 am
Having watched the video here on AI, I was motivated to subscribe. Hearing that we are not sure how the human mind works got me thinking about an answer. As humans have a functioning mind at their disposal, we can experiment. Some humans dream a lot; I do so daily, every day. I often wonder what part of the mind is doing "the thinking" behind those dreams. I have had ideas about that, and I will give this place my attention. This does not seem to be a difficult question; I have not yet given it my full attention.
@alexare_
January 8, 2026 at 1:41 am
I know it's been a while, but I would love to hear a discussion between you and Robert Miles, a YouTuber with a channel on AI safety. I think it would be very interesting to see what thoughts each of your respective areas of expertise inspires in the other.
@tessarnold7597
January 8, 2026 at 1:41 am
Two things. 1) The idea of the mind being like a computer is, like most mind metaphors, completely backwards. We created computers to be an amplified aspect of the human mind. So the comparison is a lot like: "I built this chair to do the job my legs do, only better. So, how are my legs meaningfully like this chair?"
2) The problem with the Chinese Room thought experiment is that it assumes the person inside the room has a definite separateness, in that the person alone could never learn the language even if they wanted to. But the person in the room is only part of the whole system. The book of precise rules is the other part, insofar as processing the language goes. It's a polar system: either one, without the other, would be incomplete. There is an input, the rulebook does the processing, and the man producing symbols is the output. All three are inextricably linked to one another. It is a mistake to think we can always separate things into discrete categories; some systems cannot be understood that way, and consciousness, or general artificial intelligence in this case, is really one of those systems.
@gepisar
January 8, 2026 at 1:41 am
7:50 – just wanted to ask for a bit of clarification here… "Predict the weather"? Or make a forecast? Philosophically, I'm guessing, these are not the same?
@kfjw
January 8, 2026 at 1:41 am
I think a very important missing component here is a discussion of emergent phenomena. Many of the AI models currently in use do not operate on easily quantifiable rules. That may sound like a contradiction, given that they are implemented on computers, which operate entirely through explicitly quantified rules.
What I mean is that high-level descriptions of the system's behavior aren't easily quantifiable. When a computer vision system recognizes an object in a picture, there's no code you can point to that explains why it made that decision in terms that can be understood on any high level. Instead, it's just the way the data flowed through the neural net or decision tree or whatever.
I would really like to see the Tube Man do a video on emergent phenomena, and possibly chaos theory as well. This would cover two of the most important ways that deterministic systems can produce unexpected or unpredictable outcomes.
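To make the emergence point concrete, here's a toy sketch (my own example): Conway's Game of Life has a handful of explicit low-level rules, yet the "glider" below is a high-level moving pattern that is written nowhere in those rules.

```python
# Game of Life: cells is a set of live (x, y) coordinates
def step(cells):
    neighbours = lambda x, y: sum((nx, ny) in cells
                                  for nx in (x - 1, x, x + 1)
                                  for ny in (y - 1, y, y + 1)
                                  if (nx, ny) != (x, y))
    # only live cells and their neighbours can be alive next generation
    candidates = {(x + dx, y + dy) for x, y in cells
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
    return {(x, y) for x, y in candidates
            if neighbours(x, y) == 3
            or ((x, y) in cells and neighbours(x, y) == 2)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):        # a glider repeats its shape every 4 generations
    glider = step(glider)
print(sorted(glider))     # same shape, shifted by (1, 1): it "travels"
```

Nothing in `step` mentions gliders or motion; "the glider moves diagonally" is a description that only exists at a higher level than the rules, which is exactly what I mean by an emergent description.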
@FrankCirillo94
January 8, 2026 at 1:41 am
Do an update on this video, Olly!
@ATMOSK1234
January 8, 2026 at 1:41 am
"The weather is not controlled by an algorithm"
My weather dominator disagrees.
@Pfhorrest
January 8, 2026 at 1:41 am
The qualia problem goes away completely if you just adopt pan-proto-experientialism. Everything "has qualia", has a subjective first-person experience of what it's like to be that thing, but the quality of that experience varies in accordance with the function of the thing. A thing that functions a lot like a human brain will have a subjective first-person experience, and "qualia", a lot like a human's. Things less and less like humans will have less and less similar experiences, but still some kind of experience. Even a tree, a rock, or an electron has a first-person experience, but their functionality is so much simpler and so unlike human experience that there really isn't anything of interest to say about them. But they're technically there, and our experiences are built up out of complex networks of those kinds of simpler experiences, in the same way that our behaviors are built up out of complex networks of the simpler behaviors of the things we're made of.
@Pfhorrest
January 8, 2026 at 1:41 am
Searle is right that syntax doesn't equal semantics, but that's because semantics is about relating symbols to experiential phenomena. The room needs eyes and ears and so on. If the rulebooks in the room were also picture books, and the-cow-says-moo type books, and scratch-n-sniff books, so the symbols could be related to those sensory phenomena, and the man memorized all of the books, then the man WOULD have just learned Chinese. There is a functional difference between Searle's stock room and a native Chinese speaker: you can pass a native Chinese speaker a picture of a duck on a lake and ask "what kind of bird is on the water?" and it will give you an answer, while the man with just symbol-manipulation rules without any reference to images has no way of deciding what answer to spit out. Computer vision and hearing and so on are all about translating sensory experiences into abstract symbols, so there is no reason that a computer with all of the linguistic programs AND computer vision and hearing and so on could not in principle genuinely understand language, and demonstrate it by performing identical functions to those that human Chinese speakers do.
@mmmk1616
January 8, 2026 at 1:41 am
Oh, you deserve so much more attention! More subscribers!
@JesterAzazel
January 8, 2026 at 1:41 am
Emotions are very important for decision-making; there are case studies about it. That could explain the "botched" decision-making process mentioned around 8 minutes in. It's not so much botched as necessary, to keep decision-making from being an overwhelmingly difficult task, as it was for Elliot.
@raqueljacobs1542
January 8, 2026 at 1:41 am
I’m a Christian and a Marxist, and I think this reading works quite well. I also think that if you take this reading to its logical spiritual extreme, it pronounces that, given the choice of all the political theories in existence, the God of the Hebrews would have gladly chosen anarchism. In fact, if you read the Old Testament, he literally states that government isn’t necessary because he prefers to rule his people directly, and by directly ruling those people he means no government. He objected when the Hebrews asked him for a king, reluctantly choosing Saul because he was an OK choice. Saul screws the pooch, so he reinvents the wheel and picks a shepherd to run the country. The shepherd screws up, so God picks a wise man to do the job. The wise man screws up, and the story keeps going the same way. Finally God picks a guy who doesn’t screw up, himself, and they 🤬 kill him‼️ Yeah, that makes sense 😐
@diablominero
January 8, 2026 at 1:41 am
I would argue that there is no meaningful difference between applying a decision-making system that can be perfectly predicted using an algorithm and simply applying the predictive algorithm to the decision. If some dude with super advanced technology were deciding the weather using an algorithm and applying his decision to the sky in a way indistinguishable from natural phenomena, that wouldn't be meaningfully different from what we have now.
Oh, and by the way, Glenn Seaborg actually made a few thousand atoms of gold out of bismuth using a particle accelerator. The alchemists were centuries too early and taking the wrong approach, but their goal was theoretically possible to achieve. Saying AI is like alchemy is essentially conceding that it's possible, but arguing that it's still centuries away and won't be cost-effective.
@larsbarneveld6993
January 8, 2026 at 1:41 am
The background music is really distracting.