Suppose we have already built a robot which behaves exactly like a human being. When you talk to him, you do not even realize that you are talking to a robot. The facial expressions he makes, the way he talks, the way he behaves, all make you feel that he really is a human being!
When you pinch his hand, he tries to protect himself by pulling it away, his face wrinkling, exactly like a human being. This makes you feel that the robot is actually feeling pain. Let us dig further and find out what must, or can, be happening inside such a robot.
Let us take the case of his behavior when his hand is pinched. Since this is a robot which behaves exactly like a human being, there must be touch sensors spread all over his body. When his hand is pinched, some touch sensors register the pressure and send a signal to the operating system installed somewhere in the robot. This signal reaches the operating system in the form of an interrupt. There must be an interrupt vector in the operating system which maps this interrupt to a subroutine that is supposed to execute. As soon as the operating system receives the interrupt, it triggers the corresponding subroutine. The subroutine calculates how much pressure was applied at which location and, correspondingly, what action needs to be taken. This action is either predefined or learned from past training. In this particular case the robot pulls his hand back and makes a wrinkled face. This is all that could be happening inside the robot: an interrupt giving rise to the execution of a program, giving rise to the movement of some limbs. One might say that this is a very naive explanation of the activities going on inside the robot. I am ready to accept any complex processing or model you can bring. Bring data mining algorithms as advanced as you can, artificial neural networks as complex as you can, any kind of "computational" model at all; build a robot which behaves the way I described and give an explanation of what is going on inside it. I am ready to take your version of the robot.
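To make the picture concrete, here is a minimal sketch in Python of the kind of interrupt-to-subroutine dispatch described above. Every name and threshold in it (PINCH_INTERRUPT, handle_pinch, the pressure cutoff) is invented for illustration; a real robot controller would of course be far more involved.

```python
# A minimal, hypothetical sketch of the interrupt-to-subroutine dispatch
# described above. Every name and threshold here is invented for illustration.

PINCH_INTERRUPT = 0x21  # hypothetical interrupt number for a touch event

def handle_pinch(pressure, location):
    """Compute a response from pressure and location, as described above."""
    actions = []
    if pressure > 5.0:  # arbitrary threshold for a "painful" pinch
        actions = ["retract_arm", "wrinkle_face"]
    return actions

# The "interrupt vector": a table mapping interrupt numbers to subroutines.
INTERRUPT_VECTOR = {PINCH_INTERRUPT: handle_pinch}

def on_interrupt(irq, pressure, location):
    # The OS looks up the handler for this interrupt and triggers it.
    handler = INTERRUPT_VECTOR.get(irq)
    return handler(pressure, location) if handler else []

# A touch sensor fires: the pinch is mapped to predefined limb movements.
print(on_interrupt(PINCH_INTERRUPT, pressure=7.2, location="left_hand"))
# -> ['retract_arm', 'wrinkle_face']
```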
The question which arises here is: when this robot is pinched on his hand, does it actually feel any kind of "pain"?
I say a strict no. If you say yes, then you must explain how and where. I cannot see any place in the robot, either in hardware or in software, which could be responsible for the "feeling of pain". What we have done in the above example is build a robot which is "behaviorally" the same as a human being, but can a robot be "intentionally" the same as a human being? We can simulate the "expression" of pain, but can we simulate the "feeling of pain" itself?
There may be a lot of processing happening inside the robot, and it may be as complex as you can imagine, but processing does not imply that there is some feeling inside it. If processing implied feeling, then our computers, tube lights, fans, geysers, and complex devices of every kind would start "feeling" different kinds of things, but we see no manifestation of this.
The implication is that no kind of "computational" model is sufficient for realizing "feelings". One can simulate the behavior or expression of a feeling, but realizing a feeling is an entirely different matter from simulating it.
This thought experiment also negates the possibility of Strong AI. Strong AI presumes that a machine can realize all human phenomena. In the above thought experiment we see that this is not possible, which shows that Strong AI is wrong. A robot is nothing but a feelingless mechanical device. It is not that feelings in a machine are partial or present to some percentage; they are absolutely zero. Nor is it that with the advancement of science and technology we will someday be able to develop a robot which realizes all human phenomena. A robot can "never" feel pain or anything else, no matter how advanced science and technology become. It is "in principle" not possible for a robot to feel pain.
I know this will raise many questions and arguments in readers' minds, and I request them to comment on this blog. I suggest that readers limit themselves to the machine while discussing the issue, rather than moving on to a human being. Believe me, discussing a machine is easier than discussing a human being.
Please read the following article, which I wrote later, to understand the concept more deeply:
Anything that can be understood can be programmed. But since we do not understand how the brain is able to manifest intentions and feelings, it is very unlikely that we will be able to program a robot to manifest intentions or feelings.
Given the above argument, to say that Strong AI is impossible is to assume that we will never be able to understand how the brain does what it does. It may not happen now, but the future is unknown territory.
Humans have undergone several million years of evolution, whereas computer science has evolved many times over in a history of less than 100 years. Given sufficient time, Strong AI is a possibility.
Thanks for the comment. I appreciate it.
In the post my main focus was to negate the possibility "in principle", in the sense that it is never possible, independent of time, on any kind of advanced computational model. I will try to respond to your argument here.
“Anything that can be understood can be programmed.”
We can always simulate the behavior, as I already said. We can simulate the feeling of pain behaviorally, but how do we simulate the "feeling of pain" itself, which one experiences? Which unit in any kind of robot will feel the pain? The operating system? The CPU? The memory? The flow of signals? Which one? I do not see any!
It is not sufficient to say that something will happen in science and technology in the future which will do this for us. What is the basis of this argument?
It is not true that anything which can be understood can be programmed. I will give you some more examples.
Suppose we simulate rain: will it make you or me wet, or any computer wet? Yet we understand rain.
Suppose we simulate gravity: will it start attracting objects in the real world? We understand gravity.
Kindly also read the Chinese Room argument, which I have mentioned at the end.
There is a difference between simulation and reality. Simulation is not reality; it may be just a depiction of it.
If we simulate rain using an appropriate machine, it will surely make me wet (e.g., cloud seeding is a simulation of traditional rain). But the question of whether the cloud seeding device understood rain remains.
Simulation can sometimes be real with understanding and sometimes without understanding.
Another perspective is to ask how hard this task is even for humans. The feeling of pain is highly subjective. Imagine an Indian doctor treating the pain of an English woman with the assistance of an African nurse. There are many more examples.
Thanks for your comment.
I do not argue about the subjectivity part of it. I question the "mere existence" of the "feeling of pain" at the level of the robot. While talking about humans we make a lot of assumptions, which is why I requested in the blog that we limit ourselves to the robot.
With Regards,
Devansh
If a tube-light, a fan, a computer, or any other thing were feeling pain, how would you know about it?
The question of whether they do or do not feel pain is irrelevant, since you have no means to detect it.
And why should you define pain the way you feel it? If some entity feels a different kind of pain which is outside our understanding, would it be fair for us to say they don't feel pain?
For example, it is widely understood that plants feel pain. I am sure the way they feel is different from the way we feel, but that doesn't mean that the feeling itself is nonexistent.
Thanks for your comment. I will try to respond to it.
Actually, pain here is just an example; I am talking about any kind of "feeling". My argument is that a robot cannot have any kind of "feeling". It is not about this kind of feeling or that kind of feeling; it is about the existence of feeling itself. The reason I mentioned in the post that the reader should try to limit himself to the case of the robot is that in the case of the robot we can agree upon certain things and talk about them, since we know about it. Kindly try to stay at the level of the robot. We will talk about other things as well, but separately.
I say that in a robot there cannot be any feeling. If you say there can be, then kindly explain how and where. Here I must mention that we have to agree upon the premise that "a material atom cannot 'feel' anything", and we are also agreeing upon a certain definition of 'feeling'. If we do not agree upon this assumption, then we cannot talk. If we do agree upon it, then kindly explain how and where pain exists in the case of a robot. If we can settle the case of the robot, then the issues related to tube-lights and other devices get answered automatically.
With Regards,
Devansh
Looks like the post came after *the* small discussion we had on the same topic. Keep going!
Actually, the idea of writing blogs on these issues had been in my mind for a long time. It is just that I did it now, and it also happened to be just after I talked to you. It is a matter of coincidence.
I think this is the first comment from you on my blog. Keep visiting and commenting.
Requesting that the responses limit themselves to the robot misses the key point here: The author has not shown how the questions he posits can be answered when the individual under consideration is a human being.
The author is trying to shift the burden of proof onto the apologist for strong AI.
For words like pain, conscious and understand to be meaningful, we must have descriptions of them (possibly not complete descriptions) that are consistent with members of homo sapiens (humans) experiencing them.
So the author first needs to investigate pain in homo sapiens. To convince us, he must establish several properties P, Q, R … of pain that satisfy the following:
1) P, Q, R hold for a human experiencing pain.
2) P, Q, R could not be experienced by software running on a computer, irrespective of the complexity of the software, the power of the computer and the computer’s architecture (probabilistic, serial, parallel etc.).
3) P, Q, R are non-arbitrary.
I don’t think that any such description of pain is possible and so I am highly skeptical of the author’s claim that “robots could not feel pain”.
Furthermore I would point out, that for there to exist such P, Q, R and if P, Q & R are empirically observable, then the laws of nature must not be Turing computable (even if we allow for randomness). This would be a massive claim that amounts to a revolution in physics. However, if the author allows the properties to be not empirically observable then he cannot justify why he believes other people experience pain rather than just himself.
The author saying that rain in a simulation does not make things wet because he doesn't get wet from the rain in the simulation (not being inside the simulation) is like me saying that rain in New York doesn't make things wet because I do not get wet when it rains in New York (I live in Europe). It's completely irrelevant. I would argue that (in a sufficiently detailed simulation) beings inside the rain simulation would get wet. I should note that a sufficiently detailed simulation is way beyond our wildest dreams in terms of computing power.
Incidentally I have read the Chinese room argument and agree with the systems reply.
Thanks for the comment, Barnaby. I am very happy to see it. Your comment addresses the crux of the issue, which the other comments did not. I will try to respond to it.
I will start from the ending of your comment.
“Incidentally I have read the Chinese room argument and agree with the systems reply”
I disagree with the systems reply, and I would give the same reason John Searle gives: it is absurd to say that a set of rule books, input, output and a human being inside a room together understand Chinese. I would like to hear some counterarguments from you in reply to the systems reply.
I think this is the crux of the whole issue, and if we do not agree on this, then we cannot talk about the other issues.
“simulation does not make things wet because he doesn’t get wet from the rain in the simulation (not being inside the simulation)”
Me being in the simulation reminds me of the movie The Matrix. The argument which comes here is: if I am in the simulation like Neo, then how is the "feeling of wetness" getting generated in the simulation, and who is feeling it? You might give an argument similar to the systems reply, and I will again ask for a counterargument.
“Furthermore I would point out, that for there to exist such P, Q, R and if P, Q & R are empirically observable, then the laws of nature must not be Turing computable (even if we allow for randomness).”
You got it right. Pain can only be observed by first-person experience. It is not empirically observable, and I never said that the laws of nature are Turing realizable. My claim is that the laws of nature are NOT Turing realizable. It is the same argument about The Matrix I gave above. This boils down to the debate on the difference between simulation and reality. You might say whatever we observe in reality can be simulated; I say it cannot be. It starts from our disagreement on the Chinese Room argument itself, for which I seek your counterargument.
“The author is trying to shift the burden of proof onto the apologist for strong AI.”
You got it right, this is what I am trying to do. Thanks for saying this so explicitly.
I did it intentionally, so as not to talk about humans, since in the case of a human being we assume several things which in the case of a robot we do not assume much. We can talk more easily about a robot than about a human being, where we both might lack knowledge or disagree on the basic assumptions themselves and have no way to resolve our differences. I hope you understand, and I request that we kindly limit ourselves to the robot only. Let us first talk about this, and then we will come to the human being if we agree in the case of the robot.
I really appreciate your comment. It made the discussion richer.
Kindly keep visiting and commenting. I would like to know more about you.
With Regards,
Devansh
Systems reply: Searle claims that "it's just ridiculous to say that while [the] person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might".
There are two points to make here: Firstly, Searle is merely asserting his claim in a forceful manner. I happen to think Searle's argument is obviously ridiculous, but just asserting this does not advance my counterargument!
Secondly, Searle's point is very misleading. Based on an understanding of the human brain we can estimate the amount of calculation that it performs every second and its memory storage capacity, and use these facts to estimate what resources an AI (or the person in the Chinese room) would probably require. In the human brain there are about 100 trillion synapses firing around 200 times a second (these figures are rough; many sources disagree significantly). So it is reasonable to expect a (human-level) AI to require about 20,000 trillion operations a second. Assuming that the person in the Chinese room can perform 1 operation a second (which is unrealistically fast), it would still take them nineteen billion years (i.e. more than the age of the universe) to simulate half a minute of the AI's thoughts. Likewise, the memory storage demands of an AI are likely to be more than 100 terabytes (at least a byte per synapse), which is probably greater than the textual data content of the British Library. So describing the Chinese room as a person and bits of paper misses important details about the scale of what is being imagined and casts significant doubt on the reliability of our everyday intuitions regarding the thought experiment.
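The arithmetic behind these estimates is easy to check. A small sketch, using only the rough figures quoted above (which, as noted, many sources dispute):

```python
# Back-of-envelope check of the rough figures quoted above.
synapses = 100e12        # ~100 trillion synapses
firing_rate = 200        # ~200 firings per second
ops_per_sec = synapses * firing_rate
print(f"{ops_per_sec:.0e} operations/second")   # 2e+16 = 20,000 trillion

# The person in the room, at 1 operation/second, simulating 30 s of thought:
years = ops_per_sec * 30 / (365.25 * 24 * 3600)
print(f"{years:.1e} years")                     # ~1.9e+10: ~19 billion years

# Memory: at least a byte per synapse is ~100 terabytes; compare with the
# pi record of 68,000 decimal digits (~3.32 bits, i.e. ~0.415 bytes, each).
print(synapses * 1 / (68_000 * 0.415))          # ~3.5e+9: the "3 billion" factor
```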
Now Searle also makes a more serious attempt to rebut the systems reply:
“let the individual internalize all . . . of the system” by memorizing the rules and script and doing the lookups and other operations in their head. “All the same,” Searle maintains, “he understands nothing of the Chinese, and . . . neither does the system, because there isn’t anything in the system that isn’t in him. If he doesn’t understand then there is no way the system could understand because the system is just part of him”
Searle makes crucial mistakes here. He assumes that the mind is indivisible and that more than one mind cannot occupy the same brain. But this is begging the question, as the position he attacks (strong AI) implies that more than one mind can exist in the same substrate. This isn't considered a problem by apologists for strong AI.
It's also worth pointing out that the human brain is almost certainly not capable of memorising enough data to simulate an AI. The world record for digits of pi memorised is 68,000 digits (base 10). This would have to be surpassed by a factor of more than 3 billion to come close to our memory estimate for an AI. So although most people don't seem to have fully fledged secondary personalities coexisting with theirs in their brains, I challenge anyone to convince me that this couldn't occur in a being possessing a brain more than 3 billion times the size of the human brain!
So to sum up: Searle assumes that one brain has one mind but gives us no reason to think that one brain with two minds is logically impossible. Furthermore, our everyday intuitions are so out of their depth, given the scales imagined in the thought experiment, that we should be very skeptical of it.
Totally agree with you.. in fact I never had a doubt about it..
Thanks Piyush. Keep reading and commenting 🙂
You probably do not realize that the issue I raised in this post is not as obvious to many great scientists as it is to you 🙂
@Barnaby Dawson:
Thanks for the comment again.
I really doubt whether you have understood the Chinese Room argument correctly, or, I should say, what Searle really wants to say. Searle, in my opinion, is trying to say something entirely different from what you understood.
I will mention the key points here and request you to kindly read the following links:
http://www.mind.ilstu.edu/curriculum/searle_chinese_room/searle_chinese_room.php?modGUI=203&compGUI=1863&itemGUI=3256
http://www.bbsonline.org/documents/a/00/00/04/84/bbs00000484-00/bbs.searle2.html
Some points:
1. Syntax is not sufficient for semantics.
2. A program is nothing more than a symbol manipulator. How can some input symbols, a program and some output symbols understand Chinese? Understanding is associated with "meanings"; a representation is different from what is being represented.
3. Whatever happens in a machine is nothing more than symbol crunching, which cannot ensure understanding.
Kindly go through the links I mentioned. In my opinion you have not understood the Chinese Room argument.
With Regards,
Devansh
Devansh: I have read both of these articles before and dozens of other articles on the subject. I am fairly confident that I understand the original thought experiment. I should mention I have training in mathematical logic and my PhD thesis was closely related to topics in philosophy. Furthermore this is a question I have considered in great depth. Of course, I may have missed a subtlety, but I’m afraid you’ll need to point it out in that case, as I’m unlikely to find it just by rereading these articles.
Your point 1 is part of Searle’s attempt at formalizing his argument which I have also read. Here is my response to that point:
If we are meant to understand semantics in the sense of mathematical logic (where such arguments might be made) then Searle must show that the human mind has semantics. This he has not shown.
If we are meant to understand semantics in the everyday sense of the word (i.e. ‘understanding’ in natural language) then Searle has not demonstrated that syntax does not suffice for semantics.
The other two points essentially depend on the first point. Whilst computers are just symbol processors, and this does put limits on what they can do, we have no reason to think that the human brain is not also just a symbol processor (albeit with a lot of random noise added). This would imply limits on what the human brain could do. But it's not enough to assert that such limits don't exist. Evidence is required, and Searle does not provide it.
A couple of points to consider regarding your response to my comment: Asking someone to read two long articles before replying to a point could be seen as an attempt to shut down debate (by demanding exceptional extra effort from that person). Whilst this might be appropriate if an individual showed a clear lack of understanding of the fundamentals of the argument, that is not the case here.
@Barnaby Dawson:
Thanks for the comment again. I am happy to know more about you, and it is my pleasure to have a discussion with you. Kindly do not consider my request to read those two articles as an attempt to shut down the debate, though it may seem so; I do not intend that, and I apologize if it came across that way. I must also say I appreciate the humility in your arguments.
I will try to respond to your arguments:
“If we are meant to understand semantics in the sense of mathematical logic (where such arguments might be made) then Searle must show that the human mind has semantics. This he has not shown.”
In my opinion he has shown this. Suppose the word "Tree" appears in a program, and the program is supposed to manipulate this word and produce some output. The program does not know which reality in the world the word "Tree" refers to, but a human being does "understand" the "meaning" of the word "Tree". It is in this sense that the words understanding, meaning and semantics are used.
“If we are meant to understand semantics in the everyday sense of the word (i.e. ‘understanding’ in natural language) then Searle has not demonstrated that syntax does not suffice for semantics.”
I think Searle has answered all these arguments, maybe implicitly. A representation and what is being represented are two different things. That is where the argument about syntax and semantics comes in. A program does not know the associated reality in the world, but a human being does!
"The other two points essentially depend on the first point. Whilst computers are just symbol processors, and this does put limits on what they can do, we have no reason to think that the human brain is not also just a symbol processor (albeit with a lot of random noise added). This would imply limits on what the human brain could do. But it's not enough to assert that such limits don't exist. Evidence is required, and Searle does not provide it."
I think the reply to this is covered in the above two responses.
Kindly answer the following questions:
1. Does the word "semantics" have any existence for you?
If yes, then kindly answer the following two questions:
2. What does semantics mean to you?
3. Could you kindly explain to me how we could put semantics down in syntax?
With Regards,
Devansh
PS: My English is not very good. In case anything comes across as disrespectful, kindly forgive me.
For me semantics has two meanings. Firstly, in mathematical logic there is a precise definition of semantics, but the human mind almost certainly has no access to this type of semantics. I base this viewpoint on the assumption that no new (non-Turing-computable) physics is required to explain the functioning of the human brain.
Secondly, in everyday language 'understanding' is a soft concept, like the words life, complex and easy. None of these concepts can be rigorously defined, as they are used in so many different ways depending on context. I also think that the truth value (not binary) of 'X understands Y' is relative to the observer. This is critical because it disarms Searle's arguments regarding rocks having mental states.
I prefer the second usage of the word as I think we want to apply semantics to human languages!
To answer 3 in detail would essentially be to design a human-level AI, which is beyond the abilities of our best minds at this stage.
However, I would say that semantics should be modelled as syntax by a program that models the world in great depth by searching for patterns and correlations in its input data from the world (as well as testing assumptions of the model through physical behaviour). When new data comes in, the program could then represent this new data relative to this model. A sufficiently detailed and carefully balanced model would then BE understanding. However, building such a model is not possible with today's computer hardware, and we also lack sufficiently sophisticated programming techniques to implement this idea.
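As a toy illustration of what "searching for patterns and correlations in its input data" could mean at the very smallest scale, here is a sketch; it is purely illustrative, makes no claim about real understanding, and the comment above already concedes that a serious implementation is beyond current hardware and techniques.

```python
# Toy sketch: "model" a tiny world by counting word co-occurrences, then
# represent a word relative to that model. Purely illustrative.
from collections import defaultdict

corpus = [
    "rain makes the ground wet",
    "rain falls from dark clouds",
    "the tree grows in wet ground",
]

cooccur = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for w in words:
        for v in words:
            if w != v:
                cooccur[w][v] += 1

# "Represent" the word 'rain' relative to the learned correlations:
print(sorted(cooccur["rain"].items(), key=lambda kv: -kv[1]))
```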
If you want to read more about this viewpoint you could read Dennett or Hofstadter. “I am a strange loop” by Hofstadter is particularly good.
@ Barnaby Dawson:
Thanks.
I think we disagree on the basic assumption and understanding of Searle’s argument itself.
The basic assumption you have made is that "reality, or whatever is or is perceived, is Turing computable or Turing realizable". I disagree with this assumption itself. In my opinion Searle argues against this very assumption, and I find Searle's argument quite convincing; in fact, in my opinion, it is irrefutable. Anyway, I do not intend to force my opinion on you, and I never could. You can consider this my opinion.
I will definitely read more on your point of view, and I will go through the book/article you suggested. Thanks for that.
I must also mention that, in my opinion, it requires some introspection on an individual's part to understand the meaning of "meaning", and what a representation is versus what is being represented. In my opinion this introspection can help a lot in understanding that meaning is non-representable. When I talk about introspection you may call it non-scientific, but forgive me for saying that, in my opinion, this is how it is. Again, kindly consider this my opinion.
Kindly be in touch.
With Regards,
Devansh
A few points:
1) I am assuming that the physics relevant to brain functioning is Turing computable, NOT that the whole of physics is Turing computable.
2) If my assumption in (1) is false, then this implies substantial new physics. Any such new physics would be a revolution in science greater than Einstein and Darwin combined. Furthermore, for it to support your position, the physics needs to be relevant to the operation of the brain. Whilst it is not impossible that you are right, the burden of proof clearly lies with your side of the argument.
3) Introspection has long been known (for centuries, indeed) to be inherently unreliable and should not be used as evidence of anything important, let alone granted higher epistemic status than the most precise and rigorous science we have, namely physics.
4) The Chinese room argument is heavily contested and has been described as a "religious diatribe against AI, masquerading as a serious scientific argument". As I've outlined above, the Chinese room argument depends on basic flaws in reasoning. Many very intelligent people share my view that the Chinese room argument shows nothing.
Point 4 doesn't mean, of course, that there are no other possible arguments against strong AI. However, independent of this, I don't see any other convincing arguments. In my opinion strong AI is almost certainly correct, with the residual uncertainty entirely due to the uncertainty in point (1) alone.
@Barnaby Dawson:
Thanks for the discussion. I will probably be repeating similar arguments in different words; I will just add short responses to your comments:
1. The reason I wanted to limit the dialog to a machine or computational model is to put the burden of proof on the proponent of strong AI. I am not interested in discussing the human mind at this stage. I can surely say that a computational model is insufficient for semantics, independent of time, and Searle's argument proves it, as I said before.
2. I totally agree that introspection has been considered a poor means to discuss or build a theory. This is why I mentioned it as my personal opinion rather than a scientific proposition. You can take it as my personal opinion rather than an argument.
The reason I referred to "introspection" in my last comment is that it sometimes might help. In fact, the entire scientific study of consciousness these days relies heavily on first-person experience. Here I also agree with your point that it is unreliable. In my opinion there is a need to find ways of getting some reliability out of unreliable means.
3. “religious diatribe against AI, masquerading as a serious scientific argument”. Many very intelligent people share my view that the Chinese room argument shows nothing.
I would say that what you have mentioned in this point is your opinion and the opinion of the very intelligent people you mention.
It doesn't really matter if very intelligent people share your view. I can show you many very intelligent people who find Searle's arguments irrefutable, and I would not say that Searle's argument is true on the basis of their authority. What I am saying is based on my own verification and authority. I find Searle's argument irrefutable. You may disagree, and that is totally fine.
4. I find substantial flaws in the presumptions of Strong AI.
As I said in my last comment, we disagree on the basic premises and on the understanding of Searle's argument itself. Unless there is agreement, we cannot reach a common conclusion.
With Regards,
Devansh
@Dawson: Thanks, the discussion made an interesting read. One doubt: can you elaborate on
"P, Q, R and if P, Q & R are empirically observable, then the laws of nature must not be Turing computable"?
I didn't understand how you arrive at that claim.
@devansh: I would like to add a few small points.
1. You said:
"Suppose the word "Tree" appears in a program, and the program is supposed to manipulate this word and produce some output. The program does not know which reality in the world the word "Tree" refers to, but a human being does "understand" the "meaning" of the word "Tree"."
Imagine a person who is blind from birth and has never seen a tree. Since he does not really understand what a tree is (assume he has never even touched one), he will form an idea of a tree if somebody explains it to him. That description might or might not match reality as you perceive it. Furthermore, imagine a person who perceives the whole world very differently from you (there are such people; for example, I read about a man who used to see equations dance and then magically form the solution). What I am driving at is that the representation of a tree in a computer might be just a string, which is, let us say, how the computer perceives reality. You should not expect its "reality" to match your reality, because the hardware and input systems are completely different.
2. The argument you propose is not a new one. Please read about the "philosophical zombie"; there are well-established arguments against it, notably:
"Nigel Thomas argues that the zombie concept is inherently self-contradictory: Because zombies, ex hypothesi, behave just like regular humans, they will claim to be conscious. Thomas argues that any construal of this claim (that is, whether it is taken to be true, false, or neither true nor false) inevitably entails either a contradiction or a manifest absurdity."
@Ojasvi:
Thanks for your comment 🙂
1. It is not really about the example of the "Tree". It is about the concept of "meaning". There is no concept of "meaning", or pain, or feelings for a machine. If you feel there is, then kindly explain to me how. I have elaborated on the robot in the post; kindly take that example and give your counterargument. Meaning is the association of a word with reality. For a machine there are just words or symbols and no association with reality. In a machine, the "seer", the "person" who would associate the symbol with reality, is missing. You may collect a lot of images, videos or anything else, but they are nothing but symbols to a machine! Who will understand all that data in a machine? A machine just does symbol manipulation, in which there is no understanding. Kindly read the Chinese Room argument mentioned in the post and the comments again if required.
2. We need not go to the philosophical zombie argument. Take a robot who says "I felt pain" when you pinch it and also makes an expression which gives you the impression that it actually did! But saying and expressing do not imply that there was actually some pain. If that robot really felt pain, then tell me how and where. The same goes for your zombie.
With Regards,
Devansh
@Dark Knight: If P, Q and R are observable and the universe is Turing computable, then we can write a computer program that exhibits P, Q and R (we need the properties not to depend on arbitrary labels here). But this contradicts the idea that P, Q and R together rule out an AI experiencing pain, since an AI can satisfy P, Q and R.
Of course one could argue that the human mind must also possess some other ineffable quality. But then that's not going to be observable, and you come up against the problem of other minds.
So essentially, if one is to avoid the problem of other minds and wishes to conclude that AIs could never experience pain, then one must invoke non-Turing-computable physics.
Hi Devansh,
I must say it is a cool thought process, but sorry, I could not follow the concept of the "feeling of pain". Let's approach the question another way: let's define pain itself. What is pain?
Possible answers:
1. Pain is a feeling that we don't like.
If we don't like something, we avoid any kind of interaction with it. This may be the way we humans define pain.
Let me put the robot's point of view. Say two magnets present like poles to each other (North and North): they repel, which can be termed as one magnet feeling pain from the other and pushing it away, or as a fight in which the two don't want to come together. Unlike poles (North and South) attract each other, which can be termed as a relationship between the two.
Each defines things the way it can. The point is that we humans try to force their definitions into our logic, and so we feel confused. But think about this: robots and hardware are in a different world unknown to us, with LEAKAGE OF CURRENT (as a sick feeling), HIGH VOLTAGE (as friendship), ATTRACTION OF POLES (as love), SHORT CIRCUITS, and PROCESSING SPEED (the more it has, the richer it is).
What I am saying is that robots are of a different world; they have their own definitions, their own feelings. It may not be possible to map those feelings to our world with the same expressions. There may be a point at which we can try to teach a robot our reactions, but those would be artificial to it; that is why questions like "does a robot feel pain?" arise.
Thanks for the comment.
I think I have responded to these arguments in some of the comments above. Kindly also read the Chinese Room argument at the link I mentioned at the end.
You can do a thought experiment of building a robot in a lab which has all human phenomena realized in it, and see what difficulties you face. I think then you will probably be able to appreciate the point.
With Regards,
Devansh
Hi Devansh,
Please read the following comment without the bias of being a proponent of Searle's argument. I will be shifting towards the human side of the argument, as it is the human who is the observer (or the human who wants to believe that the robot "felt" pain).
Let us say there is an alien from some other planet. If we pinch this alien and he/she says "Aah, that hurt!" and makes a pained face, would we not believe that this alien "felt" pain? Perhaps if you later found out that this was not an alien but just a robot simulating human behavior, you would say that the robot did not "feel" pain. What I am trying to point out is that believing is on the part of the observer.
Feelings, intentions and understanding are abstract concepts as of now. There is no PROOF in the strictest terms, other than our "understanding", that says they exist.
Can you prove to me that this world is not a simulation as in the movie The Matrix? Can you prove to me that we are not robots? With robots we identify a sense of external control, but a robot with free will cannot be identified as a robot unless we are explicitly told. This point comes directly from the Chinese Room experiment: a human cannot distinguish between a robot and a translator.
To put it briefly: can a human prove that he "felt" pain? It is because we are all humans with similar experiences that we believe it to be true. A family of robots who experience the same "pain" will "feel" it too.
I have read the Chinese Room experiment and the other links that you have posted. I may not be as learned in this area, but saying something is IMPOSSIBLE is a very strong statement. I have tried to understand your point of view, but the basis on which you have put forth your arguments is not foolproof.
@Romanch:
Thanks for your comment. I appreciate your effort in reading Searle's argument. I will try to respond. It is possible that this series of comments will not be sufficient to resolve the issue here; in that case I would suggest a face-to-face talk, which might definitely take us further. Anyway, following are my comments:
1. I am not biased towards Searle's argument, though it may seem so. In my opinion it is just the conviction which has come out of verification, which might look like bias.
2. The reason I did not want to shift to the human side is that in the case of humans we make several assumptions which in the case of a robot we do not make much. In the case of robots we can expect an objective conclusion, but in the case of humans it becomes very subjective. We can talk about humans as well, but I feel that talk will be easier if we resolve the issue of the robot first.
3. As an observer, all I can notice is the behavior of the other person. In fact I cannot say even of a real human being whether he felt pain or not if it doesn't show in his behavior (and even if it does), let alone of an alien. Pain is personal to the person who is feeling it; only that person can report its presence. So you are right in saying that I cannot observe or prove whether another person felt pain. This is the limitation of what we call "proof". Things which are personal to me, like "pain", "understanding", "emotions", "feelings" etc., cannot be proved the way we prove things with scientific methods, but we can observe them within. If pain could be proved that way, the problem of "consciousness" would have been solved long ago. Scientific methods of verification fail in these cases. Pain is a reality which you also know within yourself. You cannot prove it to me. If you can, let me know.
What more I can do is take an fMRI scan of your brain and identify what is going on in it when you are pinched on your hand. But with this fMRI what I get is a lot of data about your brain, NOT the "experience" of "what it is like to be pinched on the hand". Do you see the difference between the two? Biological descriptions of systems like the brain do not explain the "experiences" we are feeling. This is why these are unsolved problems in the science of mind and consciousness.
4. Just because "realities" like understanding, pain etc. are not empirically observable by scientific methods and cannot be proved by them, we cannot say that they are not "realities". Calling something abstract doesn't make it unreal. We observe these realities every moment. Believe me, we do. How can we negate them? Proofs have their limitations.
5. I can at least prove to you that a robot, or more specifically a computational model or a program, is nothing more than a set of inputs, instructions and outputs, which is not sufficient for understanding, meaning, pain, feelings etc. If this is proved true, then it is also proved that we are not robots, and such a Matrix is not possible. Even if such a Matrix were possible, it would have to include "real" human minds to experience it. If we have "real" human minds, then there is no argument about pain, understanding etc. And real human minds cannot be simulated, since they possess the capacity to feel, mean and understand, which a computer program lacks.
6. It might look like a strong statement to call Strong AI impossible, but I would like to make this statement again. In case you feel that Strong AI is possible, then kindly explain to me where and how pain exists in the robot.
I started with the robot since it gives a lot of insight into the issue. If you jump directly to humans, then we miss a lot of detail, which makes it difficult to talk about humans. I am quite willing to talk about humans, but not until the issue of the robot is resolved, because I believe that until then talk about humans will be governed by our preconceived notions.
7. Generally we feel that science can do everything. I think we need to understand the limitations of science as well, which lie in its methodology. Until that understanding of science is there, we keep having faith that it can do anything.
I totally understand that the Chinese Room argument is not easy to understand; I myself did not understand it for a long time. Believe me, the Chinese Room argument is much, much more than what we feel it is at first. I would suggest a face-to-face talk if you are interested. I am generally found in the Humanities lab.
With Regards,
Devansh
I think the article is incomplete. The first question to ask and answer is: what exactly do we mean by "pain" for humans? Are we only talking about physical pain, or also emotional pain? Eventually, all human physical pain must be a response to damage that is happening, or likely to happen, to some part of the body if the activity causing the pain persists. It must be a defence mechanism that helps in our survival.
If you could not feel physical pain, you would do things like poking yourself with a knife without realizing that it could kill you.
Any self-sustaining entity (human or robot) must have a similar defence mechanism built in to guarantee that it does not self-destruct, and also so that it stays away from dangerous situations (basically anything that is likely to cause pain).
So the real question to ask is: are humans just programmed to feel pain, or are there metaphysical overtones? I don't think there are.
For emotional pain I am likely to go with chemical/hormonal imbalances that are dangerous for the body. But I am no expert.
So to sum it up: "Can humans really *feel* pain?"
Keep writing.
One more thing: "Why do we show facial reactions when we feel pain?" The answer is simple: "Why does a baby cry when it is hungry?"
@Nishant:
Thanks for the comment. Good to hear from you after a long time. What are you doing these days?
Following are my responses to your comments:
1. I am considering both kinds of pain, physical and emotional.
2. I have already answered these concerns in some of the comments above.
3. What I am talking about is the "feeling" part; where is this feeling sensed in the case of the robot? What I say is that a computational model is not sufficient for pain. Generally we apply the same computational model to human beings as well.
4. I agree that behaviorally a robot can be the same; that is not a problem at all. Whether it is intentionally the same is the question. The same goes for the child/baby.
Kindly keep the human part aside for a while; we will keep the robot in front for this discussion and come to humans once this issue is resolved. In the case of humans we assume several things, which doesn't lead us anywhere.
With Regards,
Devansh
Hi Devansh,
I am doing well here. I quit Midas a few months back to start my own software company, 'Niamit Tech': part consulting, part product development. The former is going pretty well. How about you?
Sorry, I didn't read all the comments; the thread is just too long.
"We can simulate the "expression" of pain, but can we simulate the "feeling of pain" itself?"
We cannot keep the human part aside, because your article tries to compare human pain with robotic pain. So the obvious first step is to understand what human pain is, and the next is to understand why we feel pain.
In Linux terms I would say robotic pain would be some sort of "kernel panic".
But from your comment I think the real question you are trying to answer has nothing to do with pain; it has to do with consciousness. In that case the article is a little misleading.
@Nishant:
I joined the PhD program at IIIT Hyderabad in the Humanities department. I am doing well and enjoying it.
"In Linux terms I would say robotic pain would be some sort of "kernel panic"."
This is where we are disagreeing. I think if we reach a common conclusion in this line of argument, then we can talk about humans.
I will take your argument: kindly explain to me, where is the pain in the kernel? What is a kernel? Isn't it nothing but a piece of program, with some inputs, outputs and instructions? Where is the pain in the program?
For the time being we can go ahead with the knowledge of the phenomenon of the existence of pain; at least we know that it is there. The why and the how we can keep aside and continue the argument. Once the argument about the program or robot is resolved, we can take the argument further.
You may like to read the Chinese Room argument as well, if you are interested.
With Regards,
Devansh
Hi Devansh,
"I will take your argument: kindly explain to me, where is the pain in the kernel? What is a kernel? Isn't it nothing but a piece of program, with some inputs, outputs and instructions? Where is the pain in the program?"
A kernel is designed not to crash or die without explicit instruction. Let's say a robotic kernel is designed never to shut down on its own.
Let us give a technical name, "kernel pain", to any situation that causes the kernel to crash or die.
Let's say you plug in a USB device that has a problem (my GPRS USB stick), which causes the motherboard/processor to heat up to the extent that the system actually shuts down from overheating after a while.
In practice, when this happens my processor fan automatically runs faster (I can hear it). A very elegantly designed kernel (e.g. my brain) would monitor this temperature (e.g. via my skin) and ask me to remove the USB stick, because otherwise something is likely to go horribly wrong. So technically I would say this is some form of "kernel pain". The "pain" is caused by an overheated processor, but it is "understood" not by the processor but by the kernel, which eventually reacts by asking me to remove the USB device.
So the point is that "pain" is a reaction to a "dangerous situation".
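A minimal sketch of such a "kernel pain" loop, with every function name invented for illustration (a real kernel reads thermal sensors through ACPI or similar and reacts quite differently):

```python
# Hypothetical sketch of the "kernel pain" monitor described above.
# read_cpu_temperature() and alert_user() are invented placeholders.
import time

DANGER_CELSIUS = 90.0  # arbitrary threshold for a "dangerous situation"

def read_cpu_temperature():
    """Placeholder: a real kernel would read a hardware thermal sensor."""
    return 95.0  # pretend the faulty USB stick has overheated the CPU

def alert_user(message):
    print(message)

def kernel_pain_loop(cycles=3):
    for _ in range(cycles):
        if read_cpu_temperature() > DANGER_CELSIUS:
            # The "pain": a reaction to a situation threatening "survival".
            alert_user("Overheating: remove the USB device!")
        time.sleep(1)

kernel_pain_loop()
```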
However, when you say "feeling of pain", the important term is "feeling", not "pain". To understand feeling you must understand consciousness, but for pain you do not. Which is why I say that the real question is to understand what exactly we mean by consciousness.
@Nishant:
Thanks for the comment.
What you are calling "feeling", I called the "feeling of pain", and what you simply call "pain", I call the "expression of pain". I think you agree that in the case of a program there is just the expression, or computation, and not the feeling; this is what I am trying to say!
Anyway, if you agree to the idea of consciousness, then I feel the point I wanted to convey has been conveyed. I do not call the mechanical process you described for the kernel "pain" at all; you may like to give it that name. The main thing I want to talk about is the feeling, which you rightly pointed out. If you agree that something like consciousness is required in order to "feel", then this is exactly what I am saying; we are in agreement. It also shows that this feeling is missing in the case of a robot, since it doesn't have consciousness: what it has is just the expression and not the feeling.
One more step I take is to say that "consciousness cannot be simulated", whatever you do. It cannot be simulated in a computer program. A computer program can never feel. I somehow feel that you do not agree with this point; kindly let me know your arguments in that case. As I suggested, having a look at the Chinese Room argument might definitely help.
With Regards,
Devansh
Hi Devansh,
If it is robotic consciousness, and whether it can be achieved, that you are discussing, then I am inclined to say that arguments on either side are mere speculation, because we don't understand our own consciousness well enough to say whether it can be replicated. I had a look at the Chinese Room argument, but I wouldn't take either side based on it.
But here’s some food for thought:
Let's say you write a very complex operating system running on a computer. But it does only one thing: it allows programs to run inside it with no interaction with the outside world. Like all computers, the memory and processing cycles available to the operating system are limited, but let's say they are infinite for the sake of our imagination.
Now you put in a very small program. It has two traits.
One, it is programmed to terminate after a specified period.
Two, before it terminates it replicates itself (in multiples), but during the replication it injects itself with some randomly generated code (random in the true sense: it could pick up some part of the code from one of the other programs, or from the OS code, or just random 0s and 1s). The replicated program may or may not work, but if it does work, it must replicate and then terminate after a certain period.
If all programs die, the OS starts the whole cycle again.
Now, what do you think might happen if you allow this to run for, say, a billion years?
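The experiment described here is essentially a bare-bones evolutionary algorithm. A toy sketch of it, with short code strings standing in for programs; all details, including the population cap that keeps the sketch from exploding, are invented:

```python
# Toy sketch of the thought experiment: programs replicate with random
# mutation, then "terminate". All details are invented for illustration.
import random

SEED = "print('hi')"

def mutate(code):
    """Flip one character at random; most mutants won't even parse."""
    i = random.randrange(len(code))
    return code[:i] + chr(random.randrange(32, 127)) + code[i + 1:]

def works(code):
    try:
        compile(code, "<mutant>", "exec")
        return True
    except (SyntaxError, ValueError):
        return False

population = [SEED]
for generation in range(100):          # stand-in for "a billion years"
    offspring = []
    for program in population:         # each program replicates thrice...
        offspring += [mutate(program) for _ in range(3)]
    survivors = [p for p in offspring if works(p)]
    # ...then terminates; the OS restarts the cycle if everything died.
    population = survivors[:50] or [SEED]

print(len(population), "working programs survive")
```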
@Nishant:
Thanks again.
If you associate consciousness with everything and introduce terms like "robotic consciousness", then I feel we cannot have a discussion. Anyway, I think that even without a "complete" understanding of our own consciousness we can discuss whether a robot can have it or not. There are some observations, like pain, understanding and many more, which can never be the properties of a program or a machine, and that proves the point.
In my opinion the Chinese Room argument has much, much more to offer than what we understand from it at first look. I would say my response lies within it.
With Regards,
Devansh
Hi Devansh,
"I think that even without a "complete" understanding of our own consciousness we can discuss whether a robot can have it or not"
… is equal to coming up with a complete answer to an incomplete question. I for one would rather complete the question before taking a jab at the answer. 😀
I must admit I never read your other articles before but I am beginning to get interested in your blog. 😀
Thanks, Nishant, for visiting, commenting and getting interested in the content.
Regarding your argument: if one digs further into the issue, one realizes that one is asking the wrong question in an unsuitable context. You do not say that you need to understand consciousness completely in order to prove that a stone doesn't have it, at least when you are aware of some properties of consciousness. On the basis of certain properties we can always build the argument.
Anyway, my response lies in the arguments given in the post and in Searle's argument, and in my opinion they are very much complete.
This was my first blog on the slightly technical + philosophical side; my other blogs have always been highly philosophical.
Kindly keep visiting and commenting 🙂
With Regards,
Devansh
Actually, without understanding what consciousness exactly is, saying a stone does not have consciousness is speculation and not fact. Of course it sounds absurd when I say it, but is it? 😀
I’ll end our discussion with this comment. Look forward to your future writings.
Cheers and Best Wishes,
Nishant
Thanks Nishant. Thanks for visiting. 🙂
Hi All,
As interesting as the topic is, so is the argument. I'm not involved in any such research or studies, but I think as a layman I can present my viewpoint.
Going back to the topic, can a robot feel pain: my view is that robots and AI together are an attempt to replicate humans and other living beings. The question is whether we would be able to replicate them 100%, where 100% would mean the creation of feelings, feelings similar to those of a living/human being.
I think, as discussed above, we can definitely write complex AI which precisely replicates human behaviour, but before we can agree on robots being the same as humans, the very nature of a human/living being needs to be defined. Without getting into complex theories of human nature, feelings etc., I think we can only interpret behaviour, feelings and so on, but cannot write an algorithm for them; that is, we cannot define them. And if we could, then I guess Einstein and Newton, the two most capable minds so far, would have done it. If we are so far from explaining a basic human mind, behaviour and feelings, I think we are nowhere near explaining these geniuses.
Coming back to the topic of the feeling of pain: I think it is not hard to replicate, which everybody agreed on, and to a certain extent robots can feel, behave, analyse, think, and do a lot more than humans, in line with the concepts of movies like I, Robot. But this would be just a replication of defined things. BUT, AND I QUOTE: HUMANS ARE KNOWN TO BE CAPABLE OF GOING BEYOND THE DEFINED, WHICH IS VISIBLE IN the evolution process.
Having said that, if I argue my aforesaid point and consider the possibility of the existence of a much smarter creator of human beings, just as humans are the creators of robots, then I think I would limit my discussion here, as I don't want to discuss something which nobody has any clue about.
Resting my case and summarising: I think there is no doubt that highly complex AI can replicate humans in aspects of work, thinking, analysing, feeling and the ability to come up with its own logic, but I cannot see that it can have its own version of feelings. That means it can only have the feelings we ask it to feel, unlike humans, who generate millions of feelings, many of them from experience; and I guess that is only a small portion of the whole undefined mechanism which we call emotions or feelings.
Good luck
Cheers
Anshul
You have attempted to explain why a robot cannot feel pain. The fundamental proposition I see is that pain is not part of the "material world" and hence cannot be "felt" by a robot.
I think the post would be more complete if you explained where you think pain resides in animals/human beings. Is it in the wound, in the nerves, in the brain, in a combination of all these, or somewhere else? The possibility of simulation depends on this reasoning!
@Suman Bhaiyya,
Thanks for the comment.
If you notice, I am not making the proposition which you call fundamental. My simple question to the reader is: where and how is the pain in the robot? Kindly answer that. I do not see any place or process in the robot which could be responsible for pain.
With Regards,
Devansh
You might want to read my course presentation on pain: web.iiit.ac.in/~abhilashi/pain.pdf
Literature from neurological studies suggests that:
1) Pain is a social phenomenon
2) No single theory has satisfactorily explained it
Humans themselves have a long way to go to understand pain.
To relate to what you are saying, 'where and how can a robot feel pain?': we do not clearly know how it happens in humans. There are many murky debates going on in various communities over this; there is no agreed definition of pain.
Leaving aside the 'where and how' of pain, a practical question would be: 'what would be better than a Turing test to verify that a machine has understood pain?' To answer this, we first have to ask when we say that humans have understood pain.
a) We (mostly) associate pain with situations harmful to our survival. But then we again fall into the same trap of subjectivity; we cannot go further and verify that our robot has understood pain.
b) We tend to objectively identify when a child is indeed in pain (and when it is not). Can we use this as a modified Turing test for pain?
I end here; I will have to think harder about this myself.
Regards.
Abhilash I.
Thanks. I appreciate your comment.
I am busy at this time and will get back to you with my comments after some time. I think similar arguments were presented in the comments above and I responded to them. Anyway, I will get back later.
With Regards,
Devansh
@Abhilash I:
I think the Turing test is not at all sufficient for "intentional" phenomena. Behavioral similarity and intentional similarity are two different things. This is what John Searle's Chinese Room Argument proves: the Turing test has its limitations.
Different models of how the human mind and brain work have been proposed in science and philosophy in the past. We can at least find limitations in them and say that a given model is not it. This is the method of elimination, by which we can prove that the current model is not sufficient to explain the phenomenon without ourselves knowing what will be! This is what I am trying to do here: I am trying to prove that the "computational model of mind" is NOT sufficient for realizing pain. The thought experiment I have proposed in my blog helps in understanding this.
The Turing test has already been shown to be very limited.
With Regards,
Devansh
Devansh,
I am also an MS student at IIIT Hyderabad. I hope you know Bipin sir. I attended the Cognitive Science course and learned in fairly good detail about the brain, learning, and meaning.
I read the whole discussion, and I found the whole discussion very funny (forgive me). You may say that what I found funny has occupied many brilliant brains and scientists in this world; I agree, but to me it is funny. I will explain below why.
Also, I request you to keep your mind neutral while taking in this content. Biased minds often do not produce any results except discussions, and of course we do not want our forum for mundane discussions; we want some result out of it! My effort will be not to bias your mind towards my thought process, but rather to keep it neutral, so that the thought process keeps moving and does not get stuck at any unproven school of thought (again, "proof" is a vague term; we believe something is a proof because we feel we cannot explore beyond it). I am doing this because the world needs the best brains, like yours; we have had very few since Einstein.
You and I are more into the computer world. We need to go into a different world to get answers for all this: we need to understand the anatomy of the brain first. Since our ultimate aim is to build a robot which simulates the human brain, we should first understand the anatomy of the brain in enough depth that we understand how the signals flow, at least as far as a neurologist knows. Only then can we simulate, and later duplicate, from body to brain.
(Now please do not answer that we will not discuss the human part in this forum; please, no, because we must understand that in humans too, feelings and the understanding of meanings ARE DRIVEN BY SIGNALS AND WAVES, ELECTROMAGNETIC WAVES. This is explained in detail in "Phantoms in the Brain".)
Also, you need to understand how these signals flow. It is very similar to calling a subroutine of an OS: when we touch or feel something, around 30 parts of the brain communicate simultaneously, just like subroutines. (A toy sketch of this analogy is given below.)
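To make the subroutine analogy concrete, here is a minimal sketch in Python. All the names and the interrupt number are invented for this illustration; it is not any real robot's code, only a picture of pure dispatch with no feeling anywhere in it:

```python
# Toy sketch of the interrupt-vector analogy (hypothetical names only).
# A "touch" interrupt number is looked up in a dispatch table and the
# mapped subroutine runs: pure dispatch, nothing here feels anything.

def withdraw_hand(pressure: float) -> str:
    # Choose a response based on how hard the pinch was.
    return "wince and pull hand back" if pressure > 0.5 else "ignore"

# The "interrupt vector": maps an interrupt number to its handler.
INTERRUPT_VECTOR = {
    0x21: withdraw_hand,  # 0x21 = made-up touch-sensor interrupt number
}

def raise_interrupt(irq: int, pressure: float) -> str:
    handler = INTERRUPT_VECTOR.get(irq)
    return handler(pressure) if handler else "no handler installed"

print(raise_interrupt(0x21, 0.9))  # -> "wince and pull hand back"
```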
First, let me write down my understanding of your position:
1. You say we must believe that science has some limitations: science can simulate but cannot give (a) emotions, (b) feelings, (c) learning/meaning (dynamically).
2. Whatever happens in a machine is nothing more than symbol crunching, which cannot ensure understanding. Right?
If yes, then please answer: under what circumstances, conditions, and behaviour would you say that a machine has feelings? Please elaborate.
(Do robots produce enzymes while having emotions? Surely not. I think your expectation is that robots should take decisions more dynamically rather than merely being programmed.)
If you say that after seeing the word "tree" it would not be able to make a decision, that is not correct; even that can be done.
If you say emotions do not work on a set of rules, then I can prove that wrong.
If you say emotions cannot be measured, and have no syntax but only semantics, that is because different humans have different levels of emotion, which makes it very vague. I have an explanation for this, but before that please understand the following (please do not say that we will not discuss humans; I am not discussing humans, and I will finally give pointers for robots after discussing this). Imagine a person who was left alone from childhood to adulthood, say a stone-age man. We know such men lacked emotions (though sensory feeling would be there: if we pinch his hand or hit it with a hammer, he will try to save his hand; but emotions are something different from that, and they come from learning from society and the surrounding environment). Imagine this stone-age man losing his companion: he will not cry at the death, but we might if we lost anyone.
So we can say the environment is one factor driving emotions!
Now please imagine this:
We create a township of robots, where we have 813 robots. Each robot has a machine learning algorithm and a different learning curve (this is feasible, right?). Now leave them for 10 years.
They will interact with each other and learn from each other, so each robot will learn differently.
Based on their learning they will react (even this is feasible in science: reactions based on learning).
Now you go to meet the robots after 10 years. To one robot you say, "My friend died in a road mishap"; this robot may react by saying, "Good, he was an XYZ fellow." Another robot, told the same thing, may say, "I feel sad about it," based on its learning.
On that day you may, to some extent, agree that the robots now have emotions. So varying behaviour is one marker of emotions. (A minimal simulation sketch of this township is given below.)
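As a toy illustration only, here is a minimal sketch of the township. The learning rule, the "sympathy" score, and all parameters are invented for this example; it only shows how different learning histories can produce different reactions to the same input:

```python
# Toy sketch of the robot-township thought experiment (illustration only;
# the learning rule and all parameters are invented for this example).
import random

class Robot:
    def __init__(self, name: str, learning_rate: float):
        self.name = name
        self.sympathy = random.random()  # learned disposition, starts random
        self.lr = learning_rate          # each robot's own learning curve

    def interact(self, other: "Robot") -> None:
        # Drift towards the other robot's disposition, at our own rate.
        self.sympathy += self.lr * (other.sympathy - self.sympathy)

    def react_to_bad_news(self) -> str:
        return ("I feel sad about it."
                if self.sympathy > 0.5 else "Good, he was an XYZ fellow.")

random.seed(0)
town = [Robot(f"robot-{i}", random.uniform(0.01, 0.3)) for i in range(813)]

for _ in range(10_000):  # "10 years" of random pairwise interactions
    a, b = random.sample(town, 2)
    a.interact(b)

print(town[0].react_to_bad_news())  # reactions now differ by learning history
print(town[1].react_to_bad_news())
```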
Again, you may argue that the robot did not produce enzymes, as humans do during emotions; at that point the robot could ask why humans do not produce signals during emotions.
Please explain what we would have to create for you to say that the robots have started feeling emotions and that it is not mere simulation.
Also, you said:
"Computational model of mind is NOT sufficient for realizing pain"
Since you gave this statement, we believe you know what pain is. Please throw light on what pain is, and why you do not think that the pain humans feel is itself a simulation. Do you feel that during pain something very different happens in the unbounded universe, something which cannot be understood, or is it some chemical reaction which should also be possible in machines? (Please, I did not find any answer in the previous replies, so do not say the answers are given in previous updates; I read all the updates 3-4 times.)
Dear Paresh,
I am happy to see your comments/questions/suggestions on my post. The last point on which you talked about is the crux of my post and I would like to start from that.
“Computational model of mind is NOT sufficient for realizing pain”
I cannot give a formal definition of pain; it is beyond my capabilities. If you can, then kindly give one. But that in no way implies that we cannot discuss pain.
When you expect a formal definition of pain from me, what do you expect me to give you? An account of how it is realized in signal processing or hormonal discharge itself? I would say pain is something more than that, and there is sufficient evidence in the Cognitive Science course itself, which you can find, and which I found when I took it last year under Prof. Bipin.
Generally in science, whenever a phenomenon is discussed which is not formally defined but is observed to be happening, we start with its "folk" definition and begin the discussion, and in the course of that discussion the definition is revised if it can be. I am trying to do a similar thing here. I am starting with the folk definition of pain, i.e. "what you feel when your hand is pinched". If you say that you do not feel any pain, then I will ask: does the phenomenon of pain exist for you or not? If you say the phenomenon of pain itself does not exist for you, then we cannot have a discussion about a thing on which neither of us has clarity. If you say that you feel something like pain, or that the phenomenon of pain exists for you, then we can talk.
Now let us start with this "folk" definition of pain and try to see whether a robot can realize pain. If you say "yes, it can", then kindly give supporting arguments. I say it cannot! Or, to be specific, I will say that a computational model is insufficient to realize pain. In support of this argument I offer John Searle's Chinese Room Argument.
Having had so many discussions with so many people on this topic, I also believe that such discussions are more fruitful face to face than in a place like this. It will be my pleasure if you are willing to meet me in person.
I can generally be found at:
Center for Exact Humanities, above SH2
Room: C3-303
With Regards,
Devansh Mittal
PhD Student at Center for Exact Humanities.
With the folk definition, a robot can say, "Yes, I have felt pain!"
Now, if we say that we cannot call subroutines and sensors pain-displaying tools, and that it is only a chemical reaction which creates pain, then discussing all this is immaterial,
because a machine cannot presently produce chemical reactions. BUT even that can be simulated.
All I want to say is: EVEN HUMANS ARE SIMULATIONS, WHERE SIGNALS PASS AS WAVES FROM THE MUSCLES TO THE HIPPOCAMPUS, TO THE TEMPORAL LOBE, AND TO OTHER PORTIONS OF THE BRAIN.
When we are given anaesthesia we do not feel pain, just as a robot will not react if we disable the relevant subroutine. (A toy sketch of this "disable the handler" analogy follows below.)
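A minimal sketch of that analogy, assuming the same kind of made-up dispatch table as in the earlier sketch (hypothetical names; not real robot code): removing the handler makes the same stimulus go unanswered, the way anaesthesia blocks the pain response.

```python
# Toy sketch of the anaesthesia analogy (hypothetical names only):
# removing the handler from the dispatch table makes the same stimulus
# go unanswered, just as anaesthesia blocks the pain response.

PINCH_IRQ = 0x21  # made-up interrupt number for a pinch stimulus

handlers = {PINCH_IRQ: lambda: "wince and pull hand back"}

def stimulate(irq: int) -> str:
    handler = handlers.get(irq)
    return handler() if handler else "no reaction"

print(stimulate(PINCH_IRQ))   # -> "wince and pull hand back"
handlers.pop(PINCH_IRQ)       # "anaesthesia": disable the subroutine
print(stimulate(PINCH_IRQ))   # -> "no reaction"
```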
Also, please comment on my robot simulation, where we create a township of robots: when you meet these robots after ten years, what would your feelings be?
Dear Paresh,
Thanks for your comment again.
Kindly see the difference between simulation and duplication. We can simulate the expression of pain in a robot, but I am only saying that it is not possible to create the “feeling” of pain.
Secondly, when you say what you have said about humans in upper case, what is the basis of that "knowledge"? There are many counterarguments to it. I would say it is your assumption.
As I said earlier, from my past experience I believe a face-to-face conversation would be a better idea, if you are willing. In a discussion in person we can explore together and come to a common conclusion, or agree to disagree.
With Regards,
Devansh
PS: In relation to your thought experiment of a city of robots, and the uncertainty of a robot's behavior, I would just say that uncertainty in the behavior of a robot never implies the presence of emotions/feelings. In fact, behavior itself is an insufficient parameter for evaluating "internal" things like emotions, feelings, pain, etc.
I respect your point of view, but you still have not explained when, i.e. under what circumstances, we can say that robots have pain.
Should they produce enzymes, or what?
If you do not have an answer, then I think we should change our direction and start researching this.
Also, we should then change the heading of this forum to "What is pain?", as no one yet knows the answer.
I will come to meet you face to face, but before that I want to do my homework so that the meeting will be fruitful.
Dear Paresh,
Thanks again for the comment.
I do not know under what circumstances robots would feel pain! All I am trying to say is that a computational model is insufficient to realize pain. You see, I am not making an assertive statement; I am making a negating argument, and I offer the Chinese Room Argument as proof.
You are welcome at my lab anytime. I will mail you my mobile number; if you call me before coming, I will ensure my presence.
With Regards,
Devansh
Hi
The post below is the anatomy of the pain a human feels; it explains how pain is different from a simple program. I am posting it because everyone connected to this forum will find it useful for thinking about this topic.
Excerpt taken from one of the sites:
Let's say you step on a sharp rock and feel pain. Below is the anatomy of what happens.
First, sensory nerves send impulses about what is happening in our environment to the brain via the spinal cord. The spinal cord does more than act as a message center: it can make some basic decisions on its own. These decisions, made at the dorsal horn, are called reflexes. The brain does not have to tell your foot to move away from the rock, because the dorsal horn has already sent that message. Even though the spinal reflex takes place at the dorsal horn, the pain signal continues to the brain, because pain involves more than a simple stimulus and response.
When the pain signal reaches the brain it goes to the thalamus, which directs it to a few different areas for interpretation. A few areas in the cortex figure out where the pain came from and compare it to other kinds of pain with which it is familiar. Was it sharp? Did it hurt more than stepping on a tack? Have you ever stepped on a rock before, and if so, was it better or worse?
Signals are also sent from the thalamus to the limbic system, which is the emotional center of the brain. Ever wonder why some pain makes you cry? The limbic system decides.
Feelings are associated with every sensation you encounter, and each feeling generates a response. While it may seem simple, the process of detecting pain is complicated by the fact that it is not a one-way system. It isn't even a two-way system. Pain is more than just cause and effect:
Other Effects:
Your heart rate may increase, and you may break out into a sweat. All because of a rock underfoot.
Your mood, your past experiences, and your expectations can all change the way pain is interpreted at any given time. If you step on that rock after you have had a fight with your wife, your response may be very different than it would be if you had just won the lottery. Your feelings about the experience may be tainted if, the last time you stepped on a rock, your foot became infected.
If you stepped on a rock once before and nothing terrible happened to you, you may recover more quickly. You can see how different emotions and histories can determine your response to pain.
So these are some emotions, which we can say are something more than a program, and this happens due to our learning and billions of years of evolution!
Now imagine if we imbibe all of this into a robot! We can! Yes, we can!
(Hence, understanding human anatomy comes first. If machine learning and other complex algorithms were taught to doctors, they could make a better robot than a computer scientist could.) A toy sketch of this pathway as a program is given below.
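As an illustration only, here is a minimal sketch of the pathway described in the excerpt. Every stage name, threshold, and parameter is invented for this example; it only shows how reflex, interpretation, and mood-dependent emotion could be staged in a program:

```python
# Toy sketch of the pain pathway described above, as a staged pipeline
# (all stage names and thresholds are invented for this illustration).

def dorsal_horn_reflex(intensity: float) -> str:
    # The spinal cord decides on its own, before the brain is involved.
    return "withdraw foot" if intensity > 0.3 else "no reflex"

def cortex_interpret(intensity: float, history: list[float]) -> str:
    # Compare against pains the system is already familiar with.
    worst = max(history, default=0.0)
    return "worse than before" if intensity > worst else "familiar, milder"

def limbic_response(intensity: float, mood: float) -> str:
    # Mood modulates the emotional response to the same stimulus.
    return "cry" if intensity * (1.0 - mood) > 0.5 else "grimace"

def step_on_rock(intensity: float, mood: float, history: list[float]) -> dict:
    reflex = dorsal_horn_reflex(intensity)          # spinal cord, immediate
    meaning = cortex_interpret(intensity, history)  # thalamus to cortex
    emotion = limbic_response(intensity, mood)      # thalamus to limbic system
    history.append(intensity)
    return {"reflex": reflex, "interpretation": meaning, "emotion": emotion}

history: list[float] = [0.4]
print(step_on_rock(0.8, mood=0.2, history=history))  # bad mood, sharp rock
print(step_on_rock(0.8, mood=0.9, history=history))  # same rock, good mood
```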
Well written and well thought,
I have a few questions:
1. Where do we feel pain? If we are hurt on the little finger of the right hand, do we feel pain in the little finger of the right hand, or do we feel pain in our mind? If we feel pain in our mind, then how does it relate to the location of the hurt? What is the mechanism which makes this connection and tells you that the pain is in the little finger of the right hand and not in the little finger of the left hand? If we feel pain at the location of the hurt, then bingo, we have located the mind, because we know where we feel pain. Right?
2. In your example of the robot, the assumption you made is that we know it is a robot. Suppose we do not know that the person we are talking to is a robot; I am sure there is then no way to find out whether he is a robot or not. If this information is not given to you, how on earth are you going to confirm that this person is a robot? Will you take his X-ray to determine whether he has body parts? Remember, if you do that, then you are defining a man by his body, which is again against your argument; and if you are not going to define him by his body, then what is the method of finding out that he is not a human being but a robot?
Thanks for the comments Amitash Bhaiyya!
I will try to respond to them.
1. I am not at all interested in locating pain in a human being, at least in the context of this post. As I mentioned many times in earlier comments and in the post itself, I am limiting myself to a machine and not talking about a human being.
2. It is given that what we have is a robot or a machine, and we are trying to understand whether it can have pain. My task, in this post at least, is just to show that a machine or robot cannot have pain, which I did. In case you have any comment on this, kindly post it 🙂
I accept and appreciate your arguments that it is difficult to locate pain in a human and difficult to distinguish a robot from a human, at least by behavior. For the set of arguments you have put forward, I will try to give a different set of thought experiments which may take us to a better understanding of "pain" and "human".
Sorry, but it seems you're mostly avoiding the questions and saying you've already responded in one of the countless replies here. You're also limiting the discussion arbitrarily: you say you don't know what pain is, or what would have to be emulated in a robot to convince you that robots can feel pain (because certainly human nerves and the brain are just electrical signals and chemistry, something that could absolutely be simulated with enough understanding and technology, and it's only the complexity that creates the illusion of self-awareness), and yet you're defining exactly what should be discussed, i.e. that human beings should be left out. That seems just silly, and it doesn't get us anywhere.
@Shuzu:
Thanks for your comments.
"(because certainly human nerves and the brain are just electrical signals and chemistry, something that could absolutely be simulated with enough understanding and technology, and it's only the complexity that creates the illusion of self-awareness)"
I would say this is your assumption. What you have said in the above paragraph is nothing more than your own, or an available, hypothesis.
No strict/formal definition of pain exists. Even while limiting ourselves to a machine or a computational model, we can still talk about pain using its folk definition; this is the beauty of the Chinese Room Argument.
http://plato.stanford.edu/entries/chinese-room/
The man sitting in the room doesn't feel any pain; neither does the rule book, nor the room in which the man sits. So where is the pain in the Chinese Room setup? (A toy rendering of the setup is sketched below.)
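As a minimal sketch of the setup (the rule-book entries are invented placeholders): everything the room "says" about pain is produced by symbol lookup, and no component of the system understands, or feels, anything.

```python
# Minimal sketch of the Chinese Room as pure symbol manipulation.
# The "rule book" entries are invented placeholders: the man in the room
# matches incoming symbols against rules and copies out the answer,
# understanding (and feeling) nothing.

RULE_BOOK = {
    "你疼吗？": "疼，很疼！",   # "Are you in pain?" -> "Yes, it hurts a lot!"
    "你好吗？": "我很好。",     # "How are you?"     -> "I am fine."
}

def man_in_room(chinese_input: str) -> str:
    # The man neither reads Chinese nor feels pain; he only looks up symbols.
    return RULE_BOOK.get(chinese_input, "请再说一遍。")  # "Please say it again."

print(man_in_room("你疼吗？"))  # the room reports pain; nothing in it feels pain
```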
Kindly read the Chinese Room Argument and then answer the question I have asked above; then we can probably proceed with the talk.
With Regards,
Devansh