As we all know, robotics is a very controversial topic. The field isn't developed enough yet for clear sides to form, so the debate has many fronts. One of the main issues with robotics and artificial intelligence is the fear that robots and/or AIs will outsmart the human race and somehow become determined to attack or exterminate humanity. Some people believe that robots can and will outsmart us humans, while others believe they will forever be a subservient species. Hence, a debate.

Artificial Intelligence and Consciousness
Artificial Intelligence and Artificial Consciousness are two different things. AI is simply the ability of an object/machine/something to act on its own. AC, however, is the ability of something to be like humans and live the same experience we do, thinking and feeling. Of course, "feeling" is a subjective term and is up for interpretation. AC is not currently achievable (to our knowledge), but AI is completely possible, and is even being used now in that game you might have heard of, Halo. As the consciousness argument states, we cannot prove or disprove AC until we figure out what consciousness is in the first place. This argument touches many different levels and isn't fully developed yet.

Assuming Artificial Consciousness is possible, is it ethical to have conscious machines/androids/robots be a subservient species?
As with most arguments, ethics is an integral part. In no way can this question be backed up with facts, so it is open to interpretation. If we were to build a sentient being that can process information and view the world exactly as we can, would it be ethical for it to serve the purpose of doing menial labor? Would it be okay, or would it be a form of slavery?
If a being is sentient like us, should it be guaranteed the same rights humans have? Overall, this entire subject splits into many debates and sub-debates. What are your thoughts, ForgeHub? Will robots take over and kill us all? Can we even create robots that are as smart as us? Should we treat them as equals, or as second-class beings?
A robot, no. But if we ever got to the point of designing AI, then it may be possible. +rep because i love this kinda stuff!
We already have designed AI, read the OP. I don't think it will ever happen. I, Robot is an awesome movie, but not realistic.
well i thought that the meaning of life was to create robot overlords who will enslave humanity, right? But back on topic, to the person above me, this could happen in the near future, ever watch the discovery channel?
Robot Immortality
Hopefully we will be the robots you speak of. Pacemakers, prosthetic limbs, and artificial eyes have already brought humans closer to our cyborg destiny. Once we understand the way the brain stores and transmits information, you may be able to learn new knowledge by queuing it for download to a chip in your brain. We are closer than you think: Mad Neuroscience: Prosthetic Speech Implant Turns Your Thoughts to Words
I think you wouldn't use a robotic sentient being to do menial labor. I would assume at this point in time we'd just have advanced AI units to do menial labor. I'm not even certain what purpose a robotic sentient being would serve. The only reason I see for a robotic sentient being to exist would be to fulfill the human desire to be "God-like" by creating another life form. Otherwise we can just use more advanced versions of the AI we have now.
And also, if these so-called robots did that kind of work, many people would lose their jobs, and that's a bad thing.
I agree with Draw as he posted above. As for this kind of thing happening in any of our lifetimes, I highly doubt it. But this is definitely an interesting philosophical, ethical, and scientific issue to ponder. For those of you interested in this kind of thing, I highly recommend the movie Bicentennial Man with Robin Williams. In the film, he plays a robot which somehow acquires Artificial Consciousness. In his quest to become more human, he starts to replace artificial parts with human organs. Eventually, he chooses death over immortality so that he may really experience the human condition.
Robots taking over the human race also surfaced in (my Pixar favorite) WALL-E. I'm not gonna get into detail, but it occurs. As for the possibility of it happening, I do not know, but I see it as highly unlikely.
I also want to share something I read about AI. Computers can already be programmed to evolve, in a sense. Programmers would present the computer with a scenario and have it generate different computer models as solutions. I forget the example that was used. Anyway, the computer would create like 100 different solutions, then run them through a variety of tests. It would take the top 10 performers and mix them together (very much like natural selection in evolution). Then it would generate 100 second-generation "offspring" of those top 10 performers and test those 100. Again it would take the top 10 performers and mix them together. After a few "generations" of models, it has one kick-ass, efficient solution. It can even replicate genetic mutations by adding some random changes to a few of the models. I don't know much about computer programming or engineering, but it was a very interesting article. If a computer can learn how to improve its own processing power, memory, speed, etc., will it eventually gain consciousness? If so, when?
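The process described above is basically a genetic algorithm, and it can be sketched in a few lines of Python. This is just a toy illustration following the post's numbers (100 candidates per generation, keep the top 10, mix and mutate); the fitness test here is a made-up one, not from any real article.

```python
import random

def evolve(fitness, generations=20, pop_size=100, elite=10):
    """Toy genetic algorithm: breed the top performers each generation."""
    # Start with 100 random candidate "solutions" (here: lists of numbers).
    population = [[random.uniform(-1, 1) for _ in range(8)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Run every candidate through the test and keep the top 10.
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:elite]
        # Breed 100 "offspring" by mixing two parents' genes (crossover)...
        population = []
        for _ in range(pop_size):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]
            # ...and occasionally add a random change (a "mutation").
            if random.random() < 0.1:
                child[random.randrange(len(child))] = random.uniform(-1, 1)
            population.append(child)
    return max(population, key=fitness)

# Toy test: the best solution is the one whose numbers sum closest to 4.
best = evolve(lambda genes: -abs(sum(genes) - 4))
```

After a couple dozen generations the surviving candidates score far better than anything in the random starting population, which is the "kick-ass, efficient solution" effect the article described.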
Furious...! Why don't you find that article? That sounds like a good read. As for a total non-comment here: computers, in my mind, don't really improve... just the design of them does.
Agreed. That'd be an awesome thing to add to my OP. If you get the link anytime soon, just post it here.
As long as we don't mess with Asimov's Laws of Robotics for one reason or another, we should be OK. Frankly, the fear that robots will exterminate humanity is simply an irrational fear of the unknown. Robots are logical. There is nothing logical about expending and destroying resources to exterminate something when complete extermination is all but impossible without killing EVERYTHING. I'm all for developing robotics, but there's a line that shouldn't be crossed. Much like cloning, there are too many moral issues with AIs. For instance, making them as mass-producible soldiers, then sending sentient life off to die.
I think AIs will probably take over the blue-collar work force, but I don't think anyone will program them to think in paradoxes (e.g. I, Robot).
If we did make sentient robotic beings in addition to the "dumb" robots we'll inevitably make to do our dirty work and hard labor, wouldn't it be logical to assume that said sentient robots would raise a stink over the exploitation of their "dumb" robotic brothers? As Draw said, I'm sure the only reason to make a sentient robot would be to fulfill the dream of "Godliness," but in truth all it takes is one sentient robot to start a chain of events leading to "robot rights" and other such things. Hell, we can't even get human rights to work now. Imagine what we'd do having to deal with another species. Robot marriage? Legalization of robo-juana? A robot-fuel crisis? IMO, we shouldn't dabble in the creation of sentient robots, as they'd be more problematic than helpful.
I don't believe that there is such a thing as "artificial" intelligence. The way AI is supposed to function is "If X happens, respond with response 1," times a billion. Isn't that the same way our human minds function (on a much more complex scale, of course)? The only differences are the levels of complexity, intelligence, and physical components of the "minds." Should someone create a machine with intelligence comparable to a human being, then it should be treated like a human being.
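The "If X happens, respond with response 1" model described above is essentially a lookup table mapping stimuli to canned responses. A minimal sketch, with made-up stimulus and response names:

```python
# Minimal stimulus-response "AI": a table of canned reactions.
RULES = {
    "player_flanks": "fall_back",
    "grenade_thrown": "scatter",
    "low_health": "retreat",
}

def react(stimulus):
    # "If X happens, respond with response 1" -- times a billion in a real game.
    return RULES.get(stimulus, "idle")
```

Scale the table up by a billion entries and you have the kind of scripted behavior the post is describing; the open question is whether a sufficiently complex table is any different in kind from what our minds do.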
In my mind, no robot with advanced AI will decide to eliminate humankind. We develop the software they run on. It's not like this will happen.
But that's not really the problem. I don't think anyone has mentioned yet that the reason real AI is so far off is that the human brain and the computer "brain" function in entirely different manners, which is what makes computers so useful. There would be no real reason to make a true AI, as it would be useless; we already have that. Simply put, computers are useful because they are different from people. If they become too much alike, they will be pointless. That is not to say, however, that a computer with simple reasoning skills is not beneficial.
I think that if humans were ever to build robots with the ability to act and think like humans, they would have to be treated like us to avoid a war or killing. Also, Halo has AI, but not true AI. If the game had true AI, it would eventually outsmart you. The AI today is programmed with a set number of actions, and for every action you can take in the game there is an equal reaction from the enemy (AI). True AI would be able to learn from its mistakes and create actions of its own. For example, if you flank an enemy, you would not be able to use that exact same move again, because they would have changed their strategy to prevent yours from working: they could now travel in packs, with one watching the front and the other the back. Combine that with a realistic one-shot-kill shooter, and eventually the machine would win if it were true AI; it is only a matter of time.
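The distinction drawn above (scripted reactions versus an opponent that learns from its mistakes) can be sketched by making the reaction table rewrite itself after a tactic succeeds against it. All the names here are hypothetical, just to illustrate the idea:

```python
class LearningEnemy:
    """Toy 'true AI' opponent: a tactic that beat it once stops working."""

    def __init__(self):
        self.known_counters = {}  # tactic -> counter it has learned

    def respond(self, tactic):
        if tactic in self.known_counters:
            # Adapted: the enemy now has a prepared counter for this move.
            return self.known_counters[tactic]
        # The tactic works this time, but the enemy learns from the mistake.
        self.known_counters[tactic] = "counter_" + tactic
        return "caught_off_guard"

enemy = LearningEnemy()
first = enemy.respond("flank_left")   # the flank works once...
second = enemy.respond("flank_left")  # ...but not twice
```

A scripted AI like the post describes would return the same reaction both times; the learning version changes its strategy after the first encounter, which is exactly why a repeated flank would stop working against true AI.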