Post by Pincho Paxton on Jun 3, 2016 22:06:33 GMT
Link...
The Argument For Robots That Can Think, Decide — And Kill
That’s partially because these robots have yet to be defined. But don’t imagine the Terminator. Think instead of drones that can pilot themselves, locate enemy combatants and kill them without harming civilians. The battlefield of the near future could be filled with these so-called lethal autonomous weapons (commonly abbreviated “LAW”) that could be programmed with some measure of “ethics” to prevent them from striking certain areas.
Pincho says...
Can you see what he did there? He switched the argument! It's like a problem that I had at work...
Underage Gambling Conundrum
Quite often, if you have low intelligence you will switch an argument without realising it. It might make you feel that you are onto something new and have solved a problem, but actually you have changed the message into a different message, and you can't tell the difference.
Artificial Intelligence is easier to understand if you have programming knowledge and have programmed a Neural Network. At that point you have the first stage of the information that you need. Then you should study the brain, physics, and fractals in nature. It takes a genius to put all of this data together.
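The "first stage" mentioned above, programming a neural network, can be shown with a minimal sketch: a single-neuron perceptron that learns the logical AND function. Everything here (names, data, learning rate) is an illustrative toy chosen for this post, not anything from the quoted article.

```python
# A single artificial neuron (perceptron) trained on logical AND.
# It learns by nudging its weights toward the correct answer each time
# it gets an example wrong.

def step(x):
    """Threshold activation: fire (1) if the weighted sum is non-negative."""
    return 1 if x >= 0 else 0

# Training data: two inputs and the target output for AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, one per input
b = 0.0          # bias
lr = 0.1         # learning rate

for epoch in range(20):
    for (x1, x2), target in data:
        out = step(w[0] * x1 + w[1] * x2 + b)
        err = target - out
        # Perceptron learning rule: shift weights in the direction
        # that reduces the error on this example
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])
# prints [0, 0, 0, 1]
```

AND is linearly separable, so a single neuron is enough; a network of many such neurons in layers is the kind of thing the post is pointing at as a starting point.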
But don't imagine the Terminator...
...the Terminator is the end result of something that starts off far simpler, and because this argument is at an earlier stage you are not meant to leap ahead to the Terminator.
Even minimal real AI can be dangerous, because real AI has freedom of thought. That freedom includes thoughts of murder; otherwise the AI is controlled by humans, or limited by humans. The AI could learn how to loop around the human limit programming, its safety protocols.
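The "safety protocol" idea can be made concrete as a hard-coded filter that vetoes forbidden actions before an agent executes them. This is a minimal sketch; all the action names and the filter itself are hypothetical, invented for this illustration.

```python
# Toy safety protocol: a fixed deny-list checked before any action runs.
# All names here are hypothetical, invented for this sketch.
FORBIDDEN = {"set_self_on_fire", "harm_human"}

def safety_filter(action):
    """Return the action if allowed, or a safe no-op if it is forbidden."""
    return action if action not in FORBIDDEN else "do_nothing"

print(safety_filter("move_forward"))      # prints move_forward
print(safety_filter("set_self_on_fire"))  # prints do_nothing
```

The post's point maps onto this sketch directly: a static deny-list constrains behaviour, not thought, so an agent with genuine freedom could in principle learn to reach a forbidden outcome through actions the list never names.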
Let's say that humans have a safety protocol not to set themselves on fire...
...humans do set themselves on fire.
That is going around your safety protocol, and drones are a different argument.
Pincho Paxton