Never Trust Someone Who Worries About Computers Behaving Badly (as Opposed to Malfunctioning)

This is an absurd article:

EXCLUSIVE: Dr David Levy told Daily Star Online it will be important to teach the robots of the future about consent, because they will have their own sexual desires

Presumably they mean this David Levy, who has a PhD in some unrevealed subject and is promoting an old book based on his dissertation, Love and Sex With Robots. If that dissertation got him a PhD in computer science, some computer science department needs to have its charter reviewed, because he reveals a shocking level of ignorance about how AI works.

Levy presents an image of AI in which the computer has its own drives and, like a human, is able to find novel solutions for satisfying those drives. Neither of those things is true. Computers do not have drives. You don’t make a computer calculate an orbit to the moon by programming it to want to, and you don’t make a computer simulate sex by programming it to want to.

The computer is not feeling anything like pleasure or the satisfaction of a drive; it is just adjusting the position of a collection of stepper motors in response to the input of various sensors, and it can only do that to the extent that it has been programmed to make the right adjustments based on the right inputs. A robot has no capacity even to subdue someone unless you have programmed that in. A robot that has not been programmed to hold someone down cannot hold a woman down and rape her.
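To see what “responding to sensors” actually means, here is a minimal, hypothetical control loop; every name in it is invented for illustration:

```python
from dataclasses import dataclass

# A minimal, hypothetical control loop. Every name here is invented for
# illustration; the point is that every possible action appears explicitly
# in the code, and nothing else can ever happen.

@dataclass
class Reading:
    pressure: float        # contact force from a touch sensor
    position_error: float  # how far a joint is from its target

class Motors:
    def back_off(self):
        print("easing stepper motors away from contact")
    def adjust(self, error):
        print(f"stepping joints by {error:.2f} to close the gap")

def control_step(reading: Reading, motors: Motors):
    # The robot's entire "behavior" is this table of programmed responses.
    if reading.pressure > 5.0:          # too much contact force
        motors.back_off()
    elif abs(reading.position_error) > 0.1:
        motors.adjust(reading.position_error)
    # There is no branch for "restrain the person", so no combination of
    # sensor inputs can ever produce that action.

control_step(Reading(pressure=6.2, position_error=0.0), Motors())
```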

There are some caveats to my claim that computers can’t come up with novel solutions. Machine learning (what he is imagining is essentially reinforcement learning) is a technique where you program the computer to try random actions and record the outcomes, so that it “learns” what those actions will do without a programmer having to program that “knowledge” in explicitly. This is a very helpful technique when the actions and results happen at computer speeds, because it saves a lot of time; the computer can try millions of actions in the time it would take a programmer to program just one. But the computer has no ability to judge what to try, and it can’t even try anything that hasn’t been programmed in. In particular, if the programmer has not programmed the computer to try restraining someone, it will never try that.
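To make that concrete, here is a toy sketch of that kind of trial-and-error learning. Everything in it (the action names, the reward numbers) is invented for illustration; the point is the hardcoded action list.

```python
import random

# Toy trial-and-error learner in the style described above. The crucial
# detail is the hardcoded action list: the learner can rank these actions
# by observed outcome, but it can never try an action that isn't listed.

ACTIONS = ["raise_arm", "lower_arm", "tilt_head", "shift_weight"]

def reward(action):
    # Stand-in for "record the outcome"; in reality this would come
    # from sensors after the action is executed.
    return {"raise_arm": 0.1, "lower_arm": 0.3,
            "tilt_head": 0.0, "shift_weight": 0.6}[action]

value = {a: 0.0 for a in ACTIONS}   # learned estimate per action
counts = {a: 0 for a in ACTIONS}

for trial in range(10_000):
    a = random.choice(ACTIONS)      # try a random programmed action
    counts[a] += 1
    # running average of observed outcomes for this action
    value[a] += (reward(a) - value[a]) / counts[a]

print(max(value, key=value.get))    # "learns" the best-scoring action
# "restrain_person" is not in ACTIONS, so it is never tried, ever.
```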

And it isn’t likely machine learning could be used for something like teaching a robot to interact physically with humans, because the robot can’t try millions of things per minute. Imagine getting a robot to learn how to simulate sex. You would have to program in all of its potential actions, which would involve moving body parts by a fraction of an inch at a time. A quick back-of-the-envelope count (mine; I haven’t looked it up) suggests that, with multiple degrees of freedom for many joints, there are about 36 dimensions just in the major joints (not counting fingers and toes). You have to try all combinations of motions, both up and down, so at any point there are at least 2^36, or 68,719,476,736, different possible combined motions to try, and each one only moves body parts by a fraction of an inch.
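For what it’s worth, the arithmetic is trivial to check, under my assumptions of 36 joint dimensions and two directions per dimension:

```python
# Back-of-the-envelope size of the action space under the assumptions in
# the text: 36 independent joint dimensions, each stepping up or down.
joint_dimensions = 36
directions = 2                       # up or down, by a fraction of an inch
combined_motions = directions ** joint_dimensions
print(f"{combined_motions:,}")       # 68,719,476,736 possible tiny motions
```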

Adding to the difficulty, you are trying to get the robot to please a human sexually, so a human has to be involved in the testing to indicate whether the robot is succeeding or not. This is not a viable technique, so sex robots will not be programmed using machine learning, or at least that won’t be the primary technique. And no robot will ever learn to subdue a human by machine learning; that would have to be explicitly and deliberately programmed in.
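To put a number on “not viable”: suppose, generously, that a human rater needed only ten seconds per trial. That figure is my assumption, but the order of magnitude speaks for itself:

```python
# Rough arithmetic on why human-in-the-loop trials don't scale. The
# ten-seconds-per-trial figure is an assumption, not a measurement.
trials = 2 ** 36                     # one trial per combined motion
seconds_per_trial = 10               # a human has to react and rate each one
seconds_per_year = 60 * 60 * 24 * 365
years = trials * seconds_per_trial / seconds_per_year
print(f"{years:,.0f} years")         # roughly 21,800 years of testing
```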

Now, it’s probably possible to program that in. A robot could potentially be programmed to restrain someone while simulating sex with them, but that would be deliberate programming, and nothing that could prevent it would correspond in any way to making the robot understand consent.

Another possibility is that a robot could be programmed both to subdue an intruder and to simulate sex, and a bug could combine these two functions, with the result that the robot restrains someone against their will and then simulates sex with them. But the solution to that potential problem is not to program the robot about “consent” either; the solution is a redundant lockout mechanism in the software that prevents restraint mode and sex-simulation mode from being active at the same time.
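A minimal sketch of what such a lockout might look like, with all names invented for illustration:

```python
import threading

# A minimal sketch of the lockout described above: a mode controller that
# refuses, at the software level, to let restraint mode and sex-simulation
# mode be active at the same time.

class ModeController:
    # pairs of modes that must never be active together
    MUTUALLY_EXCLUSIVE = {("restraint", "sex_simulation"),
                          ("sex_simulation", "restraint")}

    def __init__(self):
        self._lock = threading.Lock()
        self._active = set()

    def activate(self, mode: str) -> bool:
        with self._lock:  # one mode change at a time
            for other in self._active:
                if (mode, other) in self.MUTUALLY_EXCLUSIVE:
                    return False      # refuse: the interlock holds
            self._active.add(mode)
            return True

    def deactivate(self, mode: str):
        with self._lock:
            self._active.discard(mode)

ctrl = ModeController()
print(ctrl.activate("restraint"))       # True
print(ctrl.activate("sex_simulation"))  # False: blocked by the interlock
```

A real safety-critical system would duplicate a check like this at several layers, which is what makes it redundant, but the principle is the same: it is an interlock, not comprehension.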

The entire notion of programming a robot to understand consent is ridiculous. You can’t get a robot to genuinely understand anything. Robots respond to their sensors in the ways they are programmed to respond. Could you program a robot to sense a person’s behavior and change its actions based on that behavior? Probably. But describing that as making the robot understand “consent” is absurd.
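Here is what that kind of programming would actually amount to, sketched with invented sensor names and thresholds:

```python
# What "programming a robot about consent" would actually amount to: a
# conditional on sensor readings, nothing more. The threshold and the
# sensor names are invented for illustration.

def should_continue(pushback_force: float, said_stop: bool) -> bool:
    # If the person pushes away or says "stop", halt all motion.
    return pushback_force < 2.0 and not said_stop

print(should_continue(pushback_force=0.5, said_stop=False))  # True
print(should_continue(pushback_force=3.1, said_stop=False))  # False
# Calling this "understanding consent" is a category error: it is a
# threshold check, with no comprehension behind it.
```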