Correct me if I am wrong, and I do realize I am possibly being naive, but none of these AI machines were built without a human. A human designed its code and told it what to do. It can only learn within the parameters of what a human has coded it to recognize. You can write algorithms and code to make a machine interpret, access, and react, but how does a machine do anything more than what it's told? In other words, these machines can only evolve to what we can code them for. I don't see how a machine can teach itself something it does not recognize. At some point the robot must reach a "does not compute" moment when it hits the limits of its programming.
All that being said, I will do some research to enlighten myself on any new tech I may have missed, including watching the TED talk that is linked, but I personally have not seen any AI that is anything like what is seen in Black Mirror.
Yeah, if you're interested in the subject (and I think everyone should be), I would read more about it. Of course we don't have Artificial General Intelligence now, but we are quickly approaching it. Google's DeepMind, for example, has been programmed to "learn" how to beat video games. This doesn't sound like much, but it figured out how to do so on its own. It wasn't programmed to beat them. It came up with strategies on its own.
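To make that concrete, here is a minimal sketch of the underlying idea (this is not DeepMind's actual system, which used deep neural networks and Atari games; it's plain tabular Q-learning on a made-up toy "game" I'm using for illustration). Notice that the code only says how to update value estimates from reward. It never states the winning strategy, yet the agent ends up with one.

```python
import random

# Toy "game": a corridor of positions 0..4. Start at 0; reaching 4 wins.
# Nothing below encodes a winning strategy -- the agent discovers one
# purely from trial, error, and reward.
N_STATES = 5          # positions 0..4; position 4 ends the episode
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q[state][action_index] starts at zero -- the agent knows nothing at first.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise exploit what has been learned so far.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] >= Q[state][1] else 1
        next_state = max(0, min(N_STATES - 1, state + ACTIONS[a]))
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Nudge the value estimate toward reward plus discounted future value.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

# The learned policy -- "always step right" -- was never written anywhere above.
print([("left" if Q[s][0] > Q[s][1] else "right") for s in range(N_STATES - 1)])
```

That is the sense in which these systems go beyond "what they're told": the programmer specifies a learning rule and a reward signal, not the behavior itself.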
A GENERAL intelligence would be able to use its intelligence to solve any problem, not just those it was programmed to. AI gets better and better every year. Eventually, its general intelligence will exceed that of humans. You will almost certainly live long enough to see that day. No one who studies the subject thinks AGI will NEVER happen. Some just think it will be a century or more before it comes online. I think that's wishful/pessimistic thinking.
Fine, but what does this have to do with rights? Once the machine answers the questions we give it, it won't start asking questions of its own - unless we tell it to and provide instructions. There's nothing to which we can give rights.
A GENERAL artificial intelligence certainly will ask questions of its own. That's what the field of "machine learning" is all about. As far as rights go, that's an open question. Do you think an intelligence that is equal to humans deserves rights? I get that it's a computer, but there's no evidence there's anything "special" about the human brain. It's made out of atoms, just like a computer.
I have no idea if a machine that is smarter than humans in every way would be conscious. That's a hard question. I actually can't PROVE anyone or anything other than myself is conscious.
My calculator ate my homework!
In the future, your calculator might just eat you!