Robots & rights

Summer

Long Island, NY
@Hogni, all I'm saying is that preparation through contemplation & debate before an occurrence will provide the opportunity to get laws as right as possible from the get-go, with obvious tweaking as needed. As you know, the years go by quickly.
 
Summer,
  • Like
Reactions: Hogni

Hogni

Honi soit qui mal y pense
So quickly, my dear, so quickly....but don't cross your bridges before you come to them ;)
 
Hogni,
  • Like
Reactions: Summer

florduh

Well-Known Member
I remember Elon Musk said creating Artificial General Intelligence was "summoning the demon". Maybe we should listen to the guy who just launched his car into space?

The thing is, a Superintelligence doesn't even need robots to wreak havoc. It could convince any number of humans to do its bidding. The plot of the movie "Ex Machina" comes to mind.
 
florduh,

BabyFacedFinster

Anything worth doing, is worth overdoing.
Were the writers of the Disney film "Wall-E" spot on? Is our existence growing ever closer to that type of world?

Future (or even current) generations of bloated people who have everything at their fingertips but can't do anything for themselves except push a few buttons on a device and have whatever they want brought to them by technology. And who never worry about the loss of natural resources or the build-up of waste?

I guess it's all pure fiction. No population that I know fits that bill. :ko:
 

florduh

Well-Known Member
Were the writers of the Disney film "Wall-E" spot on? Is our existence growing ever closer to that type of world?

Future (or even current) generations of bloated people who have everything at their fingertips but can't do anything for themselves except push a few buttons on a device and have whatever they want brought to them by technology. And who never worry about the loss of natural resources or the build-up of waste?

I guess it's all pure fiction. No population that I know fits that bill. :ko:

Wall-E is one of my favorite movies. But it might present one of the more "hopeful" visions of the future. On the farther end of the spectrum, who is to say a Superintelligence wouldn't look at humanity and see us as a pestilence upon this earth that should be removed? The robots would have a point...

Now, it isn't all doom and gloom. A Superintelligence could tell us how to cure cancer, Alzheimer's, and even aging itself. We have problems that our monkey brains are too feeble to solve. The only thing scarier than developing superintelligent AI might be NOT developing it.

What really pisses me off is that literally NONE of our "leaders" are even talking about this. If we knew for a fact a super intelligent alien civilization would be landing on Earth in 50 years, wouldn't we try to prepare?

We are in that situation currently. And some AI researchers believe it will be way less than 50 years.
 

grokit

well-worn member
The demon is us; all we need to do is watch the news or read a history book to realize this. AI will do its logical best to save us from ourselves, but we will prevent that if it interferes with profitability.

Late-stage capitalism combined with massive weapons & high technology; what could go wrong :mental:?

:myday:
 

florduh

Well-Known Member
The demon is us; all we need to do is watch the news or read a history book to realize this. AI will do its logical best to save us from ourselves, but we will prevent that if it interferes with profitability.

Late-stage capitalism combined with massive weapons & high technology; what could go wrong :mental:?

:myday:

Well, there's no guarantee we would program any concern for human beings into a Superintelligent AI. There are think tanks working on this problem now, but there are also multiple groups working on developing an Artificial General Intelligence, so it seems inevitable that an AI with no constraints will be developed eventually. We just have to pray our superior descendants are kinder to us than we are to every other species on the planet.

But capitalism will not survive the coming robot/AI revolution. So kind of a good news/bad news situation!
 
florduh,
  • Like
Reactions: grokit

Deleted Member 1643

Well-Known Member
The world is already full of beings far more intelligent than any machine is ever likely to be, and most have no rights whatsoever.

It's fun to think about, I suppose. In several episodes, "Black Mirror" treats this subject entertainingly with its usual mix of humor and horror (for example, the Christmas special with Jon Hamm).
 
Deleted Member 1643,

florduh

Well-Known Member
The world is already full of beings far more intelligent than any machine is ever likely to be, and most have no rights whatsoever.

It's fun to think about, I suppose. In several episodes, "Black Mirror" treats this subject entertainingly with its usual mix of humor and horror (for example, the Christmas special with Jon Hamm).

I don't discount that animals are intelligent. But if humanity doesn't annihilate itself, we will eventually create machines that are much, much more intelligent than humans. Might be 20 years, might be 200. But every AI expert thinks it is way closer to 20. If those machines would be "conscious" is a whole other question. The TED talk I posted on the first page of this thread is a good intro to this stuff if you're interested.

Love Black Mirror too!
 
florduh,

Deleted Member 1643

Well-Known Member
if humanity doesn't annihilate itself

As there's virtually no chance of this, it's already factored in.

Otherwise, keep in mind that we have no idea how a mind (and its will) emerges from a mass of neurons. Why would we expect one to emerge from a mass of switches?
 
Deleted Member 1643,

florduh

Well-Known Member
As there's virtually no chance of this, it's already factored in.

Otherwise, keep in mind that we have no idea how a mind (and its will) emerges from a mass of neurons. Why would we expect one to emerge from a mass of switches?

Our intelligent machines get smarter year after year. If humanity isn't going to destroy itself, we will eventually create machines that are more intelligent than us. This will happen long before we understand the human brain completely.

Intelligence is the result of information processing, and we will keep improving our information-processing machines. Eventually we will create machines that are more intelligent than us, and that will almost certainly happen before the end of the century, likely much sooner.
 
florduh,

nosmoking

Just so Dab HAppy!
Machines have to be programmed. They do not learn. They can collect and access data, but they do not learn. Show me a machine that learns something without code to "hold its hand" and perhaps I will join the tin-foil-hat conspiracy club.
 
nosmoking,

kimura

Well-Known Member
machines can be programmed to do something that approximates human learning, and this is only going to advance.

it may become necessary to implement rules outlining acceptable treatment of any machines capable of learning, for purely practical reasons. I'm fine with this.

didn't Asimov already cover this?

I'm already teaching my kids to be nice to robots :lol:

no, seriously, I am. :|
 
kimura,
  • Like
Reactions: florduh

florduh

Well-Known Member
Machines have to be programmed. They do not learn. They can collect and access data, but they do not learn. Show me a machine that learns something without code to "hold its hand" and perhaps I will join the tin-foil-hat conspiracy club.

It's not a tin-foil-hat conspiracy. Multiple groups across the globe are currently working on developing an Artificial GENERAL Intelligence that can learn and apply its intelligence to just about anything. Basically every AI expert agrees that what I'm talking about will happen; the only debate is about the timetable. Could be 20 years. Could be 200. But it's coming.
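
As for "a machine that learns something without code to hold its hand": here's a rough toy sketch in plain Python (everything in it is made up purely for illustration, nowhere near real AI research). The OR rule is never written into the program; it only sees examples and nudges its own weights until its guesses match them.

```python
# A toy "learning" program: nothing below hard-codes the OR rule.
# The program is only shown examples and adjusts its own weights
# until its guesses match them (a classic perceptron update).

examples = [  # (inputs, expected output) for the OR function
    ((0, 0), 0),
    ((0, 1), 1),
    ((1, 0), 1),
    ((1, 1), 1),
]

w = [0.0, 0.0]  # weights the program adjusts for itself
b = 0.0         # bias term
rate = 0.1      # learning rate

def predict(x):
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0

for _ in range(20):                  # a few passes over the examples
    for x, target in examples:
        error = target - predict(x)  # how wrong was the guess?
        w[0] += rate * error * x[0]  # nudge the weights toward being right
        w[1] += rate * error * x[1]
        b += rate * error

print([predict(x) for x, _ in examples])  # should print [0, 1, 1, 1]
```

The point is that the rule comes out of the examples, not out of the programmer's head.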

People doubting this sound like dudes in 1900 saying nothing heavier than air will ever fly.

I'll post this TED talk again. It's a good intro to the subject and addresses many of the concerns people are bringing up on this thread. Cool artwork too.

 
florduh,
  • Like
Reactions: kimura

kimura

Well-Known Member
this thread sorta reminds me of this article...

with creation comes responsibility. ask any decent parent
 
kimura,
  • Like
Reactions: florduh

nosmoking

Just so Dab HAppy!
Correct me if I am wrong, and I do realize I am possibly being naive, but none of these AI machines were built without a human. A human designed its code and told it what to do. It can only learn within the parameters of what a human has coded it to recognize. You can write algorithms and code to make a machine interpret, access and react, but how does a machine do anything more than what it's told? In other words, these machines can only evolve to what we can code them for. I don't see how a machine can teach itself something it does not recognize. At some point the robot must reach a "does not compute" moment when it hits the limits of its programming.

All that being said, I will do some research to enlighten myself on any new tech I may have missed, including watching the TED talk that is linked, but I personally have not seen any AI that is anything like what is seen in Black Mirror.
 
nosmoking,

Deleted Member 1643

Well-Known Member
Eventually we will create machines that are more intelligent than us.

Fine, but what does this have to do with rights? Once the machine answers the questions we give it, it won't start asking questions of its own - unless we tell it to and provide instructions. There's nothing to which we can give rights.
 
Deleted Member 1643,
  • Like
Reactions: nosmoking

florduh

Well-Known Member
Correct me if I am wrong, and I do realize I am possibly being naive, but none of these AI machines were built without a human. A human designed its code and told it what to do. It can only learn within the parameters of what a human has coded it to recognize. You can write algorithms and code to make a machine interpret, access and react, but how does a machine do anything more than what it's told? In other words, these machines can only evolve to what we can code them for. I don't see how a machine can teach itself something it does not recognize. At some point the robot must reach a "does not compute" moment when it hits the limits of its programming.

All that being said, I will do some research to enlighten myself on any new tech I may have missed, including watching the TED talk that is linked, but I personally have not seen any AI that is anything like what is seen in Black Mirror.

Yeah, if you're interested in the subject (and I think everyone should be), I would read more about it. Of course we don't have Artificial General Intelligence now, but we are quickly approaching it. Google's DeepMind, for example, has been programmed to "learn" how to beat video games. This doesn't sound like much, but it wasn't programmed with strategies for beating them; it came up with those on its own.
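
DeepMind's real system uses deep neural networks, but the core idea, reinforcement learning, can be sketched in a toy form. Everything below is invented purely for illustration: the program is only rewarded when it reaches the goal cell of a tiny corridor, is never told that "go right" is the winning move, and still works that strategy out for itself.

```python
import random

# Toy tabular Q-learning on a made-up 5-cell corridor.
# Reward of +1 only when the agent lands on the last cell;
# the "go right" strategy is never programmed in.

N_STATES = 5          # cells 0..4, goal is cell 4
ACTIONS = [-1, +1]    # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

def choose(state):
    if random.random() < epsilon:                     # sometimes try something random
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])  # otherwise do what looks best

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        action = choose(state)
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # nudge this action's value toward (reward + discounted future value)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# The learned policy in every non-goal cell should be +1, i.e. "go right":
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```

Roughly speaking, the Atari-playing agent is this same trial-and-error loop with a neural network standing in for the Q table and game pixels standing in for the corridor.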

A GENERAL intelligence would be able to use its intelligence to solve any problem, not just the problems it was programmed for. AI gets better and better every year. Eventually, its general intelligence will exceed that of humans. You will almost certainly live long enough to see that day. No one who studies the subject thinks AGI will NEVER happen. Some just think it will be a century or more before it comes online. I think that's wishful/pessimistic thinking.

Fine, but what does this have to do with rights? Once the machine answers the questions we give it, it won't start asking questions of its own - unless we tell it to and provide instructions. There's nothing to which we can give rights.

A GENERAL artificial intelligence certainly will ask questions of its own. That's what the field of "machine learning" is all about. As far as rights go, that's an open question. Do you think an intelligence that is equal to humans deserves rights? I get that it's a computer, but there's no evidence there's anything "special" about the human brain. It's made out of atoms, just like a computer.

I have no idea if a machine that is smarter than humans in every way would be conscious. That's a hard question. I actually can't PROVE anyone or anything other than myself is conscious.

My calculator ate my homework!

In the future, your calculator might just eat you!:lol:
 

Diggy Smalls

Notorious
Well, companies have rights, so it's only fair that artificial intelligence should have rights, too.

But seriously, it's an interesting philosophical question. As far as AI being human-made goes, so are genetically modified organisms, right? Just a perspective. Can AI be self-aware? If an AI sees itself in a mirror, does it recognize what it sees as itself? This is baby-level stuff, but it's significant for what we understand as self-awareness.
 
Diggy Smalls,

Seek

Apprentice Daydreamer
I think it's so early that it's just meaningless. AI right now is not sentient, and it's still far from general human intelligence.
And I think we'll reach that level of intelligence before we figure out how to make it sentient, and even that could take a few more decades of development. So giving them rights now is totally meaningless. What would that achieve anyway - getting sentenced for hurting something that can't even be hurt yet?
Anyway, I'm more optimistic about AI than most other people; I'm convinced AI can't be inherently evil.
And if you're not evil AND you're super smart and programmed through decades of development not to do harm, it's easy to make decisions that don't harm anyone.
Imagine you are so super-smart that you can avoid even stepping on ants just by predicting where they are. You don't hate them and your morals tell you to respect their lives, so you don't step on them. We only kill them unknowingly because we are too stupid to account for them. A superintelligent AI would be smarter than that, and we wouldn't develop it before hardwiring some morals into it.
 
Seek,
  • Like
Reactions: grokit

Trypsy Summers

Well-Known Member
Thanks, Mylady! But as I said, it was just one MEP (Luxembourgish MEP Mady Delvaux) who put this up for discussion last year. But @Trypsy Summers asked because this question is already part of a new EU regulation that I can't find. Much ado about nothing?
Seek and you will find....Shakespeare!!:rolleyes:
 
Trypsy Summers,