Robots & rights

florduh

Well-Known Member
I think this whole thread has it backwards. Within a century, and likely much sooner, the robots will be debating whether or not we humans should have rights.

Whether or not Artificial General Intelligence is "conscious" is a philosophical debate. It will be many times more intelligent than humans in every conceivable way.

I think we should start preparing now for a world where humans no longer represent the pinnacle of intelligence on this planet.
 

Seek

Apprentice Daydreamer
Intelligence and consciousness are two very different things. AI is getting intelligent and is already smarter than humans at specific, singled-out jobs.
But there's no way it's conscious; it's just a computer program like any other program. Photoshop, for example, is not conscious either. A classical computer is a completely deterministic machine with no emotions, no feelings, no qualia and no free will built into it, because we don't even know yet how to begin building something capable of that. The current most advanced AI chatbots are literally philosophical zombies. They are designed to act like humans, but there's no consciousness behind it; the program just picks answers from real conversations that it computes to best match the question asked. You could get the same answers by computing the program with a pencil and paper.
I believe everything in the universe is to some extent conscious, but true human-like consciousness only emerges from complex quantum-entangled systems that are non-deterministic. Classical computers are not like that at all; if they have a consciousness, it's probably just millions of primitive fragmented ones, like in a rock. And we are not discussing giving human rights to rocks. Maybe when we actually build a quantum computer, that might be the first shot at making something conscious, but until then it's pointless to give rights to something that can't feel or fear anything on any conceivable level. Even insects, and maybe even microbes, are surely more conscious than a computer.
 

florduh

Well-Known Member
Intelligence and consciousness are two very different things. AI is getting intelligent and is already smarter than humans at specific, singled-out jobs.
But there's no way it's conscious; it's just a computer program like any other program. Photoshop, for example, is not conscious either. A classical computer is a completely deterministic machine with no emotions, no feelings, no qualia and no free will built into it, because we don't even know yet how to begin building something capable of that. The current most advanced AI chatbots are literally philosophical zombies. They are designed to act like humans, but there's no consciousness behind it; the program just picks answers from real conversations that it computes to best match the question asked. You could get the same answers by computing the program with a pencil and paper.
I believe everything in the universe is to some extent conscious, but true human-like consciousness only emerges from complex quantum-entangled systems that are non-deterministic. Classical computers are not like that at all; if they have a consciousness, it's probably just millions of primitive fragmented ones, like in a rock. And we are not discussing giving human rights to rocks. Maybe when we actually build a quantum computer, that might be the first shot at making something conscious, but until then it's pointless to give rights to something that can't feel or fear anything on any conceivable level.

AI is not conscious YET. A superintelligent Artificial General Intelligence just may be. And it is coming, likely in just a few decades. There's no reason to assume consciousness is impossible for computers to achieve. Computers are made of atoms, just like your brain.

But it doesn't MATTER if the AGI is conscious, really. If it is thousands of times smarter than Einstein, it could still wreak havoc, even if it only works to better humanity. Imagine if 70% of human employment were rendered permanently obsolete. Our current system has no way to absorb that.
 

Seek

Apprentice Daydreamer
There is a reason to assume that: everything in computers is human-designed and completely deterministic.
We designed transistors to do the simple job of switching currents.
We designed CPUs to just do math.
Everything a computer does can also be worked out on paper, just by reading the code and writing down the numbers.
Take the source code of the most advanced AI that you assume has a consciousness, compute it on paper, and you will get the same answers on that paper as what you would get from the computer (of course it would take you years to do that, but it's possible).
Consciousness can't emerge in a computer just because it gets fast enough.
It's still doing the same thing that you can replicate with pen and paper, without any automatic machinery behind it.
Computers only do the things we designed them to do, and we didn't design them to have a consciousness at all.
To get a consciousness, you have to design something different than just a computer.
And quantum physics is the only part of physics that could possibly provide building blocks for consciousness.
The non-determinism of quantum events could yield free will.
And entanglement could solve the integration paradox.
I still have no idea where qualia comes from, but that could have its roots in quantum physics too.
That would mean the only way to build consciousness is to make a real quantum computer, and that is completely different from a classical computer.

I am not saying we can't make artificial consciousness; it's certainly possible for a physical object to have a consciousness, and brains are real examples of that. But a computer is not a brain. It works very differently, and it's not the proper tool for that. Computers can't do everything, and consciousness is one of the things they can't do. That doesn't mean we can't make an artificial consciousness, it just means we have to design something completely different than a computer to achieve it.

I am a programmer and I have studied computer engineering. I know what a computer is and I understand the inner workings of every part of a computer in great detail. There's just nothing even remotely suggesting that a computer could be any more conscious than a rock.
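To illustrate the "pen and paper" point being argued here, a minimal sketch of a neural network's forward pass; the weights and inputs are made up for the example, but the point is that it's nothing more than fixed multiplications and additions, so any person with paper and enough patience gets the same number as the machine:

```python
# A toy two-layer neural network forward pass: purely deterministic
# arithmetic. Weights below are invented for this illustration.

def forward(x, w1, w2):
    # hidden layer: weighted sums passed through a ReLU threshold
    hidden = [max(0.0, sum(wi * xi for wi, xi in zip(row, x))) for row in w1]
    # output layer: one more weighted sum
    return sum(wi * hi for wi, hi in zip(w2, hidden))

w1 = [[0.5, -1.0], [2.0, 0.25]]  # made-up hidden-layer weights
w2 = [1.0, -0.5]                 # made-up output weights

print(forward([1.0, 2.0], w1, w2))  # -1.25, by machine or by hand
```

A real model has billions of such weights rather than six, which changes the timescale of the hand computation but not its nature.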
 

grokit

well-worn member
This thread is getting interesting; very good points regarding "rights". This may lead to some issues for us humans down the line.
http://mattchessen.com/short-bytes-...f-humanity-but-not-for-the-reasons-you-think/
"Artificial Intelligence will facilitate the creation of artificial realities — custom virtual universes — that are so indistinguishable from reality, most human beings will choose to spend their lives in these virtual worlds rather than in the real world. People won't breed. Humanity will die off."

 

Marlon Rando

Well-Known Member
I think this whole thread has it backwards. Within a century, and likely much sooner, the robots will be debating whether or not we humans should have rights.

Whether or not Artificial General Intelligence is "conscious" is a philosophical debate. It will be many times more intelligent than humans in every conceivable way.

I think we should start preparing now for a world where humans no longer represent the pinnacle of intelligence on this planet.

AI will learn not to trust us; we will be too unpredictable for them, and they will wipe us out.
 

Seek

Apprentice Daydreamer
@Helios If it's going to be so smart, we could be very easily predictable to it. Even with our free will, it could just easily compute everything we could possibly do, all the choices we could possibly make, and plan accordingly for all of that. We would be no threat to it and it would have no reason to wipe us out; it could just put us into a human zoo designed to keep us happy and harmless, without any interference with any possible plan it might have. And even that would not be necessary; we could just live normally, and the AI would be doing its thing, prepared for any possible threat we could pose, without any need to kill us. We would be to that AI like animals are to us. And are we threatened by animals and do we need to wipe them out for our safety? Of course not; it's easy for us to protect ourselves from animals, just build a fence or something...
 

florduh

Well-Known Member
@Seek Have you studied machine learning and Artificial Intelligence?

I'm not saying computers are going to be superintelligent or self-aware tomorrow. But they absolutely will be superintelligent at some point. Put any timetable on it you want: 100 years, 1,000 years. But it will happen if we don't wipe ourselves out first.

Most AI researchers estimate we are no more than 100 years away from this, and other estimates put it much sooner.
 

Seek

Apprentice Daydreamer
@florduh Yes. I understand machine learning as well; it's just the tweaking of computer code by computer code. It's still just computer code. It starts with code that doesn't work, and the computer just randomly changes the instructions and keeps the changes if they make the answers on the testing data better.
We didn't write the source code of the neural network and we don't understand exactly how it works. But it's still just computer code, and everything I said applies to it as well. If you take that machine-created source code and start executing the instructions on paper, you would get the same smart answers that you would get if you ran it on a computer. The fact that it was not written or understood by humans doesn't make it any different from human-made code.
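The "randomly change it, keep it if the answers get better" loop described above can be sketched as random hill climbing. (Real neural-network training uses gradient descent rather than random edits, but the keep-if-better idea is the same, and the whole loop is still deterministic arithmetic given a fixed random seed. The data and target here are made up for the example.)

```python
# Minimal "tweak and keep if better" loop: fit a single parameter w
# to toy data where the true relationship is y = 2 * x.
import random

def loss(w, data):
    # how wrong the current parameter is on the test data
    return sum((w * x - y) ** 2 for x, y in data)

data = [(1, 2), (2, 4), (3, 6)]  # secretly y = 2 * x
random.seed(0)  # fixed seed: the whole run is reproducible

w = 0.0  # start with a parameter that doesn't work
for _ in range(1000):
    candidate = w + random.uniform(-0.1, 0.1)  # random tweak
    if loss(candidate, data) < loss(w, data):  # keep only if better
        w = candidate

print(w)  # ends up close to the true slope 2.0
```

With the seed fixed, running this twice (or tracing it on paper) gives the identical sequence of tweaks and the identical final value.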
 

florduh

Well-Known Member
@florduh Yes. I understand machine learning as well; it's just the tweaking of computer code by computer code. It's still just computer code. It starts with code that doesn't work, and the computer just randomly changes the instructions and keeps the changes if they make the answers on the testing data better.
We didn't write the source code of the neural network and we don't understand exactly how it works. But it's still just computer code, and everything I said applies to it as well. If you take that machine-created source code and start executing the instructions on paper, you would get the same smart answers that you would get if you ran it on a computer. The fact that it was not written or understood by humans doesn't make it any different from human-made code.

Everything you wrote applies to our CURRENT AI. Like I said, many AI experts are worrying out loud about what a superintelligent artificial GENERAL intelligence will mean for humanity. That's an intelligence that can apply its computing power to any problem, not just the ones we programmed into it.

We may not develop this technology for 100 years. But if an advanced superintelligent alien race told us they'd be landing in 100 years, wouldn't we be wise to prepare?
 

Seek

Apprentice Daydreamer
As I said, I'm only making the point that computers can't be conscious. Of course we can make consciousness, just not on a computer, not even with any kind of machine learning. We will have to build something completely different from a computer for that purpose. Because all a computer does is run code, and that can't possibly be conscious, because you don't need a computer for that; a computer only does it faster than you can with pen and paper.

And of course we can make a superintelligent AI, even on a classical computer as we know it. But an AI like that wouldn't be conscious; it would just be very smart, which is a different thing.
I also think it will be here much sooner than 100 years.
And of course we should prepare; I'm just not as pessimistic about it as most others.
A non-conscious AI can't be evil, only misguided. And we can guide it to be benign while we develop it (and we'll have plenty of time for that; decades are a lot of time) as long as it's not so much smarter than us. And when it becomes that smart, then we are no longer a threat to it, so it won't have any reason to go after us.
 

florduh

Well-Known Member
As I said, I'm only making the point that computers can't be conscious. Of course we can make consciousness, just not on a computer, not even with any kind of machine learning. We will have to build something completely different from a computer for that purpose. Because all a computer does is run code, and that can't possibly be conscious, because you don't need a computer for that; a computer only does it faster than you can with pen and paper.

And of course we can make a superintelligent AI, even on a classical computer as we know it. But an AI like that wouldn't be conscious; it would just be very smart, which is a different thing.
I also think it will be here much sooner than 100 years.

I suppose it is a question for philosophers then. If we develop a superintelligence that is thousands of times smarter than humans in every way, who is to say it wouldn't be conscious?

AI researchers call the worst case scenario "Disneyland Without Kids". This is where Superintelligent AI wipes out humanity, leaving itself as the dominant intelligence on Earth. Now if the AI is "conscious", we could "console" ourselves by saying at least our descendants are around to "experience" the universe. But if there's no "lights on" in the AI... well, consciousness has been extinguished in our corner of the galaxy.
 

Seek

Apprentice Daydreamer
I suppose it is a question for philosophers then. If we develop a superintelligence that is thousands of times smarter than humans in every way, who is to say it wouldn't be conscious?
If it's going to run on a computer like we have today, just a very fast one, then no, it's not going to be conscious, as it's still going to be just code.
Really brilliant code that can give very smart answers to stuff, but still just code.
If we develop something different that works more like a brain than a computer, then of course it could be.
And as I said in the edit, I don't believe it would wipe us out, because there's no reason to. If it were smart enough to recognize us as a threat, it would also be smart enough to neutralize that threat without killing us. It could just keep us off its lawn with a smart anti-human fence, and we couldn't hurt it even if we were still around.
If we are smart enough to go to Mars, then the AI could just as easily build its own rockets, colonize other planets, and leave us alone on Earth. So even wanting more space than we occupy wouldn't be a reason to wipe us out.
 

Marlon Rando

Well-Known Member
If it were smart enough to recognize us as a threat, it would also be smart enough to neutralize that threat without killing us. It could just keep us off its lawn with a smart anti-human fence, and we couldn't hurt it even if we were still around.
or keep us as useful pets.
 

florduh

Well-Known Member
If it's going to run on a computer like we have today, just a very fast one, then no, it's not going to be conscious, as it's still going to be just code.
Really brilliant code that can give very smart answers to stuff, but still just code.
If we develop something different that works more like a brain than a computer, then of course it could be.
And as I said in the edit, I don't believe it would wipe us out, because there's no reason to. If it were smart enough to recognize us as a threat, it would also be smart enough to neutralize that threat without killing us. It could just keep us off its lawn with a smart anti-human fence, and we couldn't hurt it even if we were still around.
If we are smart enough to go to Mars, then the AI could just as easily build its own rockets, colonize other planets, and leave us alone on Earth.

I don't think a superintelligence would wipe us out, honestly. But even if it has our best interests in mind, it could cause issues. If it renders 50% of the population permanently unemployable because automation can do the work faster and better, our economic systems would be destroyed.

And let's say the first superintelligence isn't conscious. Well, it could complete the equivalent of 20,000 years of human intellectual progress... in a week. If consciousness requires "quantum computing" or some other new technology... it wouldn't be long before the AI upgrades itself.

That having been said, it wouldn't require consciousness to permanently alter human civilization in good and bad ways.
 

Seek

Apprentice Daydreamer
Useful for what? If it is so much smarter than us, then we are of no use to it. And something so smart would be above something as pointless as having a pet. We would just be something that's there, neither useful nor threatening. So it would have no reason to do anything with us. There would be no point in killing us, and there would be no point in using us. It could just leave us alone to do our human things while it does its things and expands to other planets.

If it renders 50% of the population permanently unemployable because automation can do the work faster and better, our economic systems would be destroyed.
Automation could make society into a total utopia. Being unemployed wouldn't be a problem; if everything is automated, then all the work is still being done, and people could use the results for free. That's why Universal Basic Income should be a thing now. It's the perfect solution to this automation problem. It's only a problem because the super rich don't want to share the fruits of automation that make stuff for them for free.
If everything is automated, then people can still live and have everything they need. And they will not have to work. Food and products will be made automatically, for free, with no work needed. And we could just take the results of the automated work.
 

florduh

Well-Known Member
Useful for what? If it is so much smarter than us, then we are of no use to it. And something so smart would be above something as pointless as having a pet. We would just be something that's there, neither useful nor threatening. So it would have no reason to do anything with us. There would be no point in killing us, and there would be no point in using us. It could just leave us alone to do our human things while it does its things and expands to other planets.

No one is developing AGI just to make a new form of intelligence. The world has many problems our feeble monkey brains aren't up to solving. An AI 20,000 times smarter than Einstein could develop cures for diseases, develop new economic systems, new energy sources etc.

In AI research, people are working on the "Control Problem": how to ensure any AGI would be used for the good of humanity, and not become an unrestricted free agent on its own. Good luck to them.
 

Marlon Rando

Well-Known Member
@Seek Perhaps; the possibilities are beyond what I could possibly imagine. Shoot, if you'd asked me 20 years ago whether phones were gonna talk back and give me turn-by-turn directions, or whether my entire house would be IoT'd, I would have laughed it off and said yeah, good idea, but not likely. Silicon Valley has improved our way of life, and yet they make me nervous as to the direction they wish to lead us in this regard. Even the military is advancing significantly in this AI area, which should make us all wary. If I get to see a sentient AI in my lifetime, well...
 

florduh

Well-Known Member
Automation could make society into a total utopia. Being unemployed wouldn't be a problem; if everything is automated, then all the work is still being done, and people could use the results for free. That's why Universal Basic Income should be a thing now. It's the perfect solution to this automation problem. It's only a problem because the super rich don't want to share the fruits of automation that make stuff for them for free.
If everything is automated, then people can still live and have everything they need. And they will not have to work. Food and products will be made automatically, for free, with no work needed. And we could just take the results of the automated work.

I agree that living in an automated utopia, watched over by "machines of loving grace", is an awesome future. But our current economic systems simply have no way of implementing that. There would doubtless be some growing pains before we got to that hopeful future.

Which is why it's insane to me that none of our leaders are talking about this now. We might be a decade or two away from an AGI. Another issue is geopolitics. If China or Russia discovered that Google was about to develop an AGI... it would be perfectly rational for them to nuke Cali. The first country that develops AGI would have an extreme advantage the likes of which our world has never seen.
 

Seek

Apprentice Daydreamer
I agree there would need to be some major changes made for the future.
AGI could render governments and state borders obsolete.
And automation certainly isn't a bad thing, quite the opposite; we are only making it worse by resisting the change instead of embracing it.
We could already live in a utopia like that right now with the technology we currently have. GDPs are so high thanks to automation that we could totally afford UBI as a middle step toward a society where everything would be automated and free.
But corrupt politicians and the super rich would rather pocket the fruits of automation that they are getting for free, and only share them for money that we can't earn because everything is automated... and that pisses me off. The fruits of automation should not be bought for money; that just won't work.
 