
Artificial Intelligence....

lwien

Well-Known Member
First off, for those of you who are Jon Stewart fans, you HAVE to tune into "Last Week Tonight with John Oliver". Simply brilliant.

Anyway, he had Stephen Hawking on as a guest. The subject of artificial intelligence came up. Hawking said, "Artificial intelligence could be a real danger in the not-too-distant future. It could design improvements to itself and outsmart us all."

John Oliver made some funny comments and then Hawking said this: "There is a story that scientists built an intelligent computer. The first question they asked it was, 'Is there a God?' The computer replied, 'There is now!'" .............. To which Oliver exclaimed, "HOLY SHIT!"

Great show. It's on HBO.

Edit: To the mods. If this post is more appropriate in the "TV Shows" thread, please feel free to move it.
 

lwien

Well-Known Member
Hawking is the man! That's both hilarious and scary to think about. Personally, I have seen the Terminator movies enough times to be VERY frightened of Skynet. :ninja:

Ya know, even though he is completely paralyzed, he can still manage a faint smile with a twinkle in his eyes and he does it at the most opportune times. He's got a great sense of humor.

I agree. He's the man!!
 

syrupy

Authorized Buyer
I'm not so sure about the Skynet, god-complex type of AI that seeks to control the universe and save man from himself. What concerns me is that any functional AI would be extremely valuable, and would need a number of safeguards and self-protection systems. The minute an AI misidentifies or overestimates a threat, there will be big trouble.

[Image: wargames.jpg]
 

lwien

Well-Known Member
So when do ya think we'll have computers smart enough to write their own code to improve themselves and then improve themselves again, then again, then..................
 

crawdad

floatin
Once you achieve consciousness, you can do more than alter your code: you can invent it, or even do without it. However, in order for us to program that, we might need to understand it better ourselves first. I don't think cyclically altering/improving existing code is a basis for establishing something new such as consciousness, only better algorithms. I've yet to see a computer do anything other than what it's been told, including errors. If only the code were based on/in organic structures, somehow.
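To put that altering-vs-inventing distinction in concrete terms, here is a minimal sketch (Python, entirely my own illustration, not something from the thread) of a cyclic alter-and-improve loop: the program keeps mutating its one parameter and keeping whatever scores better, so it genuinely improves itself, yet it can never step outside the task it was told to optimize.

```python
import random

# A toy "self-improving" program: each cycle it mutates its one
# parameter and keeps the mutation only if the score improves.
# It gets better at the single task it was given, but it never
# invents a new goal, which is the point above.

def score(x):
    # The fixed task: get as close to 42 as possible.
    return -abs(x - 42)

x = 0.0
for generation in range(1000):
    candidate = x + random.uniform(-1.0, 1.0)  # alter the "code" (a parameter)
    if score(candidate) > score(x):            # keep only improvements
        x = candidate

print(f"After 1000 generations: x = {x:.3f}")  # converges toward 42
```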
 

tuk

Well-Known Member
So when do ya think we'll have computers smart enough to write their own code to improve themselves and then improve themselves again, then again, then..................
I think a better question might be: when do we think humans will be smart enough to code programs that can learn & self-update in response to an input/event unforeseen by the originating human coder?
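For what it's worth, the nearest existing thing to that is online learning: a program that adjusts its own parameters when an input surprises it. A minimal sketch (Python, all names my own hypothetical inventions), which also shows the limitation: it adapts to the unforeseen event, but only along the one axis its coder gave it.

```python
# A minimal online learner: it self-updates whenever an input
# surprises it, without the coder anticipating the specific event.

class OnlineMean:
    """Tracks a running estimate and nudges it toward each new input."""

    def __init__(self, learning_rate=0.2):
        self.estimate = 0.0
        self.lr = learning_rate

    def observe(self, value):
        surprise = value - self.estimate      # how wrong were we?
        self.estimate += self.lr * surprise   # self-update toward the data
        return surprise

model = OnlineMean()
# The jump from ~10 to ~50 is the "unforeseen event"; the model
# adapts to it, but only in the one way it was built to adapt.
for reading in [10, 12, 11, 50, 48, 51]:
    err = model.observe(reading)
    print(f"reading={reading:>2}  estimate={model.estimate:6.2f}  surprise={err:+7.2f}")
```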

In order to understand how many light years we are away from this scenario, we must first understand that it took 4.5 billion years of evolutionary development to make human intelligence possible. Take some time to ponder all that has occurred between the magic pools of primordial soup & the ability to dispatch a telepathic metal eye to wander the surface of Mars.

In order to somehow bypass this 4.5 billion year development overhead*, we would need to completely understand all the mechanisms of the human brain in order to replicate them in code. I can't remember who said it, but it was something along the lines of 'humans will understand the whole universe before they will understand the human brain'. Roger Penrose is the closest I can find:

"Consider the human brain," says physicist Sir Roger Penrose. "If you look at the entire physical cosmos, our brains are a tiny, tiny part of it. But they're the most perfectly organized part. Compared to the complexity of a brain, a galaxy is just an inert lump."

Personally, I don't think there is enough planet left to trash, so humans will never get smart enough (in the remaining time left) to write authentic AI code.

I didn't see the program, but it doesn't sound like I agree with Stevo!

*At least 4.5 billion years, & only if we can write meaningful code as quickly as Mother Nature did!
 

lwien

Well-Known Member
I didn't see the program, but it doesn't sound like I agree with Stevo!

Hmmmm.......

Hawking.............tuk......
tuk.................Hawking.......

With all due respect, I think I'll go with Hawking. ;)
 

tuk

Well-Known Member
Hmmmm.......

Hawking.............tuk......
tuk.................Hawking.......

With all due respect, I think I'll go with Hawking. ;)

Fame-based decision < content-based decision = ;)

You heard what Hawking had to say on the subject... how would he answer my previous points?
 

syrupy

Authorized Buyer
Who said AI has anything to do with how the brain physically works? CPUs aren't based on human brains, so why should AI be?

Obviously the 4.5 billion years of humans inventing fire and lighting their farts is dwarfed by the advances in computing in just the past 20 years. I would expect a semi-functional AI-driven drone within most of our lifetimes.
 

tuk

Well-Known Member
Who said AI has anything to do with how the brain physically works? CPUs aren't based on human brains, so why should AI be?

Not only is intelligence a fundamental product of the brain, it can only be produced by a brain... that's why!

CPUs aren't based on human brains
A CPU is only a single element of a computer... but computers are designed to process information from the human brain. Computers try to emulate the brain as much as possible, but can only manage a very primitive, predictable imitation.

Obviously the 4.5 billion years of humans inventing fire and lighting their farts is dwarfed by the advances in computing in just the past 20 years.
I think some night classes might be in order.
 

syrupy

Authorized Buyer
Intelligence is a fundamental product of the brain... that's why!

I don't necessarily agree with that assumption. I'd have to hear how you define intelligence. To me, the brain is a mechanism, but a mechanism doesn't imply intelligence. I see proof of this on the freeways daily.

I should jump off this topic before it turns into a discussion about consciousness and cryogenics.

[Image: Futurama Richard Nixon]
 

tuk

Well-Known Member
I don't know shit about this stuff, which is why I defer to those that do.
That's a dangerous road to go down. I'm sure Hawking knows his stuff about dark matter etc., but he knows shit about computers if he thinks authentic AI is just around the corner. Hawking does talk shit from time to time; not so long ago he was saying the answer to the environmental issues was to find another planet....
 

tuk

Well-Known Member
How so?

Intelligence has been defined in many different ways, such as in terms of one's capacity for logic, abstract thought, understanding, self-awareness, communication, learning, emotional knowledge, memory, planning, creativity and problem solving.
 

syrupy

Authorized Buyer

A tree knows which direction to grow and how to find water beneath it. It knows how to withdraw its energy in winter and grow in the spring. How many humans know how to survive from birth?

@lwien the AI I envision isn't some all-knowing and controlling being. It will be very tightly controlled at first, on a small level. The last thing the governments of the world want is something running around changing all the power structures.

Can you imagine what an AI system could do taking control of the financial markets?
 

syrupy

Authorized Buyer
How so?

Intelligence has been defined in many different ways such as in terms of one's capacity for logic, abstract thought, understanding, self-awareness, communication, learning, emotional knowledge, memory, planning, creativity and problem solving.

Sorry, I must have missed the italics in the edit. There is a logical issue I have with those definitions of intelligence. Let me explain by analogy.

We have this phenomenon we're studying, looking for intelligence. Let's call it LifeForm #1. It might be a human, an animal, or a single-celled creature. Assuming we can communicate with LifeForm #1, do we do either of the following?

1. Test and observe the being, applying the standards and definitions for intelligence posted above?
2. Ask the being what it thinks intelligence is, and form conclusions from that?

I think we can agree that method #1 is superior, and that asking a test subject to define its own success is unhelpful. But #2 is just what we've done here. We, humans, as the test subjects, have defined the criteria of the experiment. If we let the tree define success, we get the ability to grow, find water, and bear fruit. No logic valued or needed for trees; that's a human trait. Ultimately I find this human-centric definition of success to be the same kind of thinking that's led humans to conclude we're the masters of this world.

Maybe we need a more inclusive definition of intelligence, one that includes people, animals, and AI forms. Maybe AI should tell humans what success should be for humans.

@lwien, wouldn't it be a trip if that bad, scary AI came in and fixed everything? I don't think people really want that...
 