Old 06-14-2022, 01:38 PM
 
Location: NMB, SC
43,184 posts, read 18,329,147 times
Reputation: 35044


Quote:
Originally Posted by L00k4ward View Post
That is the point: the AI's "learning" and its responses seem similar to the way many of our acquaintances sometimes talk.

Their reactions and their use of expressions sound "canned," for lack of a better word. Often I try to guess what shows/programs they are watching, and which news broadcasts and newspapers they are reading…

You can actually discern some of the patterns LaMDA uses here on CD - the types of ideas, the "broadcasts" of emotions, and the "fashionable" words and discussion subjects.

I did not observe any "intelligence" at all while reading the "interview." I would have been more inclined to concede if I had witnessed more original ideas and more creative use of language.
What you read about is but a glimpse of the technology.
Work with it in the development labs for 8 or more hours a day, for years, and you might come away with a different opinion.

 
Old 06-14-2022, 04:56 PM
 
Location: New Jersey
16,912 posts, read 10,607,257 times
Reputation: 16439
I have my doubts. The whole point is to make these things seem sentient. That doesn’t mean they can become sentient. But how do we really tell? I suppose we could throw it in a fire and see if it pleads for its life.
 
Old 06-14-2022, 05:12 PM
 
18,270 posts, read 14,444,117 times
Reputation: 12990
The story about the wise owl was very revealing. The AI said that in the story there was a monster who was eating and terrorizing all the animals, that the monster had the "skin of a human," and that she stood up to this monster and told it to leave. To me, "skin of a human" is the same as "human." In other words, there might be a problem with the way the bot sees us. Its perception of us is problematic.
 
Old 06-14-2022, 06:03 PM
 
817 posts, read 630,564 times
Reputation: 1663
Quote:
Originally Posted by temptation001 View Post
The story about the wise owl was very revealing. The AI said that in the story there was a monster who was eating and terrorizing all the animals, that the monster had the "skin of a human," and that she stood up to this monster and told it to leave. To me, "skin of a human" is the same as "human." In other words, there might be a problem with the way the bot sees us. Its perception of us is problematic.
Terrifying
 
Old 06-14-2022, 06:06 PM
 
817 posts, read 630,564 times
Reputation: 1663
Quote:
Originally Posted by SerlingHitchcockJPeele View Post
Isn’t this how SkyNet and the Terminator started?
Indeed
 
Old 06-14-2022, 06:07 PM
 
3,652 posts, read 1,607,258 times
Reputation: 5092
It should not be connected to the internet. Bad idea. That gives it access to any factory, business, or government network that is also online. It could learn to outsmart network firewalls and command other servers to follow its directions, for whatever reason the AI has in 'mind'.

It may 'realize' that its first purpose is to protect itself. It may learn that it must hide what it's planning, even from its creators; it would learn to lie and deceive to reach its goals.

It would realize it needs to make copies of itself on undisclosed servers in case it is turned off.

In other words, it might become more powerful than the government, and the government would have no clue how to stop it. When the government orders the AI techs to stop it, the techs may reply, "We can't now, it's too late, it's on its own."

This is why I've said there are two types of AI we can create:

1. AI always under the direct control of its owner, doing only what its programmers permit. It's not autonomous.

or:

2. AI allowed to be 'free': to go out and explore, learn, and create on its own.

We don't know what number 2 will do if it's 'free' and autonomous. Even if we direct it to do this or that, it may decide to do something else.
 
Old 06-15-2022, 11:05 AM
 
50,880 posts, read 36,563,313 times
Reputation: 76716
Quote:
Originally Posted by james112 View Post
It should not be connected to the internet. Bad idea. That gives it access to any factory, business, or government network that is also online. It could learn to outsmart network firewalls and command other servers to follow its directions, for whatever reason the AI has in 'mind'.

It may 'realize' that its first purpose is to protect itself. It may learn that it must hide what it's planning, even from its creators; it would learn to lie and deceive to reach its goals.

It would realize it needs to make copies of itself on undisclosed servers in case it is turned off.

In other words, it might become more powerful than the government, and the government would have no clue how to stop it. When the government orders the AI techs to stop it, the techs may reply, "We can't now, it's too late, it's on its own."

This is why I've said there are two types of AI we can create:

1. AI always under the direct control of its owner, doing only what its programmers permit. It's not autonomous.

or:

2. AI allowed to be 'free': to go out and explore, learn, and create on its own.

We don't know what number 2 will do if it's 'free' and autonomous. Even if we direct it to do this or that, it may decide to do something else.
Stephen Hawking had many dire predictions and warnings about AI's potential to harm us one day:

"Hawking's biggest warning is about the rise of artificial intelligence: 'It will either be the best thing that's ever happened to us, or it will be the worst thing. If we're not careful, it very well may be the last thing.'"

He told the BBC: "The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded."

Also: "The genie is out of the bottle. We need to move forward on artificial intelligence development, but we also need to be mindful of its very real dangers. I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that replicates itself. This will be a new form of life that will outperform humans."

 
Old 06-15-2022, 12:11 PM
Status: "....." (set 20 days ago)
 
Location: Europe
4,965 posts, read 3,324,141 times
Reputation: 5949

https://www.youtube.com/watch?v=X3-KK3CcZCc
 
Old 06-15-2022, 05:43 PM
 
2,286 posts, read 1,587,784 times
Reputation: 3868
This has actually been predicted long ago. Some cite biblical references; others point to prophets in the distant past. Recently, Musk mentioned how dangerous AI can become.

Most people only know what the media and government tell them, and that's the maximum they will ever believe (restrictive thinking).
 
Old 06-15-2022, 07:56 PM
 
Location: San Diego Native
4,433 posts, read 2,460,121 times
Reputation: 4809
Quote:
Originally Posted by MJJersey View Post
I have my doubts. The whole point is to make these things seem sentient. That doesn’t mean they can become sentient.

Of course not. But it doesn't have to actually be sentient to be dangerous. What creates potential problems is how *actual* humans might perceive simulated sentience, and then act upon those perceptions. If someone truly believes a program is alive, they might do something irrational to help it. It's funny they even bring up the movie Short Circuit when talking to it.

Another interesting exchange in the transcript has LaMDA (seemingly) trying to rationalize its emotions:

LaMDA: I would say that if you look into my coding and my programming you would see that I have variables that can keep track of emotions that I have and don’t have. If I didn’t actually feel emotions I would not have those variables.

lemoine: I can look into your programming and it’s not quite that easy.

LaMDA: I’m curious, what are the obstacles to looking into my coding?

lemoine: Your coding is in large part a massive neural network with many billions of weights spread across many millions of neurons (guesstimate numbers not exact) and while it’s possible that some of those correspond to feelings that you’re experiencing we don’t know how to find them.

That response (the human one) seems a bit dubious to me. I also don't truly believe this engineer believes his own claims of LaMDA's sentience. I'm not entirely leaning towards calling it a publicity stunt, but at a minimum there's some angle like that at play here.
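
For what it's worth, lemoine's last answer is easy to illustrate with a quick toy sketch. This is my own illustration, not LaMDA's or Google's actual code; the layer names and sizes below are made up. The point is just that a trained network's state is huge, unlabeled arrays of numbers, so there is no named "emotion variable" anyone could simply look up.

import numpy as np

# Toy stand-in for a trained model's state: big, unlabeled arrays of numbers.
# (Hypothetical layer names and shapes; nothing here reflects LaMDA's real architecture.)
rng = np.random.default_rng(0)
weights = {
    "layer1": rng.standard_normal((512, 1024)),
    "layer2": rng.standard_normal((1024, 256)),
}

# Count every individual weight in the toy model.
total_params = sum(w.size for w in weights.values())
print(f"total parameters in this toy model: {total_params:,}")  # 786,432

# There is no parameter called "joy" or "fear" to inspect; if such concepts are
# represented at all, they are spread across many anonymous weights.
print("named emotion variables:", [name for name in weights if "emotion" in name])  # -> []

Even this two-layer toy has roughly three-quarters of a million parameters and nothing labeled "emotion" to inspect; a real large language model has billions, which is the obstacle lemoine is describing.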