Old 08-06-2023, 07:26 PM
 
15,590 posts, read 15,702,343 times
Reputation: 22004

There are a bunch of aspects in here that I hadn't considered, from science/tech writer Jon Gertner.




Wikipedia’s Moment of Truth
Can the online encyclopedia help teach A.I. chatbots to get their facts right — without destroying itself in the process?

The goal of Wikipedia, as its co-founder Jimmy Wales described it in 2004, was to create “a world in which every single person on the planet is given free access to the sum of all human knowledge.” The following year, Wales also stated, “We help the internet not suck.” Wikipedia now has versions in 334 languages and a total of more than 61 million articles. It consistently ranks among the world’s 10 most-visited websites yet is alone among that select group (whose usual leaders are Google, YouTube and Facebook) in eschewing the profit motive.
Despite its stodgy appearance, Wikipedia is more tech-savvy than casual users might assume.
“The ability to generate an answer has fundamentally shifted,” he says, noting that in a ChatGPT answer there is “literally no citation, and no grounding in the literature as to where that information came from.”
Almost certainly, that makes A.I. both more difficult to contend with and potentially more harmful, at least from Wikipedia’s perspective.
https://www.nytimes.com/2023/07/18/m...i-chatgpt.html

 
Old 08-08-2023, 11:26 AM
 
Location: Middle America
11,133 posts, read 7,195,916 times
Reputation: 17034
Quote:
Originally Posted by Cida View Post
The goal of Wikipedia, as its co-founder Jimmy Wales described it in 2004, was to create “a world in which every single person on the planet is given free access to the sum of all human knowledge.”
Well, that's laughable if he said that. Much of human knowledge is internal and can't be put into print. Think of your life experiences and memories, which are certainly 'knowledge' to you, and yet you couldn't put them into print and sufficiently convey the details. There will always be that realm of knowledge that the brain gets and knows, but that won't or can't be shared with others.

Sounds like Jimmy has a bit of a God-complex as to his abilities.

Last edited by Thoreau424; 08-08-2023 at 11:42 AM..
 
Old 08-08-2023, 12:43 PM
 
3,230 posts, read 1,612,458 times
Reputation: 2890
The article makes a good point: these large language models really have no sense of what is true or false.

They are statistical sentence-completion savants, with no grounding in reality or in what is true or false.
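
To make that concrete, here is a toy sketch in Python of what "statistical sentence completion" amounts to. The probability table below is invented purely for illustration and isn't taken from any real model; the point is that the loop only ever picks a likely-looking next word, and nothing in it checks whether the sentence it produces is true.

[code]
import random

# Hand-invented "probabilities" for which word follows which; not from any real model.
next_token_probs = {
    "wikipedia": {"was": 0.5, "is": 0.5},
    "was": {"founded": 0.6, "launched": 0.4},
    "is": {"free.": 1.0},
    "founded": {"in": 1.0},
    "launched": {"in": 1.0},
    "in": {"2001.": 0.5, "1999.": 0.5},  # the wrong year is just as "sayable" as the right one
}

def complete(prompt_word, max_tokens=5):
    """Pick each next word by probability alone; nothing here models truth."""
    out = [prompt_word]
    word = prompt_word.lower()
    for _ in range(max_tokens):
        dist = next_token_probs.get(word)
        if not dist:
            break
        word = random.choices(list(dist), weights=list(dist.values()))[0]
        out.append(word)
    return " ".join(out)

print(complete("Wikipedia"))  # e.g. "Wikipedia was founded in 1999." (fluent, confident, possibly false)
[/code]

Real models do this over billions of learned weights instead of a little dictionary, but the completion step is the same kind of thing: pick a plausible continuation, not a verified fact.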
 
Old 08-10-2023, 05:53 AM
 
Location: Fortaleza, Northeast of Brazil
3,996 posts, read 6,815,032 times
Reputation: 2496
Quote:
Originally Posted by Ken_N View Post
The article makes a good point: these large language models really have no sense of what is true or false.

They are statistical sentence-completion savants, with no grounding in reality or in what is true or false.

When ChatGPT doesn't know the answer to a question, it simply invents one, full of false information.

Wikipedia explains what AI hallucination is:

https://en.wikipedia.org/wiki/Halluc..._intelligence)

ChatGPT is not humble enough to say: "Sorry, I don't know".
 
Old 08-10-2023, 07:01 AM
 
3,230 posts, read 1,612,458 times
Reputation: 2890
Yes, it hallucinates a sequence of tokens, and we hallucinate meaning.
 
Old 08-10-2023, 12:16 PM
 
5,527 posts, read 3,263,616 times
Reputation: 7764
ChatGPT needs access to some sort of knowledge base to critique its responses before it prints them out. I'm thinking of something like the master-and-emissary model of hemispheric brain function. Even if that model is inaccurate, it might still work; neural networks are based on a flawed understanding of neuroscience, but they still work.

Combine the master (knowledge base) with the emissary (ChatGPT) in some sort of GAN arrangement.
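
For what it's worth, here is a rough Python sketch of that arrangement: an "emissary" drafts an answer, a "master" knowledge base is consulted, and a crude critic only lets the draft out if it overlaps with the retrieved evidence; otherwise the system says it doesn't know. Every function and the little knowledge base here are made-up stand-ins rather than any real API, and it's a simple draft-and-check loop rather than a true GAN, but it shows the shape of the idea.

[code]
# Stand-in knowledge base; a real system might search Wikipedia or a vector index.
KNOWLEDGE_BASE = {
    "wikipedia launch": "Wikipedia was launched in January 2001 by Jimmy Wales and Larry Sanger.",
}

def emissary_draft(question):
    # Stand-in for the chat-model call that would actually draft an answer.
    return "Wikipedia was launched in January 2001."

def master_lookup(question):
    # Stand-in retrieval: return stored passages that share a word with the question.
    q_words = set(question.lower().split())
    return [text for key, text in KNOWLEDGE_BASE.items() if q_words & set(key.split())]

def supported(draft, passages):
    # Crude critic: demand meaningful word overlap between the draft and the evidence.
    draft_words = set(draft.lower().split())
    return any(len(draft_words & set(p.lower().split())) >= 4 for p in passages)

def answer(question):
    passages = master_lookup(question)
    draft = emissary_draft(question)
    if supported(draft, passages):
        return draft
    return "Sorry, I don't know."  # refuse rather than hallucinate

print(answer("When was Wikipedia launched?"))
[/code]

A real version would swap in an actual model call for the emissary, real retrieval for the master, and probably another model pass for the critic instead of word overlap, but the "check against a knowledge base before printing" step is the core of it.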