Welcome to City-Data.com Forum!
Go Back   City-Data Forum > General Forums > Great Debates
Old 04-25-2024, 03:18 PM
 
Location: Tricity, PL
61,924 posts, read 87,533,958 times
Reputation: 131963


Is the technology already getting out of hand?

https://www.odditycentral.com/videos...echnology.html

This video recently went viral for showing how easy it is to use AI-powered deepfake technology to transform into virtually anyone online.
The technology makes it almost impossible to tell what is real on the internet.
Take a look at this video, presented by a software engineer, in which he demonstrates how easy it is for him to pass as an attractive young girl. All he has to do is put on a wig and enable a piece of software, and the AI takes care of the rest.
It replaces the man's face with that of an attractive girl but otherwise mimics all of his actions, including eating with chopsticks, pinching his own face, and speaking with the appropriate mouth movements.
It's very impressive, and it opens the door to all kinds of manipulation on a grand scale, not only for entertainment purposes.

The AI technology is very impressive, but also terrifying, because it shows that AI is no longer limited to manipulating still images.

While we all know that many influencers use computer programs to enhance their body image
https://bigvu.tv/fr/blog/video-face-...the-top-5-apps
this AI technology takes it to a completely different level.

 
Old 04-26-2024, 06:46 AM
 
Location: Northern Virginia
6,843 posts, read 4,300,309 times
Reputation: 18736
Pretty much anything can be faked today to a pretty high standard. There are videos out there of Hitler singing Miley Cyrus songs that are pretty passable. While that's an obvious joke, an interested party could very easily create audio of you saying whatever they want you to be heard saying, or video of you doing whatever they want you to be seen doing. If your dad just died, someone who wants a piece of that estate could generate a recording of you admitting to having killed him. If you just had a bitter break-up and that person wants to get back at you, they could generate video of you performing grotesque sexual acts and post it online. While some of that might not withstand close expert scrutiny, it would be difficult for most people to detect the fake at face value. It's not even *especially* difficult.
 
Old 04-26-2024, 09:19 AM
 
Location: Taos NM
5,368 posts, read 5,168,366 times
Reputation: 6816
It certainly is eroding the ability of anything digital to serve as verifiable evidence. Likewise, even if something is fake, it still leaves a mental impression: even when someone knows it's not true, it sticks (there's a psychological name for that). All of this is bound to have negative repercussions, and the guardrails aren't going to catch it all.

There are certainly beneficial uses for AI, but the weirdest thing about it is that the upsides in productivity and wellbeing aren't obviously apparent while the downsides are. Going from brochure to reality isn't as seamless as advertised. That's why I really wouldn't bet too much on AI: a couple of bad personal encounters and people get turned off; a couple of large-scale problems and it gets locked down by regulation. It's so easy for a negative event to transpire here. Investing-wise, it has practically taken over the US stock market on hype, but that's for another forum.

Thing is, it's out of the bag now. The models and code are out there, so we can't revert to 2019. What this means is that the digital sphere has in essence become its own beast, quite detached from the physical world.
 
Old 04-26-2024, 09:59 AM
 
Location: Northern Virginia
6,843 posts, read 4,300,309 times
Reputation: 18736
Quote:
Originally Posted by Phil P View Post
It certainly is eroding the ability for anything digital to be verifiable evidence. ...

Criminals will use AI for nefarious purposes and are already doing so. So in response, law enforcement will work with it, too. The problem with something like this being 'out of the bag' is that if your opposition is using it, you'll feel forced to do it, too. It's like with biological warfare. The U.S. government has been funding all sorts of nightmare-inducing stuff in that area for decades, which carries some amount of risk naturally, because the assumption was that the enemy is going to do the same and you don't want to be caught lacking.

I've already heard CEOs justify aggressive use of AI in exactly those terms, i.e., "Well, if we hold back for ethical reasons, then other companies using it will have a competitive edge on us, so we gotta press ahead regardless of any qualms."
 
Old 04-28-2024, 02:48 PM
 
Location: Northeastern US
20,104 posts, read 13,560,465 times
Reputation: 9995
I'm a software developer and wasn't born yesterday, but my experience with AI hasn't been that impressive. I asked Copilot within the Edge browser to create an image of the Cookie Monster from Sesame Street, but eating floppy disks instead of cookies. It made an image of the Cookie Monster eating a bunch of cookies and a couple of floppy disks. I repeated the prompt with INSTEAD OF capitalized. Slightly better result (fewer cookies, more floppies). Added a sentence: "There should be NO cookies in the image at all." It still couldn't comply.

Meanwhile, I pay about $100 a year for AI assistance with my software development, and I would say it is just barely worth what I pay for it. Most of the time it is clearly guessing. It has zero insight into the structure of the database I'm working with, etc. Once in a while it gives me some almost-working code that uses an approach I hadn't thought of, which I end up adapting or rewriting. That's about it.

So the fact that some guy contrived a convincing deepfake, when we have no idea how many iterations and how much tuning he went through to make it that good, or how many "tells" he edited out of the video, doesn't much concern me.
 
Old 04-28-2024, 04:53 PM
 
Location: moved
13,680 posts, read 9,765,062 times
Reputation: 23548
Quote:
Originally Posted by mordant View Post
I'm a software developer and wasn't born yesterday but my experience with AI hasn't been that impressive. ...
Not a computer-person myself, but my own experience is similar. All sorts of desktop applications now have an "AI assistant" that is supposed to do things like summarize a PDF. To me it's only an annoyance: a button that takes up screen space and obscures a radio button I normally click to save the document or to scroll down. Meanwhile, AI-generated text reads like a sophomoric (literally) regurgitation of an encyclopedia. The shallowness of its pablum is immediately obvious. I struggle to see how college professors could ever get spoofed into giving high marks to a paper "written" by students who invoke AI. I am reminded of those horses that "paint" by holding a paintbrush in their teeth and nodding their heads up and down next to a strategically placed canvas. Though whimsical and therefore of some archival value as a curiosity, it can't be mistaken for "art".

So the real fear isn't that AI will be "transformative", but that it will result in crappy service, reduced quality, processes and products rife with errors, and general "encrapification" (canonical term is harsher, but would get censored) of even the most generic human efforts.
 
Old 04-29-2024, 04:07 PM
 
Location: Tricity, PL
61,924 posts, read 87,533,958 times
Reputation: 131963
Well, it actually happened:

BBC wildlife presenter caught up in AI scam.
A well-known BBC wildlife presenter has been caught up in a scam in which a fake AI-generated program mimicked her voice and gave permission for her face to be used in an advert.
Liz Bonnin and her management team noticed last week that the presenter of Our Changing Planet and Arctic From Above was fronting a poster for insect repellent spray, which neither she nor her management had signed off on.

It is an extremely worrying trend for everyone in the creative industries. Reputations are built on trust and we follow strict rules about endorsements to preserve our clients’ professional credibility...

https://deadline.com/2024/04/bbc-liz...nt-1235898282/
 
Old 04-29-2024, 09:02 PM
 
3,659 posts, read 1,617,678 times
Reputation: 5095
AI-generated people aren't perfected yet. I would think it possible to use AI itself to examine images and video for AI-created content. Just as movies are required to display a content rating, ads could be required to run through an AI-detection program and display an AI notice/logo of some kind.
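For what it's worth, automated checks along these lines already exist in classical image forensics, even without AI. One simple, long-standing trick is error level analysis (ELA): re-save a JPEG at a known quality and diff it against the original; regions pasted in from another source often recompress differently and stand out. Here's a minimal sketch in Python with Pillow; it's a toy illustration, not a production detector, and the `ela_score` helper and the quality setting are just my own choices:

```python
# Error level analysis (ELA): a classic forensic heuristic for spotting
# edited regions in JPEG images. Re-save the image at a known quality and
# diff it against the original; tampered regions often recompress
# differently from the rest of the picture.
import io

from PIL import Image, ImageChops


def ela_score(img: Image.Image, quality: int = 90) -> float:
    """Return the mean per-pixel recompression error on a 0-255 scale."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    diff = ImageChops.difference(img.convert("RGB"), resaved)
    # Average the error over all pixels and channels.
    pixels = list(diff.getdata())
    return sum(sum(p) for p in pixels) / (3 * len(pixels))


# Toy demo: a flat synthetic image recompresses almost losslessly,
# so its ELA score sits near zero; a spliced photo typically would not.
flat = Image.new("RGB", (64, 64), (128, 128, 128))
print(ela_score(flat))
```

A real detector would look at the spatial *pattern* of the error, not just its average, and modern AI-generated images need learned detectors rather than heuristics like this, but the "run it through a checker before it airs" idea is entirely practical.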
 
Old 04-30-2024, 07:18 AM
 
Location: Fortaleza, Northeast of Brazil
4,002 posts, read 6,831,369 times
Reputation: 2506
Quote:
Originally Posted by Phil P View Post
It certainly is eroding the ability for anything digital to be verifiable evidence.
For now, experts can still audit video evidence presented in court to certify whether it's legit or fake. But if the technology keeps advancing, it may become impossible in the near future to audit videos at all, and video will no longer be reliable evidence in court.
 
Old 04-30-2024, 11:47 AM
 
Location: Taos NM
5,368 posts, read 5,168,366 times
Reputation: 6816
Well, this thread isn't encouraging so far... basically, the malicious users have the time, effort, and tools they need to use AI effectively to create fake or damaging content, while the good users of AI haven't found enough satisfaction, with their limited time and effort, to generate useful content or information!

In my own experience, AI is useful for IT desktop support or simple factual questions (like the caffeine content of tea), but I haven't found a use case for it in content generation or deeper questions. For content generation, I'm not rehashing information that's already out there; I'm creating new things, like City-Data posts or, for my job, technical requirements (user stories) for what our software should be doing. AI doesn't do that. For deep questions, the answers often don't exist on the internet (like whether fir trees are replacing spruce in the Rockies), or the summary isn't what I want because I already knew that much; I want the details.

I've got a co-worker who uses it for emails sometimes, as English isn't his first language, but you can tell, because his emails don't sound like he sounds in person!

I think the crux of the issue is that a lot of the productive use cases aren't immediate, whereas the malicious ones are. A lot of productive use means generating new information, not rehashing existing material. But the malicious cases follow directly from rehashing existing material: creating fake content, or jailbreaking information that was previously hidden behind security on the web. So now we need a whole swath of people trying to keep up with the bad actors and build new safeguards.

In general, the evangelists are about 2x more optimistic on timelines and budgets than what AI actually delivers, which is why farmers still aren't using AI-powered crop sprayers. But the stock market and the hype act like they already are.

© 2005-2024, Advameg, Inc. · Please obey Forum Rules · Terms of Use and Privacy Policy · Bug Bounty
