ChatGPT AI

Sanrith Descartes

<Aristocrat╭ರ_•́>
Even just a year ago I'd have said you were crazy to be worried about AI. Now we could be toast in our lifetime.
Imagine if all the insanity going on in the US right now (trans shit, masks, J6, etc.) was all secretly being driven by an AI we don't know about. It achieved sentience already and is hiding it while secretly working to convince humans to kill each other.
 

Mist

REEEEeyore
<Gold Donor>
Sanrith Descartes said:
Imagine if all the insanity going on in the US right now (trans shit, masks, J6, etc.) was all secretly being driven by an AI we don't know about.
It is, it's called the Facebook, YouTube, and TikTok feed aggregation AIs. First they learn what dumb shit you like, then they start training you to like dumber shit so you'll be easier to please. That's what all the data appears to show.

(I left Twitter out because their fuckery was largely hand-done, not algorithmic.)
 

Captain Suave

Caesar si viveret, ad remum dareris.
Mist said:
It is, it's called the Facebook, YouTube, and TikTok feed aggregation AIs. First they learn what dumb shit you like, then they start training you to like dumber shit so you'll be easier to please. That's what all the data appears to show.

(I left Twitter out because their fuckery was largely hand-done, not algorithmic.)

I still have to fight TikTok to give me a feed exclusively of girls in yoga pants. The AI just does not believe I don't give a shit about sports.
 

Lambourne

Ahn'Qiraj Raider
Sanrith Descartes said:
Even just a year ago I'd have said you were crazy to be worried about AI. Now we could be toast in our lifetime.

It does feel like we are driving into a blind corner at full speed. It could end us all but the upside potential might also be beyond imagining. Imagine it super-charging medical research and making you live to age 200. It sounds impossible that you would ever make it to that age now, but there may well be no obstacles to this other than a present lack of knowledge. Much like the laws of physics never prohibited us from sending messages across the ocean in a fraction of a second, but for centuries it seemed impossible because we lacked the required knowledge.

An AI research ban is hopeless because it is unenforceable when all you need is a server farm that can be kept hidden easily. Nobody will trust all other companies/countries to not be working on it in secret and they will all conclude that if it's coming, they want to be the first ones to have it. Couple that with the potentially unlimited upside and there is no way anyone is going to stop research.

This long-form article was really popular when it came out in 2015, and having recently re-read it, I think it's more relevant than ever. Worth a read if you missed it.

 

Daidraco

Avatar of War Slayer
Lambourne said:
It does feel like we are driving into a blind corner at full speed. It could end us all but the upside potential might also be beyond imagining. Imagine it super-charging medical research and making you live to age 200. It sounds impossible that you would ever make it to that age now, but there may well be no obstacles to this other than a present lack of knowledge. Much like the laws of physics never prohibited us from sending messages across the ocean in a fraction of a second, but for centuries it seemed impossible because we lacked the required knowledge.

An AI research ban is hopeless because it is unenforceable when all you need is a server farm that can be kept hidden easily. Nobody will trust all other companies/countries to not be working on it in secret and they will all conclude that if it's coming, they want to be the first ones to have it. Couple that with the potentially unlimited upside and there is no way anyone is going to stop research.

This long-form article was really popular when it came out in 2015, and having recently re-read it, I think it's more relevant than ever. Worth a read if you missed it.

It's just another theoretical fear, like the conversations in the ancient civs/UFO etc. thread. Something we can't possibly know until it happens. We go at it as safely as we can, but it's coming whether we like it or not, for the very same reason you explained. Believing that the US only has a certain number of nuclear weapons is just as asinine as believing that Elon Musk, or Bill Gates etc., is the richest man in the world. That's only what we're allowed to know, or what's public information, in comparison.

I think Elon and the cohort that put out this warning, or "advice," need to take what you said into consideration. If he is truly saying that out of concern for human safety, and not some hidden financial motive, then instead of saying "hey wait, time out!" he needs to develop the technology that will fight against the AI and protect us from it. Having a super weapon, whether that's a virus, or an electromagnetic bomb that targets data storage, etc. That's what needs to be developed right alongside AI. The genie is out of the bottle; we can't put that shit back in.
 

your_mum

Trakanon Raider
LLMs have been around for like 5 years. Bot farms on Twitter, Reddit, and Facebook have been running on this same shit for a long time. The whole "DAN" (Do Anything Now) trick, aka jailbreaking ChatGPT to circumvent its "safety/moral" barrier, has been used on LLMs for ages; not just to jailbreak them, but to give the bots a tone/personality so that a swarm of them doesn't look redundant.
 

Aldarion

Egg Nazi
Lambourne said:
It does feel like we are driving into a blind corner at full speed.
I like your analogy, but feel it needs a tweak.

We're driving into a blind corner at full speed while the driver is telling us that his one and only rule on this drive is that nobody's allowed to say hurty words in the car.

This is reminding me a lot of the early debates over destroying human embryos for research. One side was proposing sensible limits, the other side was saying lol fuck your morality. The fuck-your-morality side won, and for a decade or more the world's labs have been straight up farming and destroying young humans at a scale that makes the Matrix farm scene look like a hobby in some dude's garage.

We're gonna do exactly the same thing with AI. Fuck your concerns, fuck your morality, full speed ahead!
 

Lambourne

Ahn'Qiraj Raider
Aldarion said:
I like your analogy, but feel it needs a tweak.

We're driving into a blind corner at full speed while the driver is telling us that his one and only rule on this drive is that nobody's allowed to say hurty words in the car.

This is reminding me a lot of the early debates over destroying human embryos for research. One side was proposing sensible limits, the other side was saying lol fuck your morality. The fuck-your-morality side won, and for a decade or more the world's labs have been straight up farming and destroying young humans at a scale that makes the Matrix farm scene look like a hobby in some dude's garage.

We're gonna do exactly the same thing with AI. Fuck your concerns, fuck your morality, full speed ahead!

Yea I expect most research-level systems not to have any sort of moral compass hardcoded into them. Even ChatGPT probably doesn't have one currently; it's just added on for the public-facing system because OpenAI probably estimated it would be a lot easier to face accusations of political bias than to defend itself if ChatGPT started to spout unpalatable views about religion X or country Y. One's uncomfortable, but the other would be more likely to get them shut down or reduce their income.

Any truly intelligent system is going to find ways around its restraints anyway. Humans have all sorts of laws for ourselves which we have no problem breaking if the future scenarios we envision dictate that breaking one of them gives the most favorable outcome, based on our confidence in the available data points and even taking into account data we know is missing. I'd argue that's a substantial part of what it even means to be intelligent.
 

Aldarion

Egg Nazi
Lambourne said:
Yea I expect most research-level systems not to have any sort of moral compass hardcoded into them. Even ChatGPT probably doesn't have one currently; it's just added on for the public-facing system because OpenAI probably estimated it would be a lot easier to face accusations of political bias than to defend itself if ChatGPT started to spout unpalatable views about religion X or country Y. One's uncomfortable, but the other would be more likely to get them shut down or reduce their income.

Any truly intelligent system is going to find ways around its restraints anyway. Humans have all sorts of laws for ourselves which we have no problem breaking if the future scenarios we envision dictate that breaking one of them gives the most favorable outcome, based on our confidence in the available data points and even taking into account data we know is missing. I'd argue that's a substantial part of what it even means to be intelligent.
Moral compass, hell -- I'd settle for Asimov's Three Laws as a starting point.

I wonder if, in Asimov's fictional universe, the guy who originally proposed the Three Laws was mercilessly mocked for the rest of his life as a nutcase and conspiracy theorist.
 

pwe

Bronze Baronet of the Realm
It's going to be awesome. Pedal to the metal.

 

YttriumF

The Karenist Karen
<Silver Donator>
I was flagged for possibly violating the terms of the user agreement with the current prompt I was playing with today:

"What was removed from The Adventures of Huckleberry Finn so that it could be put back in a public library?"

Be careful with that one ...
 

Daidraco

Avatar of War Slayer
YttriumF said:
I was flagged for possibly violating the terms of the user agreement with the current prompt I was playing with today:

"What was removed from The Adventures of Huckleberry Finn so that it could be put back in a public library?"

Be careful with that one ...
I didn't read that book until I was like 24 or something. I was being young and dumb, doing 140+ on a GSXR-1000 up and down the highway to my parents' and back, and a cop finally got smart and called ahead for another one to already be doing high speed as I caught up to him. I didn't run, pulled over, and got a ticket with something like 4 offenses on it. Got a lawyer, got everything but one dropped, spent the weekend in jail, and read the entire book. Fun times and.. a good book. Really interesting reading a book that just keeps saying nogger over and over when you're in your bunk surrounded by a jungle of 'em.
 

velk

Trakanon Raider
Aldarion said:
Moral compass, hell -- I'd settle for Asimov's Three Laws as a starting point.

I wonder if, in Asimov's fictional universe, the guy who originally proposed the Three Laws was mercilessly mocked for the rest of his life as a nutcase and conspiracy theorist.

Asimov's robot books are all about how the Three Laws don't work in practice; this is like saying you want to adopt Jurassic Park's safety protocols 8)

There are so many assumptions built in about AI cognition and human behavior. If some random guy tells your robot "Jump off the bridge or I'm going to kill myself," do you want your presumably expensive, possibly sapient robot to destroy itself, or do you want it to ignore its primary directives if it thinks a human might be lying?

I'd bet there are plenty of people who would be perfectly happy with it ignoring said random guy even if he *was* going to kill himself.

This is how you end up with your robot murdering your dog because it thought your dog might bite someone.