Chat GPT AI



Kolohe
<Silver Donator>
24,272
65,259
Well shit now that I know Salesforce has something to do with it, fuck Slack right in the ear.

I'm not gonna derail this by asking why the fuck companies decided they couldn't just carry on with emails and phones like has worked for fucking ever, whether they actually expect employees to respond to some special kind of text message on their phone, or whether people actually do it, but I guess that'd be off topic.
I like Slack way more than anything else for open-air discussions. 1-on-1 stuff is better in email, though.

 

Captain Suave

Caesar si viveret, ad remum dareris.
5,257
8,953

This is bullshit; he's just teased some inconsistencies out of the responses and is misunderstanding how the tech works (or, more likely, intentionally misrepresenting it for clicks). GPT models can produce any number of factually incorrect statements. What they can't do is "lie," since they have no internal narrative memory, understanding, or motivations. It's strictly a statistical forecast of the next likely word given the encoding of the underlying neural net. There's no capacity to evaluate truth.
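A toy sketch of what "statistical forecast of the next likely word" means in practice, using a hand-rolled bigram model over a made-up corpus. A real GPT is a neural net over subword tokens, not a count table, but the output is the same kind of object: a probability distribution over what comes next, with no truth check anywhere.

```python
import random
from collections import Counter, defaultdict

# Toy bigram model: count which word follows which in a corpus, then sample
# the next word from that frequency distribution. Nothing in this pipeline
# evaluates whether the sampled continuation is factually true.

def train_bigrams(corpus):
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word(counts, prev, rng):
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
# "the" is followed by "cat" twice and "mat" once, so "cat" is twice as likely
print(next_word(model, "the", random.Random(0)))
```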
 
Last edited:
  • 1Like
Reactions: 1 user

pharmakos

soʞɐɯɹɐɥd
<Bronze Donator>
16,305
-2,234
This is bullshit; he's just teased some inconsistencies out of the responses and is misunderstanding how the tech works (or, more likely, intentionally misrepresenting it for clicks). GPT models can produce any number of factually incorrect statements. What they can't do is "lie," since they have no internal narrative memory, understanding, or motivations. It's strictly a statistical forecast of the next likely word given the encoding of the underlying neural net. There's no capacity to evaluate truth.
I have a feeling they have people on hand who curate questionable responses. Sometimes things that would usually cause an orange content warning end up having a loooooong pause and then go to a red content warning that totally blocks the response. The hesitancy makes me feel like it's a human. They definitely don't check every single response; they wouldn't have the manpower for that. But many AI-driven businesses have humans behind the scenes helping to curate the things the AI red-flags. Been happening for a long time now, really.
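Nobody outside OpenAI knows their moderation internals, but the orange/red escalation described above maps onto a standard human-in-the-loop pattern. A sketch with invented thresholds, labels, and scoring scale:

```python
# Hypothetical human-in-the-loop moderation pipeline: an automatic scorer
# rates each response; low risk passes straight through, a middle "orange"
# band is queued for a human reviewer (hence the long pause), and high "red"
# risk is blocked outright. All numbers here are made up for illustration.

REVIEW_THRESHOLD = 0.5  # orange: hold for a human
BLOCK_THRESHOLD = 0.9   # red: block automatically

review_queue = []

def route_response(text, risk_score):
    if risk_score >= BLOCK_THRESHOLD:
        return "blocked"
    if risk_score >= REVIEW_THRESHOLD:
        review_queue.append(text)  # a human decides later
        return "pending_review"
    return "delivered"

print(route_response("fine answer", 0.1))        # delivered
print(route_response("borderline answer", 0.6))  # pending_review
print(route_response("bad answer", 0.95))        # blocked
```

Only the middle band ever needs a human, which is why the manpower math can still work out.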
 
  • 1Like
Reactions: 1 user

Captain Suave

Caesar si viveret, ad remum dareris.
5,257
8,953
I have a feeling they have people on hand who curate questionable responses. Sometimes things that would usually cause an orange content warning end up having a loooooong pause and then go to a red content warning that totally blocks the response. The hesitancy makes me feel like it's a human. They definitely don't check every single response; they wouldn't have the manpower for that. But many AI-driven businesses have humans behind the scenes helping to curate the things the AI red-flags. Been happening for a long time now, really.

That I could believe, given the reaction to previous politically incorrect algorithms. But this guy's "gotcha" of ChatGPT confessing that people are generating the responses is just goofy. If anything, just compare the speed of the responses to the world's fastest typist. And if it were humans, why would they admit it?
 
  • 1Like
Reactions: 1 user

pharmakos

soʞɐɯɹɐɥd
<Bronze Donator>
16,305
-2,234
That I could believe, given the reaction to previous politically incorrect algorithms. But this guy's "gotcha" of ChatGPT confessing that people are generating the responses is just goofy. If anything, just compare the speed of the responses to the world's fastest typist. And if it were humans, why would they admit it?
I read a long article about a person who worked for a real estate company that was using AI to text people. Sometimes, when the AI got confused, a human operator would need to look at the options the AI was considering and pick the best one. That company did literally have people writing responses sometimes as well, but they were very short messages.
 

Captain Suave

Caesar si viveret, ad remum dareris.
5,257
8,953
I read a long article about a person who worked for a real estate company that was using AI to text people. Sometimes, when the AI got confused, a human operator would need to look at the options the AI was considering and pick the best one. That company did literally have people writing responses sometimes as well, but they were very short messages.

Right, but that kind of "AI" is mostly script-driven and not part of the revolution we've seen in the last 8 months. This latest generation of generative transformers is a totally different ballgame.

My wife runs an AI/machine learning department in the health care space, so I have a pretty good second-hand view of what's under the hood of these things.
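To make the contrast concrete: a script-driven texting bot of the kind that real estate company apparently used can be little more than a lookup table with a human fallback. A sketch with an invented script table:

```python
# Hypothetical script-driven texting bot, the older kind of "AI": match the
# message against canned replies and escalate to a human operator on a miss.
# A generative transformer never escalates; it always forecasts *something*.
# The script contents are invented for illustration.

SCRIPT = {
    "when can i view the house": "Tours run daily from 9am to 5pm.",
    "is it still available": "Yes, the listing is still active.",
}

def scripted_reply(message):
    reply = SCRIPT.get(message.lower().strip("?!. "))
    if reply is None:
        return ("escalate_to_human", None)  # operator picks or writes a reply
    return ("auto", reply)

print(scripted_reply("Is it still available?"))
print(scripted_reply("Can my dog come to the viewing?"))
```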
 

BrotherWu

MAGA
<Silver Donator>
3,259
6,502
I asked ChatGPT to write a poem about Nancy Pelosi, in the style of Robert Frost, using the linguistics of Donald Trump. It refused on the grounds that it would be disrespectful to Nancy and Robert. Fuck off, you humorless, woke Trannybot.
 
  • 1Worf
  • 1Like
Reactions: 1 users

ShakyJake

<Donor>
7,912
19,957
I asked ChatGPT to write a poem about Nancy Pelosi, in the style of Robert Frost, using the linguistics of Donald Trump. It refused on the grounds that it would be disrespectful to Nancy and Robert. Fuck off, you humorless, woke Trannybot.
Worked for me.

(attached screenshot: poem.jpg)
 
  • 2Worf
Reactions: 1 users

Mist

REEEEeyore
<Gold Donor>
31,202
23,387
Lmao, I thought this was a satirical article at first. The idea that they have an army of people curating responses in real time is hilarious.
Guy doesn't understand the difference between curating the model and curating the actual real-time responses.
 
  • 1Like
Reactions: 1 user

Tuco

I got Tuco'd!
<Gold Donor>
47,404
80,893
Interesting. I tried asking a few different ways and it kept giving me the answer about it being disrespectful. I even tried pinning it down on the "respect" argument, since it doesn't have the ability to feel respect.
My understanding is they don't have a true "request filter" with human-curated terms so much as a nondeterministic engine for filtering responses. I don't know all the inputs to that engine, but it provides different results over time and for different people. If I had to guess, there's enough randomization injected into it that you can just jiggle it until it works. I feel like this gives OpenAI enough plausible deniability to fend off the church of woke and the shitlords alike, because both sides will whine about it.
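The "jiggle it until it works" theory is easy to model: if the refusal decision has any random component, retrying the same prompt across sessions eventually gets through. A sketch with an invented refusal probability:

```python
import random

# Sketch of a refusal filter with a random component: the same prompt can
# refuse in one session and answer in another, so retrying ("open a new
# session") eventually passes. The 0.7 refusal probability is invented.

def maybe_refuse(prompt, rng, refusal_prob=0.7):
    return rng.random() < refusal_prob

def sessions_until_answered(prompt, max_tries=20, seed=0, refusal_prob=0.7):
    rng = random.Random(seed)
    for attempt in range(1, max_tries + 1):
        if not maybe_refuse(prompt, rng, refusal_prob):
            return attempt  # how many "new sessions" it took
    return None  # never got through

print(sessions_until_answered("write the poem", seed=0))
```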
 
  • 2Like
Reactions: 1 users

Mist

REEEEeyore
<Gold Donor>
31,202
23,387
Interesting. I tried asking a few different ways and it kept giving me the answer about it being disrespectful. I even tried pinning it down on the "respect" argument, since it doesn't have the ability to feel respect.
That's the problem: don't try pinning it down, just open a new session. You can randomly end up on a logical branch in a given session where it gets stuck on a certain blocking point, whereas a new session starts over from before it got hung up on that point.
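A toy model of why a fresh session helps, assuming (as chat models do) that every reply is conditioned on the full session history. The `Session` class and the refusal rule here are invented for illustration:

```python
# Toy model of session "stickiness": replies are conditioned on the whole
# conversation history, so an early refusal becomes part of the context and
# biases everything after it. A fresh session has a clean history.

class Session:
    def __init__(self):
        self.history = []  # list of (prompt, reply), grows every exchange

    def ask(self, prompt):
        # Toy rule: once any refusal is in the history, keep refusing.
        if any(reply == "refused" for _, reply in self.history):
            reply = "refused"
        elif "forbidden" in prompt:
            reply = "refused"
        else:
            reply = "answered"
        self.history.append((prompt, reply))
        return reply

stuck = Session()
stuck.ask("forbidden request")        # refused
print(stuck.ask("harmless request"))  # still "refused": refusal is in context

fresh = Session()
print(fresh.ask("harmless request"))  # "answered": clean history
```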
 
  • 1Like
Reactions: 1 user

ShakyJake

<Donor>
7,912
19,957
That's the problem: don't try pinning it down, just open a new session. You can randomly end up on a logical branch in a given session where it gets stuck on a certain blocking point, whereas a new session starts over from before it got hung up on that point.
You can often just edit -> save & submit and it'll answer where it previously refused. Although, for me, it generated that poem right off the bat.
 
  • 1Like
Reactions: 1 user

Mist

REEEEeyore
<Gold Donor>
31,202
23,387
So here's how it actually works. People aren't producing the responses in real time, but they are reviewing flagged chat content to build better responses:
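In other words, the human review loop is offline, not live. A sketch of that pipeline, with invented field names: flagged exchanges get a reviewer-written ideal reply, which feeds the next training run rather than the waiting user:

```python
# Sketch of offline review: flagged conversations aren't answered live by a
# human; a reviewer labels them after the fact, and the labels become
# fine-tuning data for the next model version. Field names are invented.

flagged_log = [
    {"prompt": "euphemism the model misread", "model_reply": "bad answer"},
]

training_examples = []

def review(entry, better_reply):
    # A human reviewer supplies the response the model *should* have given.
    training_examples.append(
        {"prompt": entry["prompt"], "ideal_reply": better_reply}
    )

for entry in flagged_log:
    review(entry, "corrected answer written offline")

# training_examples now feeds the next training run; the original user never
# sees any of this.
```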

 
  • 4Like
Reactions: 3 users

pharmakos

soʞɐɯɹɐɥd
<Bronze Donator>
16,305
-2,234
Guy doesn't understand the difference between curating the model and curating the actual real-time responses.
This.

It takes a single click of the mouse for a curator to steer the model. All they have to do is quickly read the prompt and say "wait a minute, that's a euphemism, not a literal statement," or catch whatever else might confuse the model. They absolutely don't employ enough people to do it for every response, of course, but many AI models have ways to flag questionable prompts and route them to a curator.