ChatGPT AI


Tuco

I got Tuco'd!
<Gold Donor>
47,358
80,736

""There was no heart and no soul," Heiderose Schmidt, 54, told the AP of the service. "The avatars showed no emotions at all, had no body language and were talking so fast and monotonously that it was very hard for me to concentrate on what they said.""

Sounds about right.
 

Edaw

Parody
<Gold Donor>
13,272
87,994


Create your very own chatbot for 'your' website.

[Charlie Hunnam reaction GIF]
 

Tmac

Adventurer
<Aristocrat╭ರ_•́>
9,969
16,984
You don't understand the architecture of these models.

The layers that screen out things like racism are not the core model. The core model itself is just a statistical model of word order. It is built only once, during the training run, and then it sits there as one giant, terabytes-long blob of numbers until a new model is built in the next training run with a new, larger base training set.

The alignment layers are much higher up the stack than the core model, applying human-feedback training. These alignment layers are constantly being adjusted through use, but the core model remains fixed. You could throw out that layer and swap in a different one; the output would change, but the core model would not.
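
A minimal sketch of that separation, assuming a PyTorch-style setup (every name and size here is hypothetical, just to illustrate frozen core weights with a trainable layer on top; this is not anyone's actual production code):

import torch
import torch.nn as nn

# Stand-in for the terabyte-scale core model: built once, then frozen.
base_model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
for p in base_model.parameters():
    p.requires_grad = False  # the core model does not change between training runs

# The swappable alignment layer, trained via human feedback.
alignment_head = nn.Linear(64, 64)

# Only the alignment layer's parameters are ever handed to the optimizer.
optimizer = torch.optim.Adam(alignment_head.parameters(), lr=1e-4)

def forward(x):
    with torch.no_grad():        # base weights are untouched here
        h = base_model(x)
    return alignment_head(h)     # replace this layer and the output changes,
                                 # but the core model stays the same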
He’s just saying that they’re wasting resources on those additional models instead of fully focusing on moving the tech forward.
 

Mist

REEEEeyore
<Gold Donor>
31,197
23,355
He’s just saying that they’re wasting resources on those additional models instead of fully focusing on moving the tech forward.
But that's not how it works either. Those layers are trained using reinforcement learning from human feedback (RLHF) supplied by unskilled humans. They do not require additional programming once the human-feedback portion is built. The fact that more human feedback makes that layer screen out more racism is just a byproduct of the feedback; it's not some intentional hand-coded tuning.

The human feedback portion is also the only reason the product works at all. Strip that out and you've just got a mathematically insane autocomplete engine.

The only part that's intentionally hand-coded to screen out things like racism is the pre-training dataset, built before the training run was ever started. And that just means they don't scrape the worst parts of the internet and train it into the word-order model.

The vast majority of people who are commenting on this stuff have no real idea how the architecture actually works, or what the product is even doing. The reason you don't get many racist responses is mainly because unskilled human users downvoted those responses early in the RLHF process, not because someone devoted programmer hours specifically to do so.
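
For what it's worth, the human-feedback part boils down to something like the sketch below: a reward model learns from nothing more than "the labeler preferred answer A over answer B" comparisons. PyTorch again, all shapes hypothetical; this is the standard Bradley-Terry preference loss, not OpenAI's actual code:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-in reward model that scores a response embedding.
reward_model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

def preference_step(chosen_emb, rejected_emb):
    """One update from a single human comparison: 'chosen' beat 'rejected'."""
    r_chosen = reward_model(chosen_emb)
    r_rejected = reward_model(rejected_emb)
    # Push the preferred response's score above the rejected one's.
    loss = -F.logsigmoid(r_chosen - r_rejected).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

Nothing in there is a hand-written rule about racism; whatever the raters consistently downvote is what gets screened.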
 

Control

Ahn'Qiraj Raider
2,983
7,879
it's not some intentional hand-coded tuning.
How they say it works isn't necessarily how it really works. If this were true, they wouldn't be able to make overnight changes when the internet figures out how to make it tell the truth.
 

Mist

REEEEeyore
<Gold Donor>
31,197
23,355
How they say it works isn't necessarily how it really works. If this were true, they wouldn't be able to make overnight changes when the internet figures out how to make it tell the truth.
Those are not changes to the model. The model cannot change without another training run, which takes months and costs a billion jigawatts.

Those overnight changes are to the input or presentation layer, aka the first step and the last step of processing a prompt. Not core changes, literally basic web development stuff.
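
In other words, the overnight fixes look something like this hypothetical wrapper (plain Python; the denylist and function names are made up): an editable filter on the way in and on the way out, with the frozen model untouched in the middle.

BLOCKED_TERMS = {"example_exploit_phrase"}  # hypothetical denylist, editable overnight

def handle_prompt(prompt, model_generate):
    """Input/presentation-layer filtering around an unchanged model."""
    # First step: screen the incoming prompt before it reaches the model.
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        return "I can't help with that."
    completion = model_generate(prompt)  # the frozen core model, untouched
    # Last step: screen the outgoing text before the user sees it.
    if any(term in completion.lower() for term in BLOCKED_TERMS):
        return "I can't help with that."
    return completion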
 

Control

Ahn'Qiraj Raider
2,983
7,879
Those are not changes to the model. The model cannot change without another training run, which takes months and costs a billion jigawatts.

Those overnight changes are to the input or presentation layer, aka the first step and the last step of processing a prompt. Not core changes, literally basic web development stuff.
Ok sure, but if that isn't "intentional hand-coded tuning", I'm not sure what is. And anyway, I think the point was that it consumes resources and attention on literally making the product worse.
 

Captain Suave

Caesar si viveret, ad remum dareris.
5,253
8,953
Ok sure, but if that isn't "intentional hand-coded tuning"

I think there's some blurring of boundaries going on between "ChatGPT the neural net" and "ChatGPT the user experience". Mist is correct that the neural net is largely immutable until the next major retrain, but OpenAI is obviously doing some level of ongoing filtering on input/output, even if that doesn't rise to the level of "changing the model" by a strict definition.
 

Mist

REEEEeyore
<Gold Donor>
31,197
23,355
I think there's some blurring of boundaries going on between "ChatGPT the neural net" and "ChatGPT the user experience". Mist is correct that the neural net is largely immutable until the next major retrain, but OpenAI is obviously doing some level of ongoing filtering on input/output, even if that doesn't rise to the level of "changing the model" by a strict definition.
My point is that it isn't bigbrain AI developer hours being spent on this stuff; it's dime-a-dozen webapp developers, unpaid (or even paying) end-users, and barely paid RLHF trainers who are contributing to that portion of the product.

No one is reinventing matrix multiplication transformer models to screen out racism. It's just basic UX stuff + ongoing RLHF.
 

Captain Suave

Caesar si viveret, ad remum dareris.
5,253
8,953
My point is that it isn't bigbrain AI developer hours being spent on this stuff; it's dime-a-dozen webapp developers, unpaid (or even paying) end-users, and barely paid RLHF trainers who are contributing to that portion of the product.

No one is reinventing matrix multiplication transformer models to screen out racism. It's just basic UX stuff + ongoing RLHF.

Yes, I understand. I don't think the other side of the debate cares so much about the pay grade as the fact that someone deliberately did anything to drive these changes.
 

AladainAF

Best Rabbit
<Gold Donor>
12,914
31,017
TIL ChatGPT can't add. Yet it'll tell you that it can. Or since it was coded by leftists, maybe this is correct in leftist world? Maybe it identifies as being correct? Who knows.

[screenshot: ChatGPT's incorrect addition]


Note that I lied when I said the sum was actually 65 million, and it agreed that was the correct answer even though the real sum is 85,107,079.

[screenshots: ChatGPT accepting 65 million as the correct sum]
 

Captain Suave

Caesar si viveret, ad remum dareris.
5,253
8,953
TIL ChatGPT can't add. Yet it'll tell you that it can. Or since it was coded by leftists, maybe this is correct in leftist world? Maybe it identifies as being correct? Who knows.

Note that I lied when I said the sum was actually 65 million, and it agreed that was the correct answer even though the real sum is 85,107,079.

Skim back in this thread a bit. These models are best thought of as really fancy autocomplete algorithms; they are not reasoning or calculation engines. At least in the current generation of ChatGPT, there is no executive oversight that goes back and makes sure factual claims are rigorously correct. The narrower the band of allowable responses, the worse the answers will be.
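
A toy illustration of the point, using a word-order model so small you can read it (plain Python, corpus obviously made up): the "autocomplete" picks the most frequent continuation it has seen, and nothing in it can actually add.

from collections import Counter, defaultdict

corpus = "the sum is 65 million . the sum is 65 million . the sum is big".split()

# Pure word-order statistics: count which word follows which.
bigrams = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1

def autocomplete(word):
    # Return the most frequent next word seen in training.
    return bigrams[word].most_common(1)[0][0]

print(autocomplete("is"))  # -> '65': the most common continuation,
                           # not a computed answer; there is no adder in here

Scale that up by a few billion parameters and you get much better autocomplete, but the same basic failure mode on arithmetic.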

I'm using it right now to write promotional copy for my business and it's working great.
 