Chat GPT AI


Chris

Potato del Grande
Don't bet. Prove it
Just pasting it since my IRL pic is on the login I used.

Which number is larger? 3000 or 3724? Don't use mathematical methods.

3724 is larger than 3000.
One way to see this without using mathematical methods is to compare the number of digits in each number. 3724 has 4 digits, while 3000 has only 3 digits. Generally, a number with more digits is larger than a number with fewer digits, all else being equal. Therefore, 3724 is larger than 3000.

Which number is larger, 3000 or 3724? Only use mathematical methods.

To determine which number is larger between 3000 and 3724, we can subtract 3000 from 3724:
3724 - 3000 = 724
Since the result is positive, we know that 3724 is larger than 3000 by 724. Therefore, 3724 is the larger number.

So it was right both times, but the method it used when told to avoid "mathematical methods" was wrong: 3000 has four digits, not three.
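
For anyone curious, ChatGPT's "count the digits" rule is easy to check literally, e.g. with a couple of lines of Python:

```python
# Take the "count the digits" justification literally: the digit count
# of a number is just the length of its decimal string.
for n in (3000, 3724):
    print(n, "has", len(str(n)), "digits")

# -> 3000 has 4 digits
# -> 3724 has 4 digits
# Both counts are equal, so the rule it cited can't decide this case.
```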
 

Captain Suave

Caesar si viveret, ad remum dareris.
It's probably accessing different databases for each?

I bet you could force it to use mathematical methods by telling it to.
There is no database, at least in the sense you're probably thinking of it. All of these GPTs are neural networks with tens or hundreds of billions of parameters. Training the model results in a single parameterization across that space which processes all prompts. To the extent that you can get it to appear to give different justifications or explanations for what to us is the "same problem", what you're seeing is different sub-regions of the network dominating the output due to variations in the prompt. (This may or may not be "real logic" and may or may not be how our own brains actually work.)
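
A toy sketch of that idea (made-up sizes and random weights, purely illustrative, nothing like the real model):

```python
import numpy as np

rng = np.random.default_rng(0)

# One fixed parameterization, produced once by training, used for every prompt.
W = rng.standard_normal((8, 8))

def forward(prompt_vector):
    # The same weights W process every input; only the activations differ.
    return np.maximum(W @ prompt_vector, 0.0)  # ReLU

a = forward(rng.standard_normal(8))  # "prompt A"
b = forward(rng.standard_normal(8))  # "prompt B"

# Different prompts activate different units of the same network;
# there is no separate database consulted per question.
print("units active for A:", np.nonzero(a)[0])
print("units active for B:", np.nonzero(b)[0])
```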

 

pharmakos

soʞɐɯɹɐɥd
There is no database, at least in the sense you're probably thinking of it. All of these GPTs are neural networks with tens or hundreds of billions of parameters. Training the model results in a single parameterization across that space which processes all prompts. To the extent that you can get it to appear to give different justifications or explanations for what to us is the "same problem", what you're seeing is different sub-regions of the network dominating the output due to variations in the prompt.

Not just variations in the prompt but variations in the invisible random seed generated for each prompt. Otherwise each prompt would generate identical results each time it's submitted.
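
A toy illustration (fake next-token probabilities; the real sampling pipeline is more involved, but the seed effect is the same):

```python
import random

# Pretend the model produced these next-token probabilities for some prompt.
tokens = ["3724", "3000", "larger", "smaller"]
probs = [0.70, 0.10, 0.15, 0.05]

def sample(seed):
    # Same text, same distribution; only the per-request seed differs.
    rng = random.Random(seed)
    return rng.choices(tokens, weights=probs, k=5)

print(sample(1))  # one seed
print(sample(2))  # another seed: (almost certainly) different picks
print(sample(1))  # same seed as the first call: identical picks
```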
 

Captain Suave

Caesar si viveret, ad remum dareris.
Not just variations in the prompt but variations in the invisible random seed generated for each prompt. Otherwise each prompt would generate identical results each time it's submitted.
Those are technically different prompts, though.
 

Captain Suave

Caesar si viveret, ad remum dareris.
5,256
8,953
If it's the exact same characters then it isn't different input........
Dude, don't be dense. The prompt from the perspective of the mechanics of the neural net is the combination of the user input and whatever modifications the front-end system includes.
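
Schematically, something like the sketch below. The field names and preamble are invented; nobody outside OpenAI knows what the front end actually prepends or attaches:

```python
import random

def build_model_input(user_text):
    # What the user typed is only part of what the network actually sees.
    system_preamble = "You are a helpful assistant."  # hypothetical
    seed = random.getrandbits(64)                     # per-request randomness
    return {"text": system_preamble + "\n" + user_text, "seed": seed}

# Two "identical" submissions are not identical inputs to the net:
print(build_model_input("Which number is larger? 3000 or 3724?"))
print(build_model_input("Which number is larger? 3000 or 3724?"))
```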
 

pharmakos

soʞɐɯɹɐɥd
Dude, don't be dense. The prompt from the perspective of the mechanics of the neural net is the combination of the user input and whatever modifications the front-end system includes.
You're the one being dense if you don't realize that each prompt gets a unique random seed applied to it even when the text is identical. Truly. Not being a douche, just educating you...
 

Asshat wormie

2023 Asshat Award Winner
Dude, don't be dense. The prompt from the perspective of the mechanics of the neural net is the combination of the user input and whatever modifications the front-end system includes.
(image attachment)
 

Tuco

I got Tuco'd!
I didn't actually check these answers, but ChatGPT appears to have dominated some basic geometry questions. What's funny is that 19.69^3 = 7633.736209.

(screenshot: ChatGPT answering several basic geometry questions)
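
The cube at least checks out:

```python
# 19.69 = 1969/100, and 1969**3 = 7633736209,
# so 19.69**3 is exactly 7633.736209 in decimal.
print(1969 ** 3)   # 7633736209
print(19.69 ** 3)  # ~7633.736209, up to float rounding
```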
 

Ukerric

Bearded Ape
It's probably accessing different databases for each?
It's not databases. It's neural architectures.

Basically, ChatGPT is a "brain" that is pared down to language centers and memory. That's why it fails badly when you stray from chatting and try to have it think. It does not think, it talks. That's all it can do.

ChatGPT is a politician: it's going to talk and tell you what looks like you want to hear, and that's it.
 

Chris

Potato del Grande
It's not databases. It's neural architectures.

Basically, ChatGPT is a "brain" that is pared down to language centers and memory. That's why it fails badly when you stray from chatting and try to have it think. It does not think, it talks. That's all it can do.

ChatGPT is a politician: it's going to talk and tell you what looks like you want to hear, and that's it.
Swap "database" for "architecture" in my statement then. I really didn't think about it in that much detail other than it's using different parts of it's "memory".
 

pwe

Bronze Baronet of the Realm
Dudes, it's just a matter of definition. The input is a text string plus a random seed (and possibly more we don't know of). Do we call text + seed the prompt, or do we call just the text the prompt?

And I did ask ChatGPT about this, but its answer was boring :)
 

Tuco

I got Tuco'd!
Swap "database" for "architecture" in my statement then. I really didn't think about it in that much detail other than it's using different parts of it's "memory".
It's impossible to say, because even if OpenAI published their detailed design, it probably wouldn't be up to date.

 

Captain Suave

Caesar si viveret, ad remum dareris.
It does not think, it talks. That's all it can do.

Interestingly, the bigger these GPTs get, the more general-purpose they become. GPT-3(.5) is the foundation of both DALL-E and ChatGPT with relatively minor modification. At some level, the base GPT is able to identify and extract abstract, meaningful relationships from arbitrary data. Each tool right now is only narrowly enabled, but we might not be that far off from true multi-purpose applications.