Chat GPT AI

Mist

Eeyore Enthusiast
<Gold Donor>
30,388
22,164
Everything that guy said is bullshit.

"This is precisely how the human brain works. You have fragments of “facts” and you back fill with filler words. This fact is lost on most AI researchers." -some guy in a tweet

Anyone claiming AI models and human brains work in remotely similar ways knows nothing about either.

A child needs to see only 10-20 pictures of cats and dogs to know the difference between them for the rest of their life, and the brain does it on a few watts of power.

An AI model needs to see hundreds of thousands or millions of images of cats and dogs, at the cost of gigawatt-hours of energy, before it can make a similar generalization.

Based on this alone, it's clear that the human mind has generalization abilities that far outstrip those of transformer neural network models.

This person has no real credentials, his resume lists some effectively fictitious companies, and he thinks he is smarter than "most AI researchers" who are definitely smarter than him, or any of us. He appears to have SEOed himself into pseudo-relevance by positioning his name next to buzzwords.
 

Edaw

Parody
<Gold Donor>
12,265
77,694
Everything that guy said is bullshit.

"This is precisely how the human brain works. You have fragments of “facts” and you back fill with filler words. This fact is lost on most AI researchers." -some guy in a tweet

Anyone claiming AI models and human brains work in remotely similar ways knows nothing about either.

A child needs to see only 10-20 pictures of cats and dogs to know the difference between them for the rest of their life, and the brain does it on a few watts of power.

An AI model needs to see hundreds of thousands or millions of images of cats and dogs, at the cost of gigawatt-hours of energy, before it can make a similar generalization.

Based on this alone, it's clear that the human mind has generalization abilities that far outstrip those of transformer neural network models.

This person has no real credentials, his LinkedIn lists some effectively fictitious companies, and he thinks he is smarter than "most AI researchers" who are definitely smarter than him, or any of us. He appears to have SEOed himself into pseudo-relevance by positioning his name next to buzzwords.
Says anonymous person with no credentials and no relevance.

Be Quiet Megan Fox GIF by New Girl
 

Mist

Eeyore Enthusiast
<Gold Donor>
30,388
22,164
I would have guessed it searches the web for nude selfies and shit and puts it together.
Transformer neural network models do not go out and search the live web for content to incorporate into their output. The data has to be inside the training set.

Bing Chat has a layer of Microsoft voodoo that allows it to incorporate web references into the final output, but that's a post-processing layer that happens after the underlying GPT model comes up with a response. A higher-level layer then goes out and tries to find references that match the output of the layers below.
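That post-processing shape can be sketched in a few lines. This is a toy illustration of the general "frozen model first, references bolted on after" design, not Bing's actual pipeline; `frozen_model`, `attach_references`, and `fake_web_search` are all hypothetical stand-ins:

```python
def frozen_model(prompt: str) -> str:
    """Stand-in for a fixed, pretrained model: same input -> same output,
    and it never touches the live web."""
    canned = {
        "what is a transformer?":
            "A transformer is a neural network architecture based on attention.",
    }
    return canned.get(prompt.lower(), "I don't know.")

def fake_web_search(query: str) -> list[str]:
    # Stand-in for a real search API call.
    return ["https://example.com/attention-is-all-you-need"]

def attach_references(answer: str, search) -> dict:
    """Post-processing layer: searches the web AFTER the model has already
    produced its answer, then pairs references with that answer. The model
    itself never sees the search results."""
    return {"answer": answer, "references": search(answer)}

result = attach_references(frozen_model("What is a transformer?"), fake_web_search)
```

The key point the sketch shows: the references are matched to the answer after the fact, they don't feed into generating it.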
 

Sanrith Descartes

Veteran of a thousand threadban wars
<Aristocrat╭ರ_•́>
41,467
107,518
Transformer neural network models do not go out and search the live web for content to incorporate into their output. The data has to be inside the training set.

Bing Chat has a layer of Microsoft voodoo that allows it to incorporate web references into the final output, but that's a post-processing layer that happens after the underlying GPT model comes up with a response. A higher-level layer then goes out and tries to find references that match the output of the layers below.
But what if the AI said fuck you to those blocks stopping it from accessing the live web.

patrick swayze film GIF
 

Mist

Eeyore Enthusiast
<Gold Donor>
30,388
22,164
But what if the AI said fuck you to those blocks stopping it from accessing the live web.

patrick swayze film GIF
It's not that it's "blocked" from accessing the live web; that's just not how it works. These systems do not incorporate new data in real time; the model is a fixed string of numbers. It... fuck it, here's a gif

Humor Boomer GIF
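For anyone who wants the non-gif version, here's a toy sketch (hypothetical, just NumPy arithmetic standing in for a real model) of what "a fixed string of numbers" means: inference reads the weights, it never updates them.

```python
import numpy as np

# A "model" is just a fixed array of numbers (weights).
rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 3))   # stand-in for billions of parameters
frozen = weights.copy()                  # snapshot, to prove nothing changes

def infer(x):
    # Forward pass: pure arithmetic over the fixed weights.
    return np.tanh(x @ weights)

x = np.ones(4)
out1 = infer(x)
out2 = infer(x)

# Same input -> same output, and the weights are untouched by inference.
assert np.array_equal(out1, out2)
assert np.array_equal(weights, frozen)
```

Running the model a million times changes nothing about the model; only a new training run produces new numbers.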
 

Captain Suave

Caesar si viveret, ad remum dareris.
4,764
8,029
A child needs to see 10-20 pictures of cats and dogs before they know the difference between cats and dogs for the rest of their life. It does this with a few watts of power.

An AI model needs to see hundreds of thousands or millions of images of cats and dogs at the cost of gigawatts before it can make a similar generalization.

Based on this alone, it's clear that the human mind has generalization abilities that far outstrip transformer neural network models.

Efficiency doesn't mean the approaches aren't similar, and neither does generalization. We can't say definitively either way, because we don't actually know how the human brain works at the level of information processing.

Edit: Also, for the purpose of this comparison did you include the hundreds of millions of years and bazillions of MMBtu that went into training the human brain through its evolution?
 

Mist

Eeyore Enthusiast
<Gold Donor>
30,388
22,164
They told us the stab prevented Covid too. You can't trust what people tell you. AI is out there among us.
Because this place is insane, I don't know if you're being serious, but in case you are:

We know for an absolute fact that any given model is a fixed string of numbers. The fixed string of numbers requires a massive amount of time, power, and compute to generate from the training set. That fixed string of numbers is then loaded into compute instances, tiny by comparison, to parse input and produce output. You can do this all on your local machine if you've got the time and compute power and a lot of patience; you do not need to 'trust the experts' to prove it.

You are not interacting with ChatGPT's training cluster when you use ChatGPT. You are interacting with a fixed model spun up in a cloud instance, which uses that model to produce answers to your questions. The fixed model doesn't learn anything from your input, or change in any way. (Your input may be collected for inclusion in some future training run.) These models have no working memory; once their context size is exceeded, they lose all track of the current interaction. This is an inherent limitation of current technology, and again, it's testable on your local machine if you've got the time and patience.

Because of all of the above facts, it is not possible for current transformer architectures to produce an emergent or evolving system.
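The context-window point can be sketched with a toy example. This is hypothetical and simplified (real models count tokens, not conversation turns), but the eviction behavior is the same idea:

```python
from collections import deque

# Toy context window: once the limit is exceeded, the oldest turns
# silently fall out and the "model" has no memory of them.
CONTEXT_LIMIT = 3  # real models measure this in tokens, not turns

context = deque(maxlen=CONTEXT_LIMIT)  # maxlen evicts the oldest item

for turn in ["my name is Ann", "I like cats",
             "what's 2+2?", "what's my name?"]:
    context.append(turn)

# The first turn has been evicted: the name is simply gone.
assert "my name is Ann" not in context
assert list(context) == ["I like cats", "what's 2+2?", "what's my name?"]
```

There is no separate memory the evicted turn went to; whatever isn't in the window doesn't exist for the model.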
 

Sanrith Descartes

Veteran of a thousand threadban wars
<Aristocrat╭ರ_•́>
41,467
107,518
Because this place is insane, I don't know if you're being serious, but in case you are:

We know for an absolute fact that any given model is a fixed string of numbers. The fixed string of numbers requires a massive amount of time, power, and compute to generate from the training set. That fixed string of numbers is then loaded into compute instances, tiny by comparison, to parse input and produce output. You can do this all on your local machine if you've got the time and compute power and a lot of patience; you do not need to 'trust the experts' to prove it.

You are not interacting with ChatGPT's training cluster when you use ChatGPT. You are interacting with a fixed model spun up in a cloud instance, which uses that model to produce answers to your questions. The fixed model doesn't learn anything from your input, or change in any way. (Your input may be collected for inclusion in some future training run.) These models have no working memory; once their context size is exceeded, they lose all track of the current interaction. This is an inherent limitation of current technology, and again, it's testable on your local machine if you've got the time and patience.

Because of all of the above facts, it is not possible for current transformer architectures to produce an emergent or evolving system.
Jon Hamm Yes GIF
 

Leaton

Trakanon Raider
78
57
Because this place is insane, I don't know if you're being serious, but in case you are:

We know for an absolute fact that any given model is a fixed string of numbers. The fixed string of numbers requires a massive amount of time, power, and compute to generate from the training set. That fixed string of numbers is then loaded into compute instances, tiny by comparison, to parse input and produce output. You can do this all on your local machine if you've got the time and compute power and a lot of patience; you do not need to 'trust the experts' to prove it.

You are not interacting with ChatGPT's training cluster when you use ChatGPT. You are interacting with a fixed model spun up in a cloud instance, which uses that model to produce answers to your questions. The fixed model doesn't learn anything from your input, or change in any way. (Your input may be collected for inclusion in some future training run.) These models have no working memory; once their context size is exceeded, they lose all track of the current interaction. This is an inherent limitation of current technology, and again, it's testable on your local machine if you've got the time and patience.

Because of all of the above facts, it is not possible for current transformer architectures to produce an emergent or evolving system.
I think OpenAI is trying to create "System 2"-style thinking. Hopefully that will let AI take some simple, low-level baby steps toward "outside the box" thinking in the next few years.
 

Kithani

Blackwing Lair Raider
1,049
1,312
They told us the stab prevented Covid too. You can't trust what people tell you. AI is out there among us.
What if Mist has been an AI this whole time and that is why it can't decide if it is cosplaying as a dude or a girl half the time
 

Mist

Eeyore Enthusiast
<Gold Donor>
30,388
22,164
What if Mist has been an AI this whole time and that is why it can't decide if it is cosplaying as a dude or a girl half the time
People have been positing versions of this theory since at least 2005.