"Everything that guy said is bullshit."
Says anonymous person with no credentials and no relevance.
"This is precisely how the human brain works. You have fragments of “facts” and you back fill with filler words. This fact is lost on most AI researchers." -some guy in a tweet
Anyone claiming AI models and human brains work in remotely similar ways knows nothing about either.
A child needs to see 10-20 pictures of cats and dogs before they can tell the difference between cats and dogs for the rest of their life, and their brain does it on a few watts of power.
An AI model needs to see hundreds of thousands or millions of images of cats and dogs, at a cost of gigawatt-hours of energy, before it can make a similar generalization.
Based on this alone, it's clear that the human mind has generalization abilities that far outstrip those of transformer neural network models.
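For scale, here's a quick back-of-envelope version of that comparison. Every number below is an assumed round figure for illustration, not a measurement:

```python
# Back-of-envelope energy comparison. All numbers are rough assumptions.
BRAIN_POWER_W = 20           # whole human brain runs on roughly 20 watts
EXAMPLES = 20                # pictures a child plausibly needs
SECONDS_PER_LOOK = 5         # assumed time spent looking at each one

child_joules = BRAIN_POWER_W * EXAMPLES * SECONDS_PER_LOOK
print(f"child:   ~{child_joules:,} J")           # ~2,000 J

CLUSTER_POWER_W = 1_000_000  # assumed 1 MW training cluster
TRAIN_HOURS = 24             # assumed single day of training

model_joules = CLUSTER_POWER_W * TRAIN_HOURS * 3600
print(f"cluster: ~{model_joules:.1e} J")         # ~8.6e10 J

print(f"ratio:   ~{model_joules / child_joules:.0e}x")  # ~4e7x more energy
```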
This person has no real credentials, his LinkedIn lists a few effectively fictitious companies, and he thinks he is smarter than "most AI researchers" who are definitely smarter than him, or any of us. He appears to have SEOed himself into pseudo-relevance by positioning his name next to buzzwords.
"I would have guessed it searches the web for nude selfies and shit and puts it together."
Transformer neural network models do not go out and search the live web for content to incorporate into their output. The data has to be inside the training set.
"Transformer neural network models do not go out and search the live web for content to incorporate into their output. The data has to be inside the training set."
But what if the AI said fuck you to those blocks stopping it from accessing the live web?
Bing Chat has a layer of Microsoft voodoo that allows it to incorporate web references into the final output, but that's a post-processing layer that happens after the underlying GPT model comes up with a response. A higher-level layer then tries to go out and find references to match the output of the layers below.
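The general shape of that "references bolted on afterwards" pattern looks something like this. This is a hypothetical sketch, not Microsoft's actual pipeline; generate() and web_search() are made-up stand-ins defined inline so the flow is runnable:

```python
# Sketch of post-hoc reference attachment: the frozen model writes the
# answer first, then a separate layer hunts for sources to match it.
def generate(prompt: str) -> str:
    # Stand-in for the frozen LLM call -- returns canned text here.
    return "Transformers were introduced in 2017. They rely on attention."

def web_search(query: str) -> list[str]:
    # Stand-in for a live web search API.
    return [f"https://example.com/search?q={query.replace(' ', '+')}"]

def answer_with_references(prompt: str) -> tuple[str, list[str]]:
    draft = generate(prompt)               # 1. fixed model produces the answer
    refs = []
    for sentence in draft.split(". "):     # 2. higher layer searches the web
        refs += web_search(sentence)[:1]   #    for something matching each claim
    return draft, refs                     # 3. references attached after the fact

print(answer_with_references("when were transformers introduced?"))
```

Note the order of operations: the search results never feed back into the model's text, which is why the references sometimes only loosely match the claims.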
"But what if the AI said fuck you to those blocks stopping it from accessing the live web?"
It's not that it's "blocked" from accessing the live web; it's just not how it works. These systems do not incorporate new data in real time; the model is a fixed string of numbers. It... fuck it, here's a gif
They told us the stab prevented Covid too. You can't trust what people tell you. AI is out there among us.
"They told us the stab prevented Covid too. You can't trust what people tell you. AI is out there among us."
Because this place is insane, I don't know if you're being serious, but in case you are:
We know for an absolute fact that any given model is a fixed string of numbers. The fixed string of numbers requires a massive amount of time, power, and compute to generate from the training set. That fixed string of numbers is then loaded into compute instances, tiny by comparison, to parse input and produce output. You can do this all on your local machine if you've got the time and compute power and a lot of patience; you do not need to 'trust the experts' to prove it.
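A minimal version of that local test, assuming the Hugging Face transformers library with the small gpt2 checkpoint as a stand-in for a big model: hash the weights, run inference, hash again.

```python
import hashlib
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small stand-in model (~500 MB) so this runs on an ordinary machine.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def weight_hash(m) -> str:
    """SHA-256 over every tensor in the model -- its 'fixed string of numbers'."""
    h = hashlib.sha256()
    for name, tensor in sorted(m.state_dict().items()):
        h.update(name.encode())
        h.update(tensor.cpu().numpy().tobytes())
    return h.hexdigest()

before = weight_hash(model)
ids = tok("Cats and dogs are", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**ids, max_new_tokens=20)
print(tok.decode(out[0]))

# Producing output did not change a single weight.
assert weight_hash(model) == before
```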
You are not interacting with ChatGPT's training cluster when you use ChatGPT. You are interacting with a fixed model spun up into a cloud instance that then uses that fixed model to produce answers to your questions. The fixed model doesn't learn anything from your input, or change in any way. (Your input may be collected for inclusion in some future training run.) These models have no working memory; once their context size is exceeded, they lose all track of the current interaction. This is an inherent limitation of current technology, again, testable on your local machine if you've got the time and patience.
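Here's the context-size limit in toy form. This is a fake eight-token "window" standing in for a real model's thousands, but the failure mode is identical:

```python
# Toy illustration of a fixed context window: once something scrolls out
# of the window, the model has no way to ever see it again.
MAX_CONTEXT = 8   # pretend the model can only attend to 8 tokens

history: list[str] = []

def visible_context(user_msg: str) -> list[str]:
    history.extend(user_msg.split())
    return history[-MAX_CONTEXT:]   # everything the "model" can condition on

visible_context("my name is Ada please remember that")
print(visible_context("now here is a pile of unrelated filler words"))
# 'Ada' has already fallen out of the window; no trace of it remains.
```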
Because of all of the above facts, it is not possible for current transformer architectures to produce an emergent or evolving system.
"Because this place is insane, I don't know if you're being serious, but in case you are: [...]"
I think OpenAI is trying to create "System 2" type thinking. Hopefully that might let AI do some simple, low-level, baby-step "outside the box" thinking in the next few years.
"Says anonymous person with no credentials and no relevance."
I wonder how Mist pretended to look smart before Google came along.
"They told us the stab prevented Covid too. You can't trust what people tell you. AI is out there among us."
What if Mist has been an AI this whole time, and that is why it can't decide if it is cosplaying as a dude or a girl half the time?
"What if Mist has been an AI this whole time, and that is why it can't decide if it is cosplaying as a dude or a girl half the time?"
People have been positing versions of this theory since at least as far back as 2005.