This is still just "a neat trick that happens when you upscale autocomplete by a fucking lot."
Yes, exactly. But it's important to talk about the tech as it works right now, not some imagined future tech.

*yet
> The domain itself looks suspicious, let alone directly linking a .mp4 file.

You mean other than the actual website linked right above it that's not just a straight link to the .mp4?
But outside of that, I'm impressed that you run your security so low that a simple link could lead to your computer getting aids.
> NPCs believing that they're NPCs.

It's that a large fraction of human thought and speech is nothing more than a biological, upscaled autocomplete.
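For what it's worth, the "autocomplete" framing has a concrete meaning: predict the next token given the tokens so far. Here's a minimal toy sketch of that idea (the corpus and the `autocomplete` helper are made up purely for illustration); an LLM does the same next-token prediction, just with a neural network instead of a lookup table and a vastly larger training set.

```python
# Toy sketch: "autocomplete" as next-word prediction from frequency counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word: str, steps: int = 4) -> list[str]:
    """Repeatedly pick the most frequent next word -- greedy 'autocomplete'."""
    out = [word]
    for _ in range(steps):
        candidates = following.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return out

print(autocomplete("the"))  # ['the', 'cat', 'sat', 'on', 'the']
```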
Humans need nowhere near this size of a training set in order to start generating novel speech.
Humans don't start from a blank slate. How much information have our ancestors ingested over three and a half billion years of evolution? How much energy did that process take?
Not quite relevant to what I'm talking about. You're confusing knowledge vs language acquisition. Each new human is a blank slate when it comes to language tokens, phoneme acquisition, etc. Only the capacity for language is passed down genetically.
You could say that each new human benefits from the optimization that has been performed on our languages over time to make them easier to acquire, but then you actually look at our languages and realize they're anything but optimized for acquisition.
This is where having an understanding of developmental psychology/developmental linguistics comes in handy for understanding vast differences between humans and LLMs.
The best security measure is using your brain.
> An LLM is the capacity for language.

No, an LLM is a pre-trained model of a language, or usually multiple languages. The LLM is the output of the training function: a very long, static string of digits representing a map of the data it was trained on. That static map is then used to turn prompts into responses.
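A minimal sketch of that point, assuming the Hugging Face `transformers` package and the small public `gpt2` checkpoint (both chosen here just for illustration): the pre-trained weights are loaded as-is and nothing in them changes while generating; "answering a prompt" is just running text through that fixed map.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")  # load the pre-trained weights
model.eval()  # inference only: none of the weights change while generating

num_params = sum(p.numel() for p in model.parameters())
print(f"static parameters in the map: {num_params:,}")  # ~124 million for gpt2

prompt = "A large language model is"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```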
> My point is that I'm not clicking on a link to "goody2.ai", ever. Nor "brain.wtf", for that matter. The original post reads like a textbook phishing email.

Coincidentally, 30 seconds of checking out that website leads you to BRAIN
Proceeding from false premises, your entire post is wrong and displays no understanding of what you're talking about.
Party 2 makes several points, and their correctness depends on the context and interpretation:
1. **Language Acquisition in Humans**: It's generally accepted in linguistics and developmental psychology that humans are born with the capacity for language acquisition. This capacity includes the ability to recognize and produce phonemes, words, and sentences. However, the specifics of language (e.g., vocabulary, grammar) are learned through exposure to language in the environment.
2. **Optimization of Languages**: Party 2 suggests that languages are not optimized for easy acquisition. This viewpoint is debatable. While languages naturally evolve based on cultural, social, and historical factors rather than with the specific goal of being easy to acquire, linguists do recognize patterns in language structures that facilitate learning, such as regular grammar rules and frequent vocabulary usage.
3. **Comparison with LLMs**: Party 2 draws a comparison between human language acquisition and the functioning of Large Language Models (LLMs). While LLMs are trained on vast amounts of text data to learn patterns in language, they don't acquire language in the same way humans do. LLMs are trained algorithms that process and generate text based on statistical patterns, whereas humans learn language through exposure, interaction, and cognitive development.
Overall, Party 2's statements capture some nuances of language acquisition and its intersection with LLMs, but there may be room for further clarification or refinement in their arguments.
---
Congratulations, Mist... your point even requires clarification from GPT-3.5.
You have no idea what you're talking about, so you then asked a bullshit machine, which then agreed with me, but you lack the comprehension to realize it.
Good job, and an excellent example of why humanity is doomed and probably deserves it.