User got the GitHub AI to list the rules it's not supposed to list.
View attachment 473503
The same for Bing Chat
Are these the actual rules or are they just "rules" that the chatbot came up with?
I wonder if corps will publish datasets. Or if new standards will emerge that tag text data in useful ways to common AI models. Image datasets are super common ( 20+ Best Image Datasets for Computer Vision [2023] ). Sometimes companies will publish datasets as a way to crowdsource solutions, ex: Open Dataset – Waymo
The models aren't the hardest part, it's the data tagging, labeling, classification, validation. Especially with ingestion of new data if you hope to keep yourself up to date. Wikipedia is perhaps something of an analogue here and it's chock full of misinformation. Open source is going to be equally biased by activists and other interest groups, just in different ways.
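Just to make "tagging, labeling, classification, validation" concrete, here's a rough sketch of what one labeled image record and a sanity check over it might look like. The field names and label set are made up for illustration (loosely COCO-style), not any real dataset's schema:

```python
# Toy example of a labeled image record and a basic validation pass.
# Field names are illustrative (loosely COCO-style), not a real dataset schema.

record = {
    "image_id": 12345,
    "width": 1920,
    "height": 1080,
    "annotations": [
        # Each annotation: a class label plus a bounding box (x, y, w, h) in pixels.
        {"label": "car",        "bbox": (100, 200, 300, 150)},
        {"label": "pedestrian", "bbox": (800, 400, 60, 180)},
    ],
}

ALLOWED_LABELS = {"car", "pedestrian", "cyclist", "traffic_light"}

def validate(rec):
    """Return a list of problems found in one labeled record."""
    problems = []
    for i, ann in enumerate(rec["annotations"]):
        if ann["label"] not in ALLOWED_LABELS:
            problems.append(f"annotation {i}: unknown label {ann['label']!r}")
        x, y, w, h = ann["bbox"]
        if w <= 0 or h <= 0:
            problems.append(f"annotation {i}: degenerate box")
        if x + w > rec["width"] or y + h > rec["height"]:
            problems.append(f"annotation {i}: box falls outside the image")
    return problems

print(validate(record))   # [] means this record passes the checks
```

Multiply that by millions of images, changing label sets, and disagreeing human annotators, and you get a sense of why the data pipeline is the expensive part.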
Since alternative tech platforms are often destroyed, it seems unlikely that a legitimate rival product with actual investment, one that produces controversial outputs, would be allowed to continue operating.
Good or bad looking bullshit in mathematics is still bullshit. And AI produces a lot of bullshit. This isn't an undergraduate biology book that is being discussed but one where mathematically correct solutions are required. If one produces a set of hundreds of mathematical solutions using AI, how does one know these solutions are correct? One certainly can't assume they are correct because AI constantly produces incorrect answers to basic math problems.
Solutions in (pre-AI) teaching material are often incorrect.
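For what it's worth, part of the answer to "how does one know these solutions are correct?" is that some of them can be checked mechanically rather than trusted. A minimal sketch, assuming the claimed answer is a symbolic antiderivative; the integral here is a made-up example and sympy is just one convenient checker:

```python
import sympy as sp

x = sp.symbols("x")

# Hypothetical example: the integrand from a problem set...
integrand = x * sp.exp(x)
# ...and the antiderivative an LLM claims for it.
claimed_answer = x * sp.exp(x) - sp.exp(x)

# Differentiate the claimed answer; if it simplifies back to the integrand,
# the claim checks out (up to a constant of integration).
residual = sp.simplify(sp.diff(claimed_answer, x) - integrand)
print("correct" if residual == 0 else f"wrong, off by {residual}")
```

That only covers the mechanically checkable parts, of course; proofs and word problems still need a human eyeball.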
Like Qhue said, this stuff was the academic grunt labor. It wasn't farmed out to the best and brightest.
Don't get me wrong, LLMs produce good looking bullshit a lot of the time. The thing is, that describes most of academia too. And pretty much 100% of online ""content"". Let's just simplify this a bit. Most human produced content is good looking bullshit.
I think people overestimate the value of the criticism that LLMs produce good looking bullshit. If it's as good looking as the other bullshit available, and it takes much less time and money to produce, it wins.
When I said "online content" I was not referring to "online content for hard sciences at undergraduate level", I was being much more general.Good or bad looking bullshit in mathematics is still bullshit. And AI produces a lot of bullshit. This isn't an undergraduate biology book that is being discussed but one where mathematically correct solutions are required. If one produces a set of hundreds of mathematical solutions using AI, how does one know these solutions are correct? One certainly can't assume they are correct because AI constantly produces incorrect answers to basic math problems.
As an aside, the online content for hard sciences at undergraduate level is excellent. How would you know that it is bad since you do not consume any?
the question is "does it make more errors than human produced teaching materials?"
OK that's a fair point but all I'm seeing is mentions of chatGPT making math errors. Did the answers from the LLM-produced teaching materials turn out to be wrong?
Right, we're saying yes, by a good margin, and it makes a type of error that is more work to fix.
For most content you are right. But not for physics and math.
When I said "online content" I was not referring to "online content for hard sciences at undergraduate level", I was being much more general.
Look, it all comes down to error rate. I'm just saying there were plenty of errors in existing teaching materials. The question should not be "how can AI content be useful since it makes errors", the question is "does it make more errors than human produced teaching materials?"
I'm just saying perfection isn't required, the bar is lower than that. For most content.
It understands math only from a language and token manipulation standpoint. It does not 'do math.' If you want to do math, use the Wolfram Alpha plugin.
As far as I know, the public version of chatGPT doesn't actually understand math. Is that a purposely disabled feature or is it an ability limitation as of right now?
I think it depends on the topic/prompt. Like mentioned before it generates text and is shit at any kind of computation.
It’s literally a generative language model, exactly what it’s called. It doesn’t understand anything it’s putting out, it is just putting out the words that based on its massive scraping of data make the most sense in that order. What’s insane to me is just how that simple principle can be used to put out so many (correct) things (but also many incorrect too). That’s also why the problem of confabulation and inaccuracy is going to be a very difficult one to solve. As far as I’m aware, it has no innate error-correcting ability, even at a fundamental level. If you ask it to error correct it will just generate more bullshit based on the request.
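A toy version of "putting out the words that based on its massive scraping of data make the most sense in that order": score every word in a vocabulary given the text so far, turn the scores into probabilities, and sample one. The vocabulary and scores below are invented, and this is just the bare shape of an autoregressive sampler, not anything from ChatGPT itself:

```python
import numpy as np

# Invented vocabulary and scores for the next word after "the cat sat on the".
vocab  = ["mat", "roof", "piano", "ocean"]
logits = np.array([3.2, 1.1, 0.3, -2.0])   # higher score = "makes more sense"

# Softmax turns the scores into a probability distribution over the vocabulary.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Sampling usually picks "mat", occasionally something less likely.
next_word = np.random.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_word)
```

Repeat that one word at a time and you get fluent text with no notion of whether any of it is true, which is exactly the confabulation problem.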
Ever since I became aware of this stuff, every now and then I'm troubled by the thought that maybe human consciousness operates exactly the same way, with just a few superficial layers on top of it.
What’s insane to me is just how that simple principle can be used to put out so many (correct) things (but also many incorrect too).
No. Human language acquisition works somewhat the same way, with a bunch of additional layers. These things don't even have a knowledge schema, they are not anywhere close to a brain. Integrated consciousness is very clearly something else entirely.
Ever since I became aware of this stuff, every now and then I'm troubled by the thought that maybe human consciousness operates exactly the same way, with just a few superficial layers on top of it.
It's not going out and 'finding' anything. The GPT model is just a single number, billions of digits long, that represents a statistical model of how every possible token aka word relates in frequency to the word before it. This number was generated from a pre-training run. There are other application layers on top of that model to make it useful to human users.
Michio Kaku was on Rogan last week. He discussed chatbots a lot. He didn't sound like a fan. He said it's wrong so often because when queried, it just splices together snippets of stuff it finds. It doesn't know or care that it's correct or some bullshit it finds posted on FOH.
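The "relates in frequency to the word before it" description is closest to a bigram model, so here's a toy one built from a made-up corpus. A real GPT conditions on a long context through billions of learned parameters rather than a lookup table, but the "most statistically likely continuation" idea is the same:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real training set would be trillions of tokens.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word (bigram frequencies).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# "Generate" by always taking the most frequent continuation.
word, output = "the", ["the"]
for _ in range(5):
    word = following[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))   # e.g. "the cat sat on the cat"
```

Nothing in that table knows or cares whether the output is true, which is the point the post is making about splicing snippets together.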