Chat GPT AI


Tuco

I got Tuco'd!
<Gold Donor>
47,360
80,745
The models aren't the hardest part; it's the data tagging, labeling, classification, and validation. Especially with ingestion of new data if you hope to keep yourself up to date. Wikipedia is perhaps something of an analogue here and it's chock full of misinformation. Open source is going to be equally biased by activists and other interest groups, just in different ways.

Since alternative tech platforms are often destroyed, it seems unlikely a legitimate rival product with actual investment that has controversial outputs would be allowed to continue operations.
I wonder if corps will publish datasets. Or if new standards will emerge that tag text data in useful ways to common AI models. Image datasets are super common (20+ Best Image Datasets for Computer Vision [2023]). Sometimes companies will publish datasets as a way to crowdsource solutions, ex: Open Dataset – Waymo

I don't know what ChatGPT datasets look like, but I could imagine a straightforward method to annotate a book, paper, website text, etc. This type of data will become public over time, along with tools to ingest it and the models that are trained for it, and the training itself.

To your point though, ideological bias is going to be pervasive in open-source exactly like you said. There's not going to be a dominant tech platform purity-testing data to feed a chatbot so it produces "impartial" output. However, I could definitely see a group sitting on top of the open-source community and just exposing a chatbot that has fewer filters.
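For illustration, a minimal sketch of what one annotated text record could look like in a hypothetical JSONL format (the field names here are invented for the example and don't follow any published standard):

```python
import json

# Hypothetical annotation record for one passage of source text.
# The field names are invented for illustration only; they do not
# follow any published dataset standard.
record = {
    "source": "https://example.org/some-paper",  # where the text came from
    "text": "Water boils at 100 degrees Celsius at sea level.",
    "labels": {
        "domain": "physics",
        "license": "CC-BY-4.0",
        "quality": "reviewed",  # e.g. reviewed vs. unreviewed
    },
}

# One JSON object per line (JSONL) is a common way text datasets get shipped.
print(json.dumps(record))
```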
 

Asshat wormie

2023 Asshat Award Winner
<Gold Donor>
16,820
30,968
Solutions in (pre-AI) teaching material are often incorrect.

Like Qhue said, this stuff was the academic grunt labor. It wasn't farmed out to the best and brightest.

Don't get me wrong, LLMs produce good looking bullshit a lot of the time. The thing is, that describes most of academia too. And pretty much 100% of online ""content"". Let's just simplify this a bit. Most human-produced content is good looking bullshit.

I think people overestimate the value of the criticism that LLMs produce good looking bullshit. If it's as good looking as the other bullshit available, and it takes much less time and money to produce, it wins.
Good or bad looking bullshit in mathematics is still bullshit. And AI produces a lot of bullshit. This isn't an undergraduate biology book that is being discussed but one where mathematically correct solutions are required. If one produces a set of hundreds of mathematical solutions using AI, how does one know these solutions are correct? One certainly can't assume they are correct because AI constantly produces incorrect answers to basic math problems.

As an aside, the online content for hard sciences at undergraduate level is excellent. How would you know that it is bad since you do not consume any?
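As a rough illustration of what checking even a single solution involves, here is a minimal sketch that spot-checks an AI-claimed answer by substituting it back into the original equation with sympy (the equation and the claimed answers are made up for the example):

```python
import sympy as sp

x = sp.symbols("x")

# Made-up example: the problem and the "AI-claimed" answers below are
# placeholders, just to show the substitution check.
equation = sp.Eq(x**2 - 5*x + 6, 0)
claimed_solutions = [2, 4]  # suppose the AI answered x = 2 and x = 4

for sol in claimed_solutions:
    residual = sp.simplify(equation.lhs.subs(x, sol) - equation.rhs)
    print(f"x = {sol}: {'checks out' if residual == 0 else 'wrong'}")

# x = 2: checks out
# x = 4: wrong (4**2 - 5*4 + 6 = 2, not 0)
```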
 

Aldarion

Egg Nazi
9,723
26,646
Good or bad looking bullshit in mathematics is still bullshit. And AI produces a lot of bullshit. This isn't an undergraduate biology book that is being discussed but one where mathematically correct solutions are required. If one produces a set of hundreds of mathematical solutions using AI, how does one know these solutions are correct? One certainly can't assume they are correct because AI constantly produces incorrect answers to basic math problems.

As an aside, the online content for hard sciences at undergraduate level is excellent. How would you know that it is bad since you do not consume any?
When I said "online content" I was not referring to "online content for hard sciences at undergraduate level", I was being much more general.

Look, it all comes down to error rate. I'm just saying there were plenty of errors in existing teaching materials. The question should not be "how can AI content be useful since it makes errors"; the question is "does it make more errors than human-produced teaching materials?"

I'm just saying perfection isn't required, the bar is lower than that. For most content.
 

Aldarion

Egg Nazi
9,723
26,646
Right, we're saying yes, by a good margin, and it makes a type of error that is more work to fix.
OK, that's a fair point, but all I'm seeing is mentions of chatGPT making math errors. Did the answers from the LLM-produced teaching materials turn out to be wrong?

Also, even if we rule this out for any math courses (and I think that's still debatable because it's based on what a single LLM can do right now in May 2023), that still leaves a lot of other classes.
 

Asshat wormie

2023 Asshat Award Winner
<Gold Donor>
16,820
30,968
When I said "online content" I was not referring to "online content for hard sciences at undergraduate level", I was being much more general.

Look, it all comes down to error rate. I'm just saying there were plenty of errors in existing teaching materials. The question should not be "how can AI content be useful since it makes errors"; the question is "does it make more errors than human-produced teaching materials?"

I'm just saying perfection isn't required, the bar is lower than that. For most content.
For most content you are right. But not for physics and math.
 
  • 1Like
Reactions: 1 user

Lambourne

Ahn'Qiraj Raider
2,863
6,830
Are these the actual rules or are they just "rules" that the chatbot came up with?

It's not impossible but he said he got the exact same result from different workarounds. There's also a typo on the second line of the github bot rules that AI probably wouldn't make (asked for you name instead of asked for your name).

There is indeed a problem here in that you can never be sure you are not being deceived by a smarter / more skilled liar. Perhaps running multiple instances of the same bot could detect inconsistencies.
 

jooka

marco esquandolas
<Bronze Donator>
14,858
6,391
As far as I know, the public version of chatGPT doesn't actually understand math. Is that a purposely disabled feature, or is it an ability limitation as of right now?
 

Mist

REEEEeyore
<Gold Donor>
31,197
23,359
As far as I know, the public version of chatGPT doesn't actually understand math. Is that a purposely disabled feature, or is it an ability limitation as of right now?
It understands math only from a language and token manipulation standpoint. It does not 'do math.' If you want to do math, use the Wolfram Alpha plugin.

Ironically, the base ChatGPT model was better at math before RLHF (reinforcement learning from human feedback). Once the RLHF layers were applied, it actually lost the ability to understand probability very well, because the way human beings talk about probability and understand probability is so fucking bad that it poisoned the model.
 
  • 1Worf
Reactions: 1 user

Pasteton

Blackwing Lair Raider
2,733
1,919
It’s literally a generative language model, exactly what it’s called. It doesn’t understand anything it’s putting out; it is just putting out the words that, based on its massive scraping of data, make the most sense in that order. What’s insane to me is just how that simple principle can be used to put out so many (correct) things (but also many incorrect too). That’s also why the problem of confabulation and inaccuracy is going to be a very difficult one to solve. As far as I’m aware, it has no innate error-correcting ability, from even a fundamental level. If you ask it to error correct, it will just generate more bullshit based on the request
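That "most likely next words" idea can be shown with a toy sketch; an actual LLM uses learned weights over tokens rather than raw counts, but the principle of emitting whatever statistically tends to follow, with no notion of truth, is the same:

```python
import random
from collections import defaultdict

# Tiny toy corpus standing in for the "massive scraping of data".
corpus = "the bubble rises to the surface and the bubble expands".split()

# Count which word follows which (a bigram table).
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

# Generate text by repeatedly emitting a statistically plausible next word.
word = "the"
output = [word]
for _ in range(6):
    word = random.choice(following.get(word, corpus))
    output.append(word)

print(" ".join(output))  # fluent-looking output with no notion of correctness
```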
 
  • 1Like
Reactions: 1 user

Asshat wormie

2023 Asshat Award Winner
<Gold Donor>
16,820
30,968
Here is an example of potato physics:

Q: "A diver 60 m deep in 10 degrees Celsius fresh water exhales a 1.3-cm-diameter bubble. What is the bubble's diameter just as it reaches the surface of the lake, where the water temperature is 20 degrees Celsius ? Assume that the air bubble is always in thermal equilibrium with the surrounding water."

A: "To solve this problem, we can use Boyle's law and the ideal gas law. Boyle's law states that at constant temperature, the pressure and volume of a gas are inversely proportional. The ideal gas law combines Boyle's law with the relationship between pressure, volume, and temperature.

Let's start by calculating the initial volume of the bubble at the diver's depth of 60 m. We know the initial diameter of the bubble is 1.3 cm, which means the initial radius is 0.65 cm (or 0.0065 m). We can calculate the initial volume using the formula for the volume of a sphere:

V₁ = (4/3) * π * r₁³

where V₁ is the initial volume and r₁ is the initial radius.

V₁ = (4/3) * π * (0.0065 m)³
V₁ = 0.00000008724 m³

Next, we need to calculate the final volume of the bubble at the surface of the lake. Since the air bubble is in thermal equilibrium with the surrounding water, the temperature of the bubble will also be 20 degrees Celsius. We'll use the ideal gas law to calculate the final volume.

PV = nRT

where P is the pressure, V is the volume, n is the number of moles of gas, R is the ideal gas constant, and T is the temperature.

At the diver's depth of 60 m, the pressure can be calculated using the hydrostatic pressure formula:

P₁ = ρ * g * h

where P₁ is the initial pressure, ρ is the density of water, g is the acceleration due to gravity, and h is the depth.

The density of fresh water is approximately 1000 kg/m³, and the acceleration due to gravity is approximately 9.8 m/s². Plugging in these values, we get:

P₁ = 1000 kg/m³ * 9.8 m/s² * 60 m
P₁ = 588,000 Pa

Since the bubble is in thermal equilibrium with the surrounding water, the final pressure will also be 588,000 Pa.

Now, we can rearrange the ideal gas law equation to solve for the final volume:

V₂ = (P₁ * V₁ * T₂) / (P₂ * T₁)

where V₂ is the final volume, P₂ is the final pressure, T₁ is the initial temperature, and T₂ is the final temperature.

Plugging in the known values, we have:

V₂ = (588,000 Pa * 0.00000008724 m³ * (20 + 273.15) K) / (588,000 Pa * (10 + 273.15) K)
V₂ ≈ 0.00000009019 m³

Finally, we can calculate the final radius of the bubble using the formula for the volume of a sphere:

V₂ = (4/3) * π * r₂³

0.00000009019 m³ = (4/3) * π * r₂³

Solving for r₂:

r₂ = (0.00000009019 m³ / ((4/3) * π))^(1/3)
r₂ ≈ 0.00669 m

Therefore, the diameter of the bubble just as it reaches the surface of the lake is approximately 0.01338 m or 1.338 cm."


Now the approach it took is correct, but the answer is wrong. Why is it wrong? Because P2 is not the same as P1. P1 is the pressure under water and P2 is the pressure at the surface. But the AI has no concept of what is what and just spews out, with 100% self-certainty, an answer that is incorrect. I would not trust this thing with any sort of math, even basic shit like the above.
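For reference, a minimal sketch of the corrected calculation described above: P2 at the surface is just atmospheric pressure, and the pressure at depth includes the atmosphere on top of the water column (standard atmospheric pressure of 101,325 Pa is assumed here; the other numbers are from the quoted problem):

```python
# Given values from the problem statement.
d1 = 1.3e-2        # initial bubble diameter at 60 m depth, in metres
depth = 60.0       # m
T1 = 10 + 273.15   # K, water temperature at depth
T2 = 20 + 273.15   # K, water temperature at the surface

rho = 1000.0       # kg/m^3, fresh water
g = 9.8            # m/s^2
P_atm = 101_325.0  # Pa, assumed standard atmospheric pressure

# Pressure at depth includes the atmosphere sitting on top of the lake.
P1 = P_atm + rho * g * depth
# Pressure at the surface is just atmospheric -- this is where the quoted
# answer went wrong by reusing the depth pressure for P2.
P2 = P_atm

# Combined gas law: P1*V1/T1 = P2*V2/T2. Volume scales with diameter cubed,
# so the diameter scales with the cube root of the volume ratio.
d2 = d1 * ((P1 / P2) * (T2 / T1)) ** (1 / 3)
print(f"surface diameter ≈ {d2 * 100:.2f} cm")  # ≈ 2.5 cm, not ≈ 1.3 cm
```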
 

jooka

marco esquandolas
<Bronze Donator>
14,858
6,391
Is that just vanilla chatGPT?


What if you tailor your chatGPT experience with plugins that fit your criteria and then limit its scope to, say, a whitelisted set of sites rather than the entirety of the web? How does its accuracy fare then?
 

ToeMissile

Pronouns: zie/zhem/zer
<Gold Donor>
3,169
2,053
It’s literally a generative language model, exactly what it’s called. It doesn’t understand anything it’s putting out; it is just putting out the words that, based on its massive scraping of data, make the most sense in that order. What’s insane to me is just how that simple principle can be used to put out so many (correct) things (but also many incorrect too). That’s also why the problem of confabulation and inaccuracy is going to be a very difficult one to solve. As far as I’m aware, it has no innate error-correcting ability, from even a fundamental level. If you ask it to error correct, it will just generate more bullshit based on the request
I think it depends on the topic/prompt. Like mentioned before, it generates text and is shit at any kind of computation.

This recent Lex Fridman podcast w/ Stephen Wolfram touches on a lot of the stuff from the last set of posts here. Pretty interesting, IMO.

 
  • 1Like
Reactions: 1 user

Mist

REEEEeyore
<Gold Donor>
31,197
23,359
Yeah, anyone interested in this stuff should be listening to the recent Lex Fridman episodes: Sam Altman, Eliezer Yudkowsky, Max Tegmark and the one linked above.
 
  • 1Barf
  • 1Like
Reactions: 1 users

Aldarion

Egg Nazi
9,723
26,646
What’s insane to me is just how that simple principle can be used to put out so many (correct) things (but also many incorrect too).
Ever since I became aware of this stuff, every now and then I'm troubled by the thought that maybe human consciousness operates exactly the same way, with just a few superficial layers on top of it.

These things just work so well. And remind me so much of a human bullshitter. And a human bullshitter is just someone I can tell is bullshitting me, i.e. a bad bullshitter.

LLMs produce such convincing bullshit I can't shake the notion that maybe what we're doing is fundamentally the same thing.
 
  • 2Like
Reactions: 1 users

Sanrith Descartes

You have insufficient privileges to reply here.
<Aristocrat╭ರ_•́>
44,495
120,676
Michio Kaku was on Rogan last week. He discussed chatbots a lot. He didn't sound like a fan. He said it's wrong so often because when queried, it just splices together snippets of stuff it finds. It doesn't know or care whether it's correct or some bullshit it finds posted on FOH.
 

Mist

REEEEeyore
<Gold Donor>
31,197
23,359
Ever since I became aware of this stuff, every now and then I'm troubled by the thought that maybe human consciousness operates exactly the same way, with just a few superficial layers on top of it.
No. Human language acquisition works somewhat the same way, with a bunch of additional layers. These things don't even have a knowledge schema; they are not anywhere close to a brain. Integrated consciousness is very clearly something else entirely.

What surprises me more is AI image generation. "Is this how dreams work?" is a very real experience.
 

Mist

REEEEeyore
<Gold Donor>
31,197
23,359
Michio Kaku was on Rogan last week. He discussed chatboys a lot. He didmt sound like a fan. He said it's wrong so often because when queried, it just splices together snippets of stuff it finds. It doesn't know or care that it's correct or some bullshit it finds posted on FOH.
It's not going out and 'finding' anything. The GPT model is just a single number, billions of digits long, that represents a statistical model of how every possible token (aka word) relates in frequency to the word before it. This number was generated from a pre-training run. There are other application layers on top of that model to make it useful to human users.
 
  • 1Like
Reactions: 1 user