I remember when Bing first rolled out, they failed to put in any NSFW defaults and videos were set to autoplay. Those first couple of days were pretty funny.

They reportedly disabled it with regard to NSFW stuff, so what's the point?
Be more impressed if it said you need to meet new friends that eat meat.

Works with their sample question though (see the right panel)
[screenshot attachments 457780 and 457782]
It's a potato.

So assuming the machine learning thing, going forward would it know the answer to the first query if asked again? I'm trying to follow how response one and response two can both be valid. It didn't have the info in response one but had it in response two.
[screenshot attachment 457942]
So what, did we get the diversity hire AI?
Much like forum posters.

Because it doesn't know anything or have a mechanism to detect or correct what we see as inconsistencies in responses. It's just producing words.
What's surprising to me is how bad it is at the most direct factual queries. It really does appear to be purely a nonsense-generating algorithm.

Because it doesn't know anything or have a mechanism to detect or correct what we see as inconsistencies in responses. It's just producing words.
Query said:
list 3 Google scholar links to studies about covid-19 vaccines
None of the listed papers exist.

chatGPT said:
Efficacy and safety of the BNT162b2 mRNA COVID-19 vaccine: an interim analysis of a randomised, double-blind, placebo-controlled phase 3 trial. Spatial-Frequency Feature Learning and Classification of Motor Imagery EEG Based on Deep Convolution Neural Network
AstraZeneca COVID-19 Vaccine: Interim Clinical Trial Results. Social Support and Depressive Outcomes in Older Adults: An Analysis of the Health and Retirement Study
Safety and Efficacy of the mRNA-1273 SARS-CoV-2 Vaccine. High-Resolution Thermal Imaging and Analysis of TIG Weld Pool Phase Transitions
I'm surprised that they lead anywhere. I would have expected garbage in the format of a valid HTTP link.

The links lead to other, unrelated studies.
This also shows the wisdom of giving this to the global masses to play with. Millions of alpha testers are invaluable.

I'm surprised that they lead anywhere. I would have expected garbage in the format of a valid HTTP link.
Honestly, the great success of this round of LLMs is that people instinctively want to hold them to human standards, assuming that they are in some procedural sense understanding and answering our questions. The proper expectation is to think of each request as "Please produce a series of characters that takes the form of training data, given the following prompt:" It's more or less an accident of scale that we get anything that makes sense to a person.
At some level this probably is how our brains work, but we have many layers of secondary filtering systems and expectation modeling in addition to real-time re-training, none of which these models will possess for years. What ChatGPT is now is something like a human in whom these systems are broken - a bullshitting sociopath vomiting out the first thing that has the structure of a sensible response, regardless of content or consistency.
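The "produce a series of characters that takes the form of training data" framing can be illustrated with a deliberately crude toy - a character-level bigram sampler. This is a sketch of the general idea, not how GPT works internally (real models use neural networks over long contexts), but it makes the key point concrete: the model only tracks which character tends to follow which, with no notion of truth, meaning, or consistency anywhere in the loop.

```python
import random
from collections import defaultdict

def train_bigram(corpus):
    """Record, for each character, the characters observed to follow it."""
    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)
    return follows

def generate(follows, seed, length):
    """Emit text that merely *looks like* the training data.

    Each step picks a plausible next character given only the previous
    one - nothing here models facts or checks for consistency.
    """
    out = seed
    rng = random.Random(0)  # fixed seed so the sketch is reproducible
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out += rng.choice(candidates)
    return out

corpus = "the cat sat on the mat. the dog sat on the log."
model = train_bigram(corpus)
print(generate(model, "th", 40))
```

The output is locally plausible English-like gibberish: every adjacent pair of characters occurred somewhere in the training text, yet the whole string asserts nothing. Scaling that basic move up by many orders of magnitude is, roughly, why ChatGPT's answers have the shape of sense without any guarantee of it.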
- Start an EvilElon twitter account.
- For every tweet he posts, you generate a response that assumes he is good.
His latest tweet:
A taste:
[screenshot attachment 457982]
Sometimes you can just keep jamming the Edit and Resubmit buttons to get it to spit out something accurate (which is useless, since you need to be able, as a user, to recognize when it's giving you something inaccurate). The more generalized the topics you ask about, the more likely it is to be accurate (again, the thing can pass the MBA). The more specialized the query, though, especially when it comes to things like theoretical science, the more likely it is to be bullshit. And then of course, all its training data is a couple of years old.