I'm surprised that they lead anywhere. I would have expected garbage in the format of a valid HTTP link.
Honestly, the great success of this round of LLMs is that people instinctively want to hold them to human standards, assuming they are in some procedural sense understanding and answering our questions. The proper expectation is to read each request as "Please produce a series of characters in the style of the training data, given the following prompt." It's more or less an accident of scale that we get anything that makes sense to a person.
At some level this probably is how our brains work, but we have many layers of secondary filtering systems and expectation modeling in addition to real-time re-training, none of which these models will possess for years. What ChatGPT is now is something like a human in whom those systems are broken - a bullshitting sociopath vomiting out the first thing that has the structure of a sensible response, regardless of content or consistency.