NVidia GeForce RTX 50x0 cards - 70% performance increase, but AI > you

Mist

Eeyore Enthusiast
<Gold Donor>
30,899
23,196
That's a disingenuous point. There are clearly series of games that have reduced in quality as they've acquired DEI bs. Anything Ubisoft, Bethesda, etc. I mean, it's really not arguable. Claiming it's the same with Asian games is silly, as that's a much more recent phenomenon which correlates with their late adoption of DEI. Like Concord. You know, the biggest flop in gaming history.

REALLY DISINGENUOUS point, my friend. Almost intentional gaslighting.
You're probably one of those people who think programmers do art and story content.

We're talking about very different things here.

Utnayan is trying to claim that games running like dogshit is a "DEI" thing, when A) the phenomenon is not new and long pre-dates "DEI", B) Asian games run like dogshit and don't have "DEI", and C) Bethesda games like the Elder Scrolls series have especially run like dogshit forever, for as long as I've owned a computer. Ubisoft and EA games run like dogshit because they're built by 300+ person teams doing agile scrum nonsense.

Concord, for instance, ran very well on PC. Excellent port, along with all of the other major recent Sony ports, from a technical standpoint. You're conflating things that have nothing to do with each other.
 

DickTrickle

Definitely NOT Furor Planedefiler
13,318
15,413
Nowadays most programmers use so much abstraction that they know a lot less about how to extract performance from their code. There's a lot more reliance on frameworks and tools designed for general purposes, which means they're not as effective for specific uses. I think this is true for basically every industry, but certainly so for gaming.
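A toy illustration of what I mean (hypothetical C++, not taken from any real engine or framework): the same hot loop, once routed through a general-purpose callback abstraction and once written where the compiler can see the actual operation.

```cpp
#include <chrono>
#include <cstdio>
#include <functional>
#include <vector>

// Generic version: the operation is hidden behind std::function, a
// general-purpose abstraction the optimizer usually can't see through,
// so every element costs an indirect call.
double sum_generic(const std::vector<double>& v,
                   const std::function<double(double, double)>& op) {
    double acc = 0.0;
    for (double x : v) acc = op(acc, x);
    return acc;
}

// Specialized version: the concrete operation is visible, so the
// compiler can inline and vectorize the loop.
double sum_direct(const std::vector<double>& v) {
    double acc = 0.0;
    for (double x : v) acc += x;
    return acc;
}

int main() {
    std::vector<double> v(10'000'000, 1.0);
    auto time_ms = [](auto&& f) {
        auto t0 = std::chrono::steady_clock::now();
        volatile double r = f();  // volatile: keep the result from being optimized away
        auto t1 = std::chrono::steady_clock::now();
        (void)r;
        return std::chrono::duration<double, std::milli>(t1 - t0).count();
    };
    std::printf("generic: %.2f ms\n", time_ms([&] {
        return sum_generic(v, [](double a, double b) { return a + b; });
    }));
    std::printf("direct:  %.2f ms\n", time_ms([&] { return sum_direct(v); }));
}
```

How big the difference is depends on the compiler and the workload, but the general point holds: the generic path hides exactly the information the optimizer needs.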
 

rectifryer

Molten Core Raider
167
305
You're probably one of those people who think programmers do art and story content.

We're talking about very different things here.
AND a typical goalpost shift.

You're talking about very different things because your premise is detached from reality. You'll keep raising your pedantic assertions until you find a technicality of language, then declare victory while ignoring the original principles of the argument. This leads me to believe you are a useful idiot.
 

rectifryer

Molten Core Raider
167
305
Nowadays most programmers use so much abstraction that they know a lot less about how to extract performance from their code. There's a lot more reliance on frameworks and tools designed for general purposes, which means they're not as effective for specific uses. I think this is true for basically every industry, but certainly so for gaming.
Yes, exactly. It's harder to dive in under the hood of Unreal with code than it is in Unity. You can use Blueprints easily in Unreal, however.
 

Tuco

I got Tuco'd!
<Gold Donor>
46,621
76,671
You know, I opened this thread thinking, "I hope the same group of clowns is having the same tired arguments about wokeism and DEI in yet another thread." Thanks for not disappointing.

I installed my RTX 4080 Super last night. It's so absurdly large. It's almost the same surface area as my micro ATX motherboard (453 cm² vs 595 cm²). My CPU fan is also absurdly large, but quiet so far.

With the leaked power draw numbers for the RTX 5000 series, you know this shit isn't getting smaller.
 

Tuco

I got Tuco'd!
<Gold Donor>
46,621
76,671
Nowadays most programmers use so much abstraction that they know a lot less about how to extract performance from their code. There's a lot more reliance on frameworks and tools designed for general purposes, which means they're not as effective for specific uses. I think this is true for basically every industry, but certainly so for gaming.
Unreal Engine provides expansive profiling tools, but even with all that power it's still common practice in UE to just measure times before/after functions are called.
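Something in the spirit of this, anyway (a bare-bones C++ sketch of the before/after pattern; the names here are made up, and inside UE you'd normally reach for the built-in stat/trace macros or FPlatformTime rather than std::chrono):

```cpp
#include <chrono>
#include <cstdio>

// Minimal "measure before/after" helper: starts a clock on construction
// and prints the elapsed time when it goes out of scope.
struct ScopedTimer {
    const char* label;
    std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
    explicit ScopedTimer(const char* l) : label(l) {}
    ~ScopedTimer() {
        auto end = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(end - start).count();
        std::printf("%s took %.3f ms\n", label, ms);
    }
};

// Hypothetical stand-in for whatever function you suspect is slow.
double ExpensiveGameplayUpdate() {
    double acc = 0.0;
    for (int i = 0; i < 5'000'000; ++i) acc = acc + i * 0.5;
    return acc;
}

int main() {
    double result;
    {
        ScopedTimer t("ExpensiveGameplayUpdate");  // prints elapsed time on scope exit
        result = ExpensiveGameplayUpdate();
    }
    std::printf("result: %f\n", result);
}
```

Crude, but it's quick, works in any engine, and answers the only question you usually care about: did this one function get faster or slower?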

 

Noodleface

A Mod Real Quick
38,150
14,859
You know, I opened this thread thinking, "I hope the same group of clowns is having the same tired arguments about wokeism and DEI in yet another thread." Thanks for not disappointing.

I installed my RTX 4080 Super last night. It's so absurdly large. It's almost the same surface area as my micro ATX motherboard (453 cm² vs 595 cm²). My CPU fan is also absurdly large, but quiet so far.

With the leaked power draw numbers for the RTX 5000 series, you know this shit isn't getting smaller.
My guess is that at some point they're going to ditch the current GPU design and you'll end up with two towers: one is your GPU tower and one is your CPU tower.
 

Furry

BROWN NOW
<Gold Donor>
20,950
26,872
Isn't the 5090 supposedly smaller than the 4090? At least I think I've heard that in the rumor mill.
 

Tuco

I got Tuco'd!
<Gold Donor>
46,621
76,671
My guess is that at some point they're going to ditch the current GPU design and you'll end up with two towers: one is your GPU tower and one is your CPU tower.
I know this is a thing for laptops: https://www.amazon.com/Razer-Core-Aluminum-External-Enclosure/dp/B08J5J8C1H

I think it definitely makes sense for laptops, because a dedicated GPU is a huge sacrifice to portability. I can't see ever wanting a separate tower for a GPU on my desktop PC, though. And it's not like CPUs are getting smaller either. The Ryzen 7950X3D I got is huge and so is the fan for it.
 

Mist

Eeyore Enthusiast
<Gold Donor>
30,899
23,196
My guess is that at some point they're going to ditch the current GPU design and you'll end up with two towers: one is your GPU tower and one is your CPU tower.
Strong doubt.

I'd bet it moves in the opposite direction, with GPU and CPU using a shared pool of very fast memory.

I actually think the days of discrete cards and DIY PCs are numbered.

Longer term, I think monitors will have GPUs in them, plus some kind of SLI-style protocol built into the display interface (DisplayPort) that allows workload distribution between the internal GPU and the one in the monitor. So maybe the internal GPU handles the basic rendering, and the display GPU handles all the AI interpolation and image enhancements like FSR/DLSS.
 

Furry

BROWN NOW
<Gold Donor>
20,950
26,872
Strong doubt.

I'd bet it moves in the opposite direction, with GPU and CPU using a shared pool of very fast memory.

I actually think the days of discrete cards and DIY PCs are numbered.
I'll move to China before I buy a prebuilt PC.
 

Tuco

I got Tuco'd!
<Gold Donor>
46,621
76,671
Strong doubt.

I'd bet it moves in the opposite direction, with GPU and CPU using a shared pool of very fast memory.

I actually think the days of discrete cards and DIY PCs are numbered.

Longer term, I think monitors will have GPUs in them, plus some kind of SLI-style protocol built into the display interface (DisplayPort) that allows workload distribution between the internal GPU and the one in the monitor. So maybe the internal GPU handles the basic rendering, and the display GPU handles all the AI interpolation and image enhancements like FSR/DLSS.
I wonder how much memory bandwidth is actually consumed on my GPU. My RTX 4080 has 716 GB/s, which just seems like an absurdly high number, and the RTX 50x0 series with GDDR7 has twice that at 1.5 TB/s. Conversely, my DDR5 system memory has a bandwidth of like 90 GB/s. I'm guessing bandwidth and latency are the bottleneck preventing GPUs from using system memory, but how big is that gap?
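Rough back-of-the-envelope, since peak bandwidth is just bus width times per-pin data rate (the DDR5 configuration below is an assumption for a typical dual-channel setup, not something pulled from my actual system):

```cpp
#include <cstdio>

// Peak memory bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps.
double peak_gb_per_s(int bus_bits, double gbps_per_pin) {
    return bus_bits / 8.0 * gbps_per_pin;
}

int main() {
    double gpu = peak_gb_per_s(256, 22.4);  // RTX 4080: 256-bit GDDR6X at 22.4 Gbps -> 716.8 GB/s
    double sys = peak_gb_per_s(128, 6.0);   // assumed dual-channel DDR5-6000: 128-bit at 6 GT/s -> 96 GB/s
    std::printf("RTX 4080 VRAM:      %.1f GB/s\n", gpu);
    std::printf("DDR5 system memory: %.1f GB/s\n", sys);
    std::printf("gap:                ~%.0fx\n", gpu / sys);
}
```

So on paper the gap is roughly 7-8x in bandwidth alone, before latency even enters the picture.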
 

Noodleface

A Mod Real Quick
38,150
14,859
I wonder how much memory bandwidth is actually consumed on my GPU. My RTX 4080 has 716 GB/s, which just seems like an absurdly high number, and the RTX 50x0 series with GDDR7 has twice that at 1.5 TB/s. Conversely, my DDR5 system memory has a bandwidth of like 90 GB/s. I'm guessing bandwidth and latency are the bottleneck preventing GPUs from using system memory, but how big is that gap?
There's nothing stopping a GPU from using system memory; it just doesn't really make sense when its own GDDR is so much faster.
 

Tuco

I got Tuco'd!
<Gold Donor>
46,621
76,671
There's nothing stopping a GPU from using system memory, it just doesn't really make sense when their own GDDR is so much faster.
Right. But if Mist's prediction is that future GPUs will use system memory, how big is the bandwidth/latency gap, and how will they close it?