There are limits, both theoretical (how far something can be compressed) and practical (the CPU power needed to do the compressing). It usually becomes a balancing act, but new, faster compression algorithms are constantly being developed. The catch is that they are far from universal, because the better algorithms need knowledge of the format of the content. Something designed for text will be meh for video, whereas a codec like H.265 understands the concept of frames and how they relate, and can do cross-frame calculations. There is also the split between lossy (H.265, MPEG, MP3, etc.) and lossless (LZO, LZ4, Snappy, zlib, deflate, bzip2, etc.).

I work in the database world, where faster, better compression can make a big difference. It ends up being hard to monetize, though, since no one wants to pay licensing for it, so most of these algorithms end up as OSS. We default to LZ4, which is fast like Snappy and has good ratios; in fact the CPU cost of LZ4 is so small that the I/O and other savings result in higher performance in most cases.
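
To make that trade-off concrete, here's a minimal sketch that compresses the same buffer with LZ4 and with zlib and prints the ratio and timings. It assumes the python-lz4 package is installed (zlib is in the stdlib), and the payload is just an illustrative stand-in for repetitive row data; the numbers you get will vary.

```python
# Rough comparison of LZ4 vs zlib on the same payload.
# Assumes: pip install lz4   (zlib ships with Python)
import time
import zlib

import lz4.frame

# Hypothetical payload: repetitive text, loosely resembling a page of similar DB rows.
data = b"id=12345,name=example,status=active,ts=2024-01-01\n" * 20_000

def bench(name, compress, decompress):
    t0 = time.perf_counter()
    packed = compress(data)          # compression cost
    t1 = time.perf_counter()
    decompress(packed)               # decompression cost
    t2 = time.perf_counter()
    ratio = len(data) / len(packed)  # higher = smaller output
    print(f"{name:>4}: ratio {ratio:5.1f}x, "
          f"compress {(t1 - t0) * 1e3:6.2f} ms, "
          f"decompress {(t2 - t1) * 1e3:6.2f} ms")

bench("lz4", lz4.frame.compress, lz4.frame.decompress)
bench("zlib", lambda d: zlib.compress(d, 6), zlib.decompress)
```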
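
On data like this, LZ4 typically gives up some ratio compared to zlib but uses far less CPU, which is exactly why it tends to win once I/O (or network) is the bottleneck rather than raw size.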