The lack of reproduction of studies has many explanations. In part, I believe there's simply no glory in reproducing others' results: it takes a lot of time and won't get you any publication to speak of, which is really all research groups care about. Today it's all "publish or perish". There's basically no economic or reputational incentive to spend time reproducing other people's results when the experiments themselves can cost tens of thousands of dollars, never mind the man-hours. Even if you prove them wrong, a negative study hardly gets any traction. All the attention goes to new, innovative studies that build on previous data, even if that data is faulty; going back to disprove it normally takes far too much time when the whip is to publish.
As a grad student, I tried to reproduce some interesting studies I had read about, with the intent of adapting them to a system I was developing. My attempts utterly failed 2-3 times until I got in touch with the authors, who gave me some very important details about their methodology that were critical for the system to work properly. Once I implemented those methods, I was able to reproduce their work (though it was never quite as good as what their publication showed!).
What this means is that many papers will have a methods section that reads "A and B were reacted for 48 hours at 60 °C" when in reality, you need to add B dropwise to A while stirring at exactly 600 RPM under nitrogen purge. In the end, it's almost impossible to write a "readable" methods section that also has step-by-step instructions for reproducibility, so you usually end up with something in between. I know that one of the tenets of scientific publication is that it should be reproducible, but it's actually quite difficult to implement in practice. If I included truly reproducible methods in my papers, the methods section would be 15 pages long and nobody would ever read it!
That being said, there is also a lot of built-in bias (no motivation to publish negative results, getting lucky on your first try and publishing it without reproducing it, improper experimental design) and some amount of outright dishonesty (playing with your stats analysis to achieve significance, running an experiment 100 times and only publishing the positive data, or straight up falsifying data).
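To put a number on that last point, here's a toy simulation (my own illustrative sketch, not anything from the studies being discussed) of what happens if you run a no-effect experiment 100 times and only "publish" the runs that happen to cross p < 0.05. The group sizes, the two-sample t-test threshold, and the alpha of 0.05 are all assumptions chosen just for the example:

```python
import random
from statistics import mean, stdev
from math import sqrt

def null_experiment(n=20):
    """One run with no true effect: both groups drawn from the same N(0, 1)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Two-sample t statistic; with ~20 per group, |t| > ~2.02 is roughly p < 0.05.
    se = sqrt(stdev(a) ** 2 / n + stdev(b) ** 2 / n)
    return abs(mean(a) - mean(b)) / se

runs = 100
significant = sum(1 for _ in range(runs) if null_experiment() > 2.02)
print(f"{significant} of {runs} null experiments look 'significant'")
# Typically around 5 runs cross the threshold by pure chance. Report only
# those, and the literature records an effect that does not exist.
```

That's the whole mechanism: nobody has to fake a single data point for selective reporting to manufacture a "positive" result.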
And I completely agree that there's often no incentive to reproduce data UNLESS you have a reason to doubt it in the first place ("it's too good to be true!") or unless there is follow-up work that depends on the published methods.
However, I think it's a difficult problem to solve, because I view most scientific publications as "hypothesis-generating", and if an idea has enough merit, it is bound to be reproduced and thoroughly tested anyway. In this way, publication requirements are essentially filters that try to limit the amount of "bad" research that gets published (i.e., reduce false positives, type I error) without filtering out potentially good ideas (false negatives, type II error). Again, I think most scientists view publications as a bunch of "potentially good ideas" that they are trained to evaluate with a critical eye, whereas the public, and especially the media, view publications as statements of fact, and that is where the danger lies.