Modern-day Visual Studio does have Database Projects, which effectively give you source control for databases. As for Kane's point, looping in code over one record at a time versus operating on a whole set from the database is debatable. It is a lot easier to code a process that handles one record at a time and runs it through a business process than to load all the data at once and then push it through the process in chunks. The one-row-at-a-time approach will run into severe issues once your data grows too large or there is heavy concurrent access, so it is the sort of problem you only hit at the high-end enterprise level, and it can be mitigated with more database servers and distributed reads. The sketch below makes the contrast concrete.
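A minimal sketch in Python, assuming a DB-API connection and a hypothetical `orders` table: the same business rule written first as a row-by-row loop, then as a single set-based statement (shown as an alternative, not to be run after the loop).

```python
import sqlite3  # stand-in for any DB-API database; table/column names are hypothetical

conn = sqlite3.connect("orders.db")

# Row-at-a-time: simple to reason about -- each record goes through the
# business rule on its own, at the cost of one UPDATE round trip per row.
rows = conn.execute("SELECT id, total FROM orders WHERE status = 'open'").fetchall()
for order_id, total in rows:
    discounted = total * 0.9  # the business rule applied to a single record
    conn.execute("UPDATE orders SET total = ? WHERE id = ?", (discounted, order_id))

# Set-based: the same rule expressed once over the whole set; the database
# loops internally, which holds up much better as the data grows.
conn.execute("UPDATE orders SET total = total * 0.9 WHERE status = 'open'")
conn.commit()
```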
For most business processes, the information does not come from a single table but from multiple tables, views, and so on. Trying to load all the data you need at once may not be realistic because there is simply too much of it; holding unnecessary data in memory that then has to be passed around between different objects is far more complex, code-wise, than loading a single record at a time and processing it. Passing huge chunks of data around in memory is generally not recommended.
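A sketch of the streaming style, assuming hypothetical `orders`/`customers`/`addresses` tables and a stub `process_order` business step: only one joined row is ever held in memory at a time.

```python
import sqlite3  # any DB-API driver works the same way; names are hypothetical

def process_order(order_id, email, street, city):
    """Hypothetical business step acting on a single record."""
    print(f"processing order {order_id} for {email}")

conn = sqlite3.connect("orders.db")
cursor = conn.cursor()

# A join across several tables -- the business data rarely lives in one table.
cursor.execute("""
    SELECT o.id, c.email, a.street, a.city
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    JOIN addresses a ON a.id = o.shipping_address_id
    WHERE o.status = 'open'
""")

# Stream one record at a time instead of fetchall(): only a single row's
# worth of data is in memory and handed to the business process at any moment.
while (row := cursor.fetchone()) is not None:
    process_order(*row)
```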
For example, consider the following process. Nightly, your system must grab all orders entered that day, contact Google to determine, based on the shipping address, the nearest post office, contact UPS to ship the order to that post office, and send an email to the client. It is fairly simple to load all open orders and then loop in code: for each order, load the shipping address, contact Google, contact UPS, load the customer's email, and send the email. Now imagine doing that as set-based processing; it becomes much more complex to load all the necessary components into memory up front.
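A rough sketch of that nightly loop, with hypothetical stubs standing in for the database queries and the Google, UPS, and email calls:

```python
# Hypothetical stubs standing in for the real DB, Google, UPS, and mail calls.
def fetch_todays_order_ids():         return [101, 102]           # db query
def fetch_shipping_address(order_id): return "1 Main St, Springfield"
def fetch_customer_email(order_id):   return f"customer{order_id}@example.com"
def geocode(address):                 return (40.0, -74.0)        # Google geocoding
def nearest_post_office(location):    return "Post Office #12"
def ship_via_ups(order_id, office):   return f"1Z-{order_id}"     # UPS shipment
def send_email(to, body):             print(f"to {to}: {body}")

# The per-record loop: each order flows through the whole business process
# before the next one is touched, so only one order's data is in memory.
for order_id in fetch_todays_order_ids():
    address = fetch_shipping_address(order_id)
    office = nearest_post_office(geocode(address))
    tracking = ship_via_ups(order_id, office)
    send_email(fetch_customer_email(order_id),
               f"Your order {order_id} shipped to {office}; tracking {tracking}")
```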