This will be an interesting question. Shit will happen and people will die, but in many cases a failure of the autonomy system will be due to a software defect, poor tuning, sensor failure, or bad design. Up to this point, the scope of what those vehicles were trying to accomplish was narrow enough that a bad throttle pedal, a bad airbag design, improper floor mats, etc. were enough to cause a recall and settlements. But those were components expected to work 100% of the time. What about autonomy systems that are only expected to work 99.99% of the time?
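Just as a back-of-the-envelope illustration of why 99.99% is nowhere near 100% at fleet scale (the trip count below is a number I made up for the example, not real data):

```python
# Rough sketch: what a "99.99% reliable" system means across a large fleet.
# trips_per_day is an assumed figure chosen purely for illustration.
trips_per_day = 1_000_000
success_rate = 0.9999

failures_per_day = trips_per_day * (1 - success_rate)
print(f"Expected failures per day: {failures_per_day:.0f}")  # ~100 failures, every day
```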
This happened recently:
Understanding the fatal Tesla accident on Autopilot and the NHTSA probe
So basically, to avoid false positives (stopping in the middle of the freeway for every low-hanging sign or billboard), their software classifies objects like that trailer as overhead signage, and now someone is dead. You know there's a dude who built that system and tuned the min-height of dangerous obstacles, and he fucked up, and that's on him. As someone in that business, that is terrifying.
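To make the tradeoff concrete, here's a toy sketch of that kind of height-threshold filter. This is not Tesla's actual logic; the threshold name, values, and object heights are all invented for illustration. The point is just that one tuned number decides whether a high-riding trailer lands in the same bucket as a billboard:

```python
# Hypothetical overhead-object filter: anything whose lowest point clears the
# threshold is treated as a sign/bridge and ignored to avoid phantom braking.
# A broadside trailer with enough ground clearance falls into the same bucket.
from dataclasses import dataclass

OVERHEAD_CLEARANCE_M = 1.2  # assumed tuning parameter, made up for this example


@dataclass
class DetectedObject:
    label: str
    bottom_height_m: float  # height of the object's lowest point above the road


def is_braking_hazard(obj: DetectedObject) -> bool:
    """Brake only for objects whose underside is below the clearance threshold."""
    return obj.bottom_height_m < OVERHEAD_CLEARANCE_M


if __name__ == "__main__":
    for obj in (
        DetectedObject("overhead sign", bottom_height_m=4.5),
        DetectedObject("stopped car", bottom_height_m=0.2),
        DetectedObject("broadside semi trailer", bottom_height_m=1.3),
    ):
        action = "BRAKE" if is_braking_hazard(obj) else "ignore (classified as overhead)"
        print(f"{obj.label}: {action}")
```

With these made-up numbers, the trailer clears the threshold by 10 cm and gets ignored right along with the sign. Raise the threshold and you miss more real obstacles; lower it and you slam the brakes under every bridge.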
So who is liable for that? I don't know. I don't know if the family of the deceased is going to sue Tesla, and we might only see that question get answered in the courts if they do and Tesla doesn't just immediately settle.
One of the big problems with autonomy is that people are way too willing to trust it. We're used to seeing humans learn something, and once they nail it a few times, they're good. Then users try out an autonomy system, see it stop cleanly for an obstacle a few times, and they have absolute trust in it. There's always a huge learning experience the first time newbies see a well-tested autonomy system fail catastrophically in the field. Yeah, that thing you've watched perform great for 100 hours just went fucking bonkers, and you'll never trust it again.