Yes, but if they develop code intended for quality assurance, aren't they QA?
I would say there's still a need for some form of QA. What developer would own the testing system as a whole? Or do you think there wouldn't be a unified testing system? Who owns the system to collect and display test results? Who owns the actual machines the tests are running on and triages issues when tests fail?
No. And I'm stating that the developers who write the code should also be responsible for owning the frameworks and test code for said code.
The only QA that should exist isn't traditional QA: Performance Engineering and Security.
I didn't think it fell under performance either; I was just curious where most of what I do day-to-day (test automation) would fall.
At this point, I'm considering a combination of unit tests, spider sanity checks, and headless Chrome. And probably some manual release testing. I really don't want to have to maintain Selenium tests. Selenium is such a fucking nightmare once you start adding browser dimensions.
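For what it's worth, the "headless Chrome plus shallow sanity checks" idea can stay pretty small. A minimal sketch using Selenium's Python bindings in headless mode; the URL and the assertion are placeholders, and it assumes Selenium 4+ with a Chrome/chromedriver install it can find.

```python
# Minimal headless-Chrome sanity check -- a sketch, not a full suite.
# Assumes Selenium 4+ and a local Chrome install; URL and check are placeholders.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def smoke_test(url: str) -> None:
    options = Options()
    options.add_argument("--headless=new")  # run Chrome with no visible window
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)
        # Keep the assertion shallow: just prove the page loads and renders a title.
        assert driver.title, f"{url} loaded but rendered an empty <title>"
    finally:
        driver.quit()

if __name__ == "__main__":
    smoke_test("https://example.com")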
So in theory you can have really nice Selenium suites. But it requires the devs to work with you and put static xpaths (or whatever you're using) everywhere. Which they will often not do, because it is more work for them, and some of the more funky tools just generate random shit all the time (Kendo UI, for example) and make it impossible to do that.
In that way I see what Vinen means, but I wouldn't want to have to make Selenium tests after I finished developing whatever... every single day. Fuck that noise.
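To make the "static xpaths everywhere" ask concrete: the usual arrangement is some dedicated test attribute the devs promise not to churn, so the suite stops depending on generated ids. A rough sketch, again with Selenium's Python bindings; the data-test-id attribute and the page are invented for illustration.

```python
# Sketch of the "devs give QA stable hooks" idea: locate elements by a dedicated
# test attribute instead of whatever auto-generated id/xpath the UI toolkit emits.
# The data-test-id attribute name and the login page are made-up examples.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")

# Brittle: depends on generated structure/ids that can change every build.
# driver.find_element(By.XPATH, "//div[3]/div/span[@id='ctl00_4f2a']/button")

# Stable: depends only on an attribute the devs agreed to keep put.
login_button = driver.find_element(By.CSS_SELECTOR, "[data-test-id='login-submit']")
login_button.click()

driver.quit()
```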
Will xpaths even help if they are using JavaScript and <canvas> to draw almost everything?
At least in our setup, I feel like the developers should come up with the test strategy and the QA people would integrate it into the test system, including how test results get back to the developers. From a developer perspective, is that too onerous/unfair?
We have our own homegrown data manager with a myriad of reports. How much developers peruse these varies, but the information on how many tests are failing/passing is there for them to see. And we have other forms of automation for attributing failures to certain bugs and then emailing the owners of those bugs about new failures. This is the kind of work I think QA should be direct owners of.
Not sure what you mean here. Are you telling me your office does not use Jira/TFS/ALM or some similar product? Or do you mean some visible kind of high-level reporting to get test results to teams?
My whole department used my SSRS/data-sucking application for like 4 years. It pulled TFS data and reorganized it into some nice reports that you could auto-email out to project teams with native SharePoint tools. I have done a lot of work when it comes to designing project reporting, so if you're in need of ideas on that, let me know.
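The "attribute failures to known bugs, then email the owners" automation mentioned a couple of posts up can be surprisingly little code at its core. This is a hypothetical sketch, not the poster's actual tooling: the bug signatures, owner addresses, and SMTP host are all made-up placeholders.

```python
# Rough sketch of failure-to-bug attribution plus owner notification.
# All signatures, ids, addresses, and hosts below are placeholders.
import re
import smtplib
from email.message import EmailMessage

# Hypothetical table: regex matching a known bug's failure signature -> (bug id, owner)
KNOWN_BUGS = {
    re.compile(r"TimeoutError: login page"): ("BUG-1234", "alice@example.com"),
    re.compile(r"AssertionError: cart total"): ("BUG-5678", "bob@example.com"),
}

def attribute(failure_text: str):
    """Return (bug_id, owner) for the first known bug whose signature matches, else None."""
    for pattern, bug in KNOWN_BUGS.items():
        if pattern.search(failure_text):
            return bug
    return None

def notify(bug_id: str, owner: str, failure_text: str) -> None:
    """Email the bug owner about a fresh failure attributed to their bug."""
    msg = EmailMessage()
    msg["Subject"] = f"New test failure attributed to {bug_id}"
    msg["From"] = "test-results@example.com"
    msg["To"] = owner
    msg.set_content(failure_text)
    with smtplib.SMTP("smtp.example.com") as server:
        server.send_message(msg)

def process(failure_logs: list[str]) -> None:
    for failure in failure_logs:
        hit = attribute(failure)
        if hit:
            bug_id, owner = hit
            notify(bug_id, owner, failure)
```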
Who does then? Who decides how your code is tested?
I've never heard of any developers "coming up with the overall test strategy". I certainly haven't.
Anyone ever go to India for a work trip? First time for me in a few days, 18-hour flight, good times. Director says "bring them candy" (?), ok then.
Nah, I didn't take it that way. I'm guessing that the amount of testing we do is above average, so your response isn't flippant, more that it seems to be the norm for the industry. Whether that's good or not, idk. I've often felt that the amount of testing we do is borderline frivolous and passed into deep diminishing returns a while ago.
To be totally honest, I don't give a fuck. My job, directly, is to complete user stories, i.e. "this should do this". When I think it's done I mark it as complete and move it forward in Jira. How it gets accepted is not my responsibility, even as a lead.
On my team stories get accepted ("tested") by the product owners and our lone offshore QA lead. If they fail UAT (break stuff), we discuss the requirements with the developer: did they understand them, and what went wrong. From what I can tell we have no overall testing strategy other than our QA guy attempting to make some stuff work in Cypress, maybe, IDK? I do mandate unit testing (and you can't push with any failing unit tests), so that's something, but it's certainly not comprehensive.
We may be "lucky" in that my team works on an internal product that isn't customer facing, so we can most likely get away with nonsense like this. BTW, please don't read the above as me being flippant or not caring about testing. I do think it's important, but again, it's someone else's problem; my job is to deliver features, not write non-unit test cases. This is ultimately a management problem, not one for us worker bees.
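One cheap way to enforce the "can't push with failing unit tests" rule mentioned above, if it isn't already handled server-side or in CI, is a local pre-push hook. A hypothetical sketch only; it assumes the suite runs under pytest and that the script is installed as .git/hooks/pre-push and made executable.

```python
#!/usr/bin/env python3
# Hypothetical .git/hooks/pre-push: block the push if the unit test suite fails.
# Assumes tests run with `pytest`; a server-side or CI check is the sturdier option.
import subprocess
import sys

result = subprocess.run(["pytest", "-q"])
if result.returncode != 0:
    print("pre-push: unit tests failed, push rejected", file=sys.stderr)
    sys.exit(1)
sys.exit(0)
```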