Question: In this age of automation and AI, what do you believe is the role of human intellect in QA?
Arun – I always maintain that analytics is a platform, and AI or ML is a platform, that is going to enable humans to make decisions. For example, there are already models that can look at X-rays and predict the propensity of somebody having cancer. But would we completely stop using human intellect? I think that would be a mistake, in any field. A recent case in point is the air crash in Ethiopia, where the plane was completely controlled by an algorithm. If only the humans had been able to disengage it, the crash may have been averted. There was a recent Twitter spat between Elon Musk and Mark Zuckerberg about whether AI will be beneficial or will pose an ethical threat. Well, I am on the side of Elon Musk; Zuckerberg has a very rosy vision, which I don’t think is realistic at all.
I grew up reading Asimov; the robot series and the three laws of robotics got into me when I was a kid. In those books, the laws of robotics were circumvented in very unique ways in certain circumstances. I read that Google is starting to think about the ethics of AI, which means you do not only build in the ethics programmatically but also have a human override. While I am all for AI helping testing, I think there still is a role for the human intellect. It might sound a little wishy-washy, but I think you still have to ensure that human intellect has veto power, so that you can shut off the AI switch if you think it is misbehaving, or if the consequences could be catastrophic.
Question: I think that fear is real. I don’t think a lot of people realise how soon we’re going to lose many jobs. People relate it to the industrial revolution: when automobiles came, the guys who were shoveling horse manure moved onto the production line. But this is very different, because the training cost for that was minimal. To train somebody to be an AI expert is not easy; it’s not going to happen at that scale. So what do we do, if we move away from testing?
Arun – I think that fear is real. All I’m saying is, if you ask whether this can be completely divorced from human intellect, and from the ability of humans to influence what the final outcome should be, we are a little far from that. I’m not saying it won’t happen, but we are a little far, I think.