This article is curated from SIX articles as a quick primer on AI. Starting with a glossary of AI terms, it delves into tacit knowledge as codified via ML, goes on to the difference between ML & AI, takes a quick peek into deep learning and the challenges of explaining the patterns, and ends with an interesting piece on AI written by an AI program.
Continue reading

15 categories of tooling for digital test automation
In this article I have tried to picturise the landscape of the plethora of tools for testing software, which has moved away from just testing to build-test-deploy in a continuous manner. Alongside the visual, I have listed the FIFTEEN broad categories of tools that make up the modern digital testing landscape.
Continue reading

What does it take to Build In Quality?
This article is a set of brilliant ideas curated from four articles, the first of which suggests ten ways to build high quality into software.
Continue reading

Automation in isolation is more of a problem!
Automating test execution in isolation ends up being more of a problem than a solution. Any automation solution, whether to enhance quality or to improve test cycles, should encompass tasks across the test discipline. Automation should be considered a lever to meet business objectives, not an objective in itself.
Continue reading

Design for Testability – An overview
T Ashok @ash_thiru
Summary
This article outlines what testability is, the background of testability in hardware, the economic value of DFT, why testability is important, design principles that enable testability, and guidelines to ease the testability of a codebase. It draws upon five interesting articles on DFT and presents a quick overview.
Introduction
Software testability is the degree to which a software artefact (i.e. a software system, software module, requirements or design document) supports testing in a given test context. If the testability of the software artefact is high, then finding faults in the system (if it has any) by means of testing is easier.
The correlation of ‘testability’ to good design can be observed in that code with weak cohesion, tight coupling, redundancy and a lack of encapsulation is difficult to test. A lower degree of testability results in increased test effort. In extreme cases, a lack of testability may hinder the testing of parts of the software, or of the software requirements, altogether.
(From [1] “Software testability” )
Testability is a product of effective communication between development, product, and testing teams. The more the ability to test is considered when creating a feature, and the more other team members ask for testers’ input in this phase, the more effective testing will be.
(From [2] “Knowledge is Power When It Comes to Software Testability” )
Background
Design for Testability (DFT) is not a new concept. It has been used in electronic hardware design for over 50 years. If you want to be able to test an integrated circuit, both during the design stage and later in production, you have to design it so that it can be tested: you have to put the “hooks” in when you design it. You cannot simply add testability later; once the circuit is in silicon, you can’t change it.
DFT is a critical non-functional requirement that affects almost every aspect of electronic hardware design. Similarly, complex agile software systems require testing both during design and in production, and the same principles apply. You have to design your software for testability, or you won’t be able to test it when it’s done.
(From [3] “Design for Testability: A Vital Aspect of the System Architect Role in SAFe” )
The Economic Value of DFT
Agile testing covers two specific business perspectives: (1) enabling critique of the product, minimising the impact of defects being delivered to the user, and (2) supporting iterative development by providing quick feedback within a continuous integration process.
These are hard to achieve if the system does not allow for simple system-, component- and unit-level testing. This implies that Agile programs that sustain testability through every design decision will enable the enterprise to achieve a shorter runway for business and architectural epics. DFT helps reduce the impact of large system scope and affords agile teams the luxury of working with something more manageable, reducing the cost of delay in development by assuring that the assets developed are of high quality and need not be revisited.
(From [3] “Design for Testability: A Vital Aspect of the System Architect Role in SAFe“)
Why is testability important?
Testability impacts deliverability. When it is easier for testers to locate issues, they get debugged more quickly, and the application gets to the user faster and without hidden glitches. With higher testability, product and dev teams benefit from faster feedback, enabling frequent fixes and iterations.
Shift-Left – Rather than waiting until test, a whole-team approach to testability means giving your application thoughtful consideration during planning, design, and development as well. This includes emphasising multiple facets such as documentation, logging, and requirements. The more knowledge a tester has of the product or feature, its purpose, and its expected behavior, the more valuable their testing and test results will be.
(From [2] “Knowledge is Power When It Comes to Software Testability” )
Exhaustive Testing
Exhaustive testing is practically better and more easily achievable if applied in isolation to every component, on all possible measures; this adds to quality, instead of trying to test the finished product with use cases that attempt to address all components. This raises another question: “Are all components testable?” The answer: build components to be as highly testable as possible.
However, in addition to all these isolated tests, an optimal system-level test should also be carried out to ensure end-to-end completeness.
Exhaustive testing is about placing the right set of tests at the right levels, i.e. more isolated tests and optimal system tests.
VGP
(From [4] “Designing the Software Testability” )
“SOLID” design principles
Here are some principles and guidelines that can help you write easily testable code, which is not only easier to test but also more flexible and maintainable, owing to its better modularity.
(1) Single Responsibility Principle (SRP) – Each software module should only have one reason to change.
(2) Open/Closed Principle (OCP) – Classes should be open for extension but closed to modifications.
(3) Liskov Substitution Principle (LSP) – Objects of a superclass shall be replaceable with objects of its subclasses without breaking the application.
(4) Interface Segregation Principle (ISP) – No client should be forced to depend on methods it does not use.
(5) Dependency Inversion Principle (DIP) – High-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details; details should depend upon abstractions. (A minimal sketch of DIP follows below.)
[SOLID = SRP+OCP+LSP+ISP+DIP]
(From [5] “Writing Testable Code” )
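To make DIP concrete from a testability angle, here is a minimal Java sketch; the names OrderProcessor, PaymentGateway and StripeGateway are hypothetical illustrations, not code from the cited article.

// Both the high-level policy and the low-level detail depend on an abstraction.
interface PaymentGateway {
    boolean charge(String account, long amountInCents);
}

// Low-level module: the concrete detail.
class StripeGateway implements PaymentGateway {
    public boolean charge(String account, long amountInCents) {
        // A real network call would go here.
        return true;
    }
}

// High-level module: depends only on the abstraction, so a test can
// substitute a fake gateway without touching this class.
class OrderProcessor {
    private final PaymentGateway gateway;

    OrderProcessor(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    boolean checkout(String account, long amountInCents) {
        return gateway.charge(account, amountInCents);
    }
}

// A test double implementing the same abstraction:
class AlwaysApproves implements PaymentGateway {
    public boolean charge(String account, long amountInCents) { return true; }
}

A unit test can then construct new OrderProcessor(new AlwaysApproves()) and exercise the checkout logic without any real payment infrastructure.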
Law of Demeter (LoD)
Another “law” that is useful for keeping code decoupled and testable is the Law of Demeter. This principle states the following: each unit should have only limited knowledge about other units, and only about units “closely” related to it. Each unit should only talk to its friends; don’t talk to strangers. Only talk to your immediate friends.
(From [5] “Writing Testable Code” )
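As an illustration of the Law of Demeter (a sketch with hypothetical names, not code from the cited article), compare a call chain that talks to strangers with one that keeps knowledge local:

// Violation: the caller reaches through Customer into Wallet and its balance.
//   boolean ok = customer.getWallet().getBalance() >= price;

// LoD-friendly: the caller only talks to its immediate friend, Customer.
class Wallet {
    private long cents = 0;
    void deposit(long amount) { cents += amount; }
    boolean covers(long amount) { return cents >= amount; }
}

class Customer {
    private final Wallet wallet = new Wallet();
    // Expose the question, not the internals.
    boolean canAfford(long price) { return wallet.covers(price); }
}

The second form is also easier to test: a unit test of Customer need not know how balances are stored, and a change to Wallet’s internals does not ripple into every caller.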
Guidelines to ease testability of codebase
(1) Make sure your code has seams – A seam is a place where you can alter behaviour in your program without editing that place.
(2) Don’t mix object creation with application logic – Have two types of classes: application classes and factories. Application classes do the real work and hold all the business logic, while factories are used to create objects and their respective dependencies.
(3) Use dependency injection
A class should not be responsible for fetching its dependencies, whether by creating them, using global state (e.g. Singletons), or getting them through other dependencies (breaking the Law of Demeter). Preferably, dependencies should be provided to the class through its constructor. A combined sketch of these guidelines follows after this list.
(4) Don’t use global state
Global state makes code more difficult to understand, as the user of those classes might not be aware of which variables need to be instantiated. It also makes tests more difficult to write, for the same reason and because tests can influence each other, which is a potential source of flakiness.
(5) Avoid static methods
Static methods are procedural code and should be avoided in an object-oriented paradigm, as they don’t provide the seams required for unit testing.
(6) Favour composition over inheritance
Composition allows your code to better follow the Single Responsibility Principle, making it easy to test while avoiding an explosion in the number of classes. Composition provides more flexibility, as the behaviour of the system is modelled by different interfaces that collaborate, instead of by a class hierarchy that distributes behaviour among business-domain classes via inheritance.
(From [5] “Writing Testable Code” )
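Pulling guidelines (1) to (5) together, here is a minimal, hypothetical Java sketch (ReportGenerator and ReportFactory are illustrative names, not from the cited article): object creation lives in a factory, the dependency is injected through the constructor, no global or static state is involved, and the constructor itself is the seam a test uses to alter behaviour.

import java.time.Clock;
import java.time.Instant;

// Application class: business logic only; its dependency (a Clock)
// is injected rather than fetched from global state.
class ReportGenerator {
    private final Clock clock;

    ReportGenerator(Clock clock) {  // the seam: tests alter behaviour here
        this.clock = clock;
    }

    String header() {
        Instant now = Instant.now(clock);
        return "Report generated at " + now;
    }
}

// Factory: creation and wiring live here, separate from application logic.
class ReportFactory {
    static ReportGenerator forProduction() {
        return new ReportGenerator(Clock.systemUTC());
    }
}

In a test, new ReportGenerator(Clock.fixed(Instant.EPOCH, ZoneOffset.UTC)).header() pins time to a known value without editing ReportGenerator itself.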
References
[1] “Software testability” at https://en.wikipedia.org/wiki/Software_testability
[2] “Knowledge is Power When It Comes to Software Testability” at https://smartbear.com/blog/test-and-monitor/knowledge-is-power-when-it-comes-to-software-testa/
[3] “Design for Testability: A Vital Aspect of the System Architect Role in SAFe” at https://www.scaledagileframework.com/design-for-testability-a-vital-aspect-of-the-system-architect-role-in-safe © Scaled Agile, Inc.
[4] “Designing the Software Testability” at https://medium.com/testengineering/designing-the-software-testability-2ef03c983955
[5] “Writing Testable Code” at https://medium.com/feedzaitech/writing-testable-code-b3201d4538eb
Dissecting the human/machine test conundrum
In this article I dissect the way we test as humans and with machines, and outline an interesting view of how the power of humans and the leverage of machines can combine to do testing smartly, rapidly and super efficiently.
Continue reading

It takes right brain thinking to go beyond the left
Right-brained creative thinking comes in handy to go beyond the left: it enables us to vary paths, discover new paths and improve outcomes. Thinking creatively is about thinking visually, thinking contextually and thinking socially, using pictures to think spatially, using the application context to react, experiment and question, and then morphing into an end-user, respectively.
Click here to read the full article published in Medium
Left brain thinking to building great code
Logical ‘left brain’ thinking is essential to good testing. Testing is not just an act, but an intellectual examination of what may be incorrect and how to perturb it effectively and efficiently. This can be seen as a collection of thinking styles (forward, backward and approximate) using methods that may be well-formed techniques or higher-order principles, grounded in an approach of disciplined process, good habits and learning from experience.
Click here to read the full article published in Medium
High-performance thinking using the power of language
This is the first article in the series of twelve articles “XII Perspectives to High-Performance QA”, outlining interesting & counter-intuitive perspectives to high-performance QA aligned on four themes of Language, Thinking, Structure & Doing.
In this article, under the ‘LANGUAGE’ theme, we examine how language helps enable a mindset of brilliant clarity for ‘High-Performance Thinking’. Here I outline how various styles of writing, sentence constructs and sentence types play a key role in the activities we do as producers of brilliant code, from the QA angle.
Click here to read the article published in Medium
15 Facets to Problem Solving
We use many terms like philosophy, mindset, framework, models, process, practice, techniques etc. in software dev/test. This article attempts to simplify and put together a nice image of how they all fit in, to enable clear thinking for brilliant problem solving.
Continue reading