takery reddy

Automation Testing companies

Software testing is approaching the end of its Cretaceous period. Personally, I welcome the asteroid: a disruptive jolt to the practice of software testing is necessary if it is to continue along its evolutionary journey. Make no mistake, software testing has not been entirely stagnant; it did evolve during its Cretaceous period. The biggest change came at the top of the testing food chain, as developers evolved to accept more responsibility for the quality of the software. This redistribution of responsibility for quality is an important stepping stone toward the industry's next evolutionary leap.

The evolution of software testing has been slow compared to other technology practices. If you agree that software testing as a practice has been sluggish, then we need to take a step back and ask: "Why are we in this situation?"

Two main reasons software testing has not evolved

I believe there are two main reasons why software testing has not evolved: organizations have been handcuffed by global system integrators (GSIs), and testing has had a muddled organizational structure.

Between the two, which is the chicken and which is the egg? If software quality had had a strong reporting hierarchy, could the GSIs have exerted so much control? Did the GSIs abuse their position and manage to silence internal opposition? I have my guess, but I would love to hear what you think.

Handcuffed by GSIs

Let us start this discussion with the GSIs, the significantly more incendiary topic. The general concept here is that senior managers traded internal, in-house expertise in business and testing processes for offshore labor in order to reduce OpEx. Known as labor arbitrage, this let organizations cut headcount and transfer responsibility for software testing to outsourced resources trained on the task of testing software. The shift to GSIs had three main adverse impacts on software testing: it promoted manual, task-driven execution models, it stalled the adoption of test automation, and it produced a "brain drain" of business-process knowledge.

Given the relatively lower cost of labor (2.5:1 on average), GSI models were structured around manual task execution. The GSIs painted a picture of an endless supply of technical resources clamoring to work 24/7, in contrast to the settled in-country staff. It conjured images of a secretarial pool (minus the iPhones) hammering away at test plans at 60% of current spend. With such an abundance of human resources, there was really no incentive to promote automation. For the in-house organization, as long as the cost was contained, there was little pressure to treat software testing as a strategic task.

It is obvious, but it needs to be highlighted, that the GSIs' preference for manual execution also suppressed task automation efforts. Why? In the GSI model, automation potentially eliminates headcount and shortens testing cycles. Less headcount plus reduced cycle time equals fewer billable hours and reduced income in time-and-materials models. The benefits of automation therefore do nothing to serve the GSI's financial goals. In addition, if automation was recommended to your service provider, the GSI would suggest that they build it for you. Every GSI today is sitting on millions of lines of dead code representing one-off project automation efforts. That dead code also represents millions of dollars in billable hours.

Perhaps the greatest impact on the evolution of software testing has been the brain drain of business and process knowledge. With lower OpEx as the bait, the global market for software testing services swelled to $32 billion per year (that is billion with a "B"). This tectonic shift drained the people with in-depth knowledge of business processes out of domestic organizations. The net result of this brain drain has been testing activity that is far less impactful. What is my evidence?

Severely bloated test suites

No concept of risk or priority within the test suite

Metrics driven by test counts

False-positive rates above 80%

Test suites abandoned because the code has drifted too far out of sync with the tests

There are more, but this is getting too depressing...

Let me be very open about my opinion here. Organizations traded control over their processes for lower costs. In the post-Y2K world this seemed like a pretty good idea, because software primarily served operational objectives. Today, software is the primary interface to the business, and every aspect of delivering it should be treated as a core competency.

Testing has had a muddled organizational structure

Testing has historically reported to the development organization, and this was a big mistake. Testing should always have reported to operations. I cannot think of one good reason why testing should not report to operations. Had testing reported to operations, I believe the practice of software testing would be at a significantly different point in its evolution. Let's play this concept out a little. What if the practice of software testing had landed with operations instead of development? I think we would be looking at three main differences: faster adoption of developer testing practices, advanced end-to-end test automation, and a focus on business risk.

If the software testing team had historically reported to operations, there would have been (even) more tension between Dev and Ops. That tension would have driven the need for more rigorous testing of the build by developers. The modern form of software testing (and the tension between developers and testers) evolved out of a lack of diligent testing by developers. Practices such as static analysis, structural analysis, early performance testing, and unit testing have only slowly matured over the past decade.

If the software testing team had reported to operations, software testing would have become a front-line duty within the ITIL processes, rather than a lesser validation task. Speed to production would have surfaced earlier as a business objective, which in turn would have promoted the adoption of advanced automation techniques. I realize that the statement above is loaded with some strong assertions, but it contains several of the core drivers of DevOps, so do not hesitate to comment. With speed to production a more prominent goal, there would have been better access to production data, better access to environment data, and a more cohesive approach to the application life cycle rather than just the software development life cycle. Automation would have become a necessity, not an alternative to outsourcing.

With software testing reporting to operations, I believe the KPIs and metrics driving the activity would be different. Metrics such as test counts and the percentage of tests executed would never have leaked onto the dashboard. I believe we would have evolved metrics more closely aligned with business risk, and a model that lets organizations more reliably assess the risk associated with releasing the software at any point in the development cycle.
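As a loose illustration of what I mean (the field names and weights below are hypothetical assumptions, not a prescription), a risk-aligned release metric might weight each test by the business risk it covers rather than simply counting passes:

```python
# Hypothetical sketch: a risk-weighted release-readiness score instead of a raw
# pass count. The TestResult fields and risk values are illustrative only.
from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    passed: bool
    business_risk: float  # 0.0 (trivial) to 1.0 (critical business process)

def release_readiness(results: list[TestResult]) -> float:
    """Return the share of total business risk covered by passing tests (0..1)."""
    total_risk = sum(r.business_risk for r in results)
    if total_risk == 0:
        return 1.0  # no risky behavior was exercised; nothing to block on
    covered = sum(r.business_risk for r in results if r.passed)
    return covered / total_risk

if __name__ == "__main__":
    results = [
        TestResult("checkout_end_to_end", passed=False, business_risk=0.90),
        TestResult("profile_avatar_upload", passed=True, business_risk=0.05),
        TestResult("footer_links_render", passed=True, business_risk=0.05),
        TestResult("help_page_typo_check", passed=True, business_risk=0.05),
    ]
    # By raw count this run looks 75% green; weighted by business risk it is ~14% ready.
    print(f"Release readiness: {release_readiness(results):.0%}")
```

The point of the sketch is simply that a count-based dashboard and a risk-based one can tell opposite stories about the same test run.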

Now I'm depressed, but energized

We are at a fairly unique moment in the evolution of software testing. We are facing new challenges associated with working from home. We are facing unprecedented pressure from digital transformation initiatives. Speed is the new mantra for software testing, yet the penalty for software failure is at an all-time high, with outages making the news and frustrated end users going viral on social media. Now is the time to rethink the whole process. I will share some ideas in my next article, on natural selection in software testing.
