May 6, 2016

What Lies Beneath Development

Jonathan Alexander


The world is abuzz about big data and using analytics to improve business results. Ironically, software teams rarely use data to analyze and improve their own processes. Many internal software teams today don't track even simple metrics such as how often code is submitted, by whom, and with what impact on quality. By monitoring these and other basic activities, software teams can identify patterns of behavior and best practices. That's not to say that engineering managers don't see the need for improvement.

Software teams are under ever-tighter deadlines, often operating with fewer people than they need or without enough of the right skill sets. Testers, in particular, are under-represented in many organizations. Most software leaders know that efficiency is the only way to get ahead, or even to stay afloat. The catch is, no one has the time to measure and analyze software development performance. Yet by doing so, teams could determine where best to streamline their efforts and ultimately gain ample payback in time savings, risk reduction, and happier, more loyal customers.

There are a few barriers to a metrics-oriented approach to development. First, while there is an ever-growing number of tools and platforms that do a good job of collecting software lifecycle data, such as GitHub, Jenkins, and JIRA, along with testing platforms like qTest, they are all silos of data. There is no commercial BI tool for software development that can integrate many sources of data and provide easy-to-digest analysis. Compiling and analyzing the data must be done manually, and no one has time.
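Compiling even a simple weekly snapshot by hand illustrates the problem. The following is a minimal sketch, with a hypothetical repository name, JIRA host, project key, and date window, that pulls one number from each of two silos via their public APIs (the GitHub commits endpoint and JIRA's REST search endpoint); authentication and pagination are omitted for brevity.

```python
# A minimal sketch of manually bridging two data silos: recent commit
# activity from GitHub and new-bug counts from JIRA. The repo, JIRA host,
# and project key are hypothetical placeholders.
import requests

GITHUB_REPO = "myorg/myrepo"               # hypothetical repository
JIRA_BASE = "https://myorg.atlassian.net"  # hypothetical JIRA instance

# Commits pushed since a given date (GitHub commits API, first page only).
commits = requests.get(
    f"https://api.github.com/repos/{GITHUB_REPO}/commits",
    params={"since": "2016-04-29T00:00:00Z"},
).json()

# Bugs opened in the last week (JIRA REST search API with a JQL filter;
# maxResults=0 returns just the total count, not the issues themselves).
bugs = requests.get(
    f"{JIRA_BASE}/rest/api/2/search",
    params={"jql": "project = MYPROJ AND issuetype = Bug AND created >= -7d",
            "maxResults": 0},
).json()

print(f"{len(commits)} commits and {bugs['total']} new bugs this week")
```

Multiply this by every tool in the lifecycle and every question a manager might ask, and it becomes clear why the compilation rarely happens.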

Secondly, and perhaps more troubling, is a general disdain for applying data metrics to software development. This fear of metrics is common in industries where data analysis hasn’t driven decisions in the past. Take the sports industry, which has been dramatically affected by the analytical tactics portrayed in Moneyball, the film about how data transformed the Oakland A’s baseball team. Today, data analysis is increasingly accepted in professional sports management and in many other businesses, not as the sole solution but as a key component in helping leaders make better decisions.

By starting with a few small metrics that can empower meaningful change, development teams will likely change their thinking around the usefulness and importance of analysis for their work processes. For instance, a mere 2% productivity improvement equates to 10 minutes in time savings per eight-hour workday, which translates to an “extra” week of coding (or the ability to take a vacation) for every developer each year.
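The arithmetic behind that claim is easy to verify; a few lines of Python, using an assumed 240-workday year, reproduce it:

```python
# Back-of-the-envelope math for the "2% adds up" claim.
workday_minutes = 8 * 60                # 480-minute workday
daily_savings = workday_minutes * 0.02  # 2% improvement ~= 9.6 minutes/day
workdays_per_year = 240                 # assumption: ~48 working weeks
annual_hours = daily_savings * workdays_per_year / 60
print(annual_hours)                     # 38.4 hours -- roughly one work week
```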

Getting Started: Finding Patterns That Indicate Potential Problems


Software organizations that care about measuring often look at a few metrics related to productivity and defects. They might use agile project management tools like JIRA or VersionOne to view burndown charts, which show the tasks each developer was assigned and how many were finished over a period of time. They might also look at defect metrics from bug-tracking systems, which report the number and severity of issues per release. While these data elements are useful at a high level for future planning and overall quality assessment, they lack the detail needed to identify opportunities for process improvement.

Examples of better metrics that might indicate problem patterns and opportunities for improvement include:

  • Bugs-per-line-changed for each source code module: Track the number of lines changed in each source code module over time and identify the module associated with each bug found; merge the two to calculate bugs-per-line-changed, which can flag modules with a higher frequency of problems (see the sketch after this list).
  • Bugs-per-line-changed per developer: The same calculation as above, but grouped by individual developer rather than by source code module.
  • Average daily check-ins and check-ins-per-bug for each developer: Track the number of check-ins for each developer and determine how often a new bug is found and assigned to each developer. This can identify patterns of behavior that lead to better or worse results (for example, whether developers who check in more frequently have fewer bugs).
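As a minimal sketch of the first metric, assume the line-change counts have already been exported from version control (for example, via `git log --numstat`) and each bug has been attributed to a module in the bug tracker; the module names and counts below are invented for illustration:

```python
# Bugs-per-line-changed per module, computed from two simple exports.
from collections import Counter

# Hypothetical data: cumulative lines changed per module (from version
# control) and the module attributed to each bug (from the bug tracker).
lines_changed = {"billing.py": 1200, "auth.py": 300, "report.py": 2500}
bug_modules = ["billing.py", "billing.py", "auth.py", "billing.py", "report.py"]

bugs = Counter(bug_modules)
for module, changed in sorted(lines_changed.items()):
    rate = bugs[module] / changed
    print(f"{module}: {bugs[module]} bugs / {changed} lines changed = {rate:.4f}")

# The per-developer variant is the same calculation keyed by commit author
# and bug assignee instead of by module.
```

Here billing.py surfaces immediately: three bugs across 1,200 changed lines is an order of magnitude worse than report.py, even though report.py saw more churn.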

Going Deeper: Improving How We Work

With time and a little more effort, software teams can use data to develop and reinforce best practices. Here are a few ideas:

  1. Test risk analysis: As mentioned earlier, testers are almost always in short supply on development teams, and there's just not enough time to test everything all the time. Therefore, it's important to understand which tests are critical and which could be skipped. To get this detail, you can look at historical records on bug counts per section of code, even per developer, to see the trends. Then, compare that bug report with your test coverage to ensure you've got enough testing applied to those high-risk areas. You might even be missing tests altogether, if analysis shows a high incidence of bugs in an untested or less-tested feature set.
  2. Defects per test case: Another way to analyze testing efficacy is to look at defects per manual test case for each feature or project component. If one component has extremely few defects per test case, say one in 1,000, you might be spending too much time testing that area of the code, or at least the test set could be streamlined or automated. Meanwhile, in an area generating three defects per ten test cases, apply more manual or exploratory testing effort; the high frequency may indicate a higher risk that further bugs remain undiscovered.
  3. “Stale” source code: Code that hasn't been touched in a long time in the repository is dangerous. If a change is needed in that code, the developer's knowledge may not be fresh, which often means an increased risk that something will be missed or misunderstood, and bugs will result. Identifying how often and how long ago each source code module changed, and feeding that into the testing process as a signal of potential risk, improves risk assessment and test planning: if a stale source code module is changed, it probably needs to be tested more carefully. This knowledge could also lead to new processes, such as requiring all source code to be reviewed on a regular basis (a sketch for flagging stale modules follows this list).
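One way to flag stale modules is to ask version control for the timestamp of the last commit that touched each file. The sketch below assumes a local Git checkout, an arbitrary 180-day threshold, and Python source files; it uses `git log -1 --format=%ct`, which prints the committer timestamp of a file's most recent commit:

```python
# A minimal sketch for flagging "stale" source files: anything whose last
# commit is older than a chosen threshold. The repository path, the 180-day
# cutoff, and the *.py filter are arbitrary assumptions for illustration.
import pathlib
import subprocess
import time

REPO = pathlib.Path(".")  # assumed: run from inside a Git checkout
THRESHOLD_DAYS = 180

for path in REPO.rglob("*.py"):
    # %ct = committer timestamp (Unix seconds) of the latest commit.
    out = subprocess.run(
        ["git", "log", "-1", "--format=%ct", "--", str(path)],
        capture_output=True, text=True, cwd=REPO,
    ).stdout.strip()
    if not out:
        continue  # untracked file, no commit history
    age_days = (time.time() - int(out)) / 86400
    if age_days > THRESHOLD_DAYS:
        print(f"stale: {path} (untouched for {age_days:.0f} days)")
```

A report like this could be run before each release and handed to the test team, so that any change landing in a flagged file automatically draws extra scrutiny.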

This is just a starting list. Beyond code-specific metrics, teams should also consider how to measure collaboration: the hallmark of Agile and DevOps success. If you are already using online collaboration and chat tools, you can generate reports showing which team members collaborate the most and recognize or reward them accordingly. Another dimension of useful data might be obtained through user surveys or trouble ticketing systems, so you can understand how customers are responding to the latest features that your team is pushing out.

Gaining value from tracking metrics doesn’t require a high-powered big data engine that spits out beautiful graphics with drill-down analysis. There is much value in simply sharing the raw data with your team. Development and test engineers are smart people; they will make associations about the data on their own, creating opportunities for improvement and useful benefits for everyone. Don’t get hung up on the process or worry if you’ve got the right data. Chances are, within your existing bug tracking, project management, test management, source code and customer support systems, you’ve got a lot you can use to start measuring activity and identifying patterns in a meaningful way, right now.

 


About the author: Jonathan Alexander is Chief Technology Officer at QASymphony, where he leads the research and development of the qTest platform, including the development of new product offerings. Prior to QASymphony, Jonathan was the CTO at Vonage Business Solutions (formerly Vocalocity), a leader in cloud-based business communications. He has led teams building award-winning software at three start-ups: Vocalocity (acquired by Vonage), vmSight (acquired by Liquidware Labs), and Radnet (technology and team acquired by Epiphany), and he served in post-acquisition executive roles at Liquidware Labs and Epiphany. He studied computer science at UCLA and began his career writing software for author Michael Crichton. He holds multiple software patents and is the author of Codermetrics (O'Reilly, 2011).

 

Related Items:

How Machine Learning Is Eating the Software World

Software Development Strategies for the Age of Data
