Conquering the Last Stronghold: Automating the Results of Automation

Konstantin Dinev / Monday, November 25, 2013

Almost every software company nowadays is developing automation for the software it produces. We're building software and we're automating it in parallel. We're also employing a continuous integration process so that our automation runs against the code being submitted/checked into source control while we're developing new code. That's all good, but at some point the automation starts failing. Tests fail because they are inconsistent, because of infrastructure failures, or simply because certain functionality in our software has a defect. What do we do then? We analyze why the tests failed and we try to correct the failures.

The automation analysis is performed manually. A survey conducted among twenty-two Quality Engineers (more details are available in the presentation linked at the bottom of this post) showed that, on average, 20% of their total time at work is dedicated to performing manual analysis. To put this in perspective: one day of every week is entirely dedicated to manual analysis of results produced by the existing automation. One note that should be made here is that this survey applies to a mature software product. Products in the early stages of their life-cycle do not require the same amount of time spent on automation analysis simply because they are still relatively small.


We hypothesized at Infragistics that the software development process can be made self-sustainable by closing the continuous integration cycle. This can be achieved by automating the analysis of the results coming from the existing automated software testing infrastructure. The purpose of doing this is to allocate the time of the Quality Engineers more efficiently. If the 20% currently spent on manual analysis is spent on additional automation instead, then we end up with 25% better automation coverage. As a result, the only things left for us to do are software development and automation coverage for the software development we're doing. In order to achieve this we need to analyze the process that Quality Engineers go through when analyzing automation results. Here's a simple workflow showing that process:

This process has five states, including a single entry point which we call "Failed Test", a single exit point called "Bug", and three intermediate steps, each of which can lead to the exit point when a certain condition is met. As you have probably already noticed, this process can easily be turned into a basic state machine with five states. The rest of this post focuses primarily on the "Analyze Point of Failure and Identify the Responsible Party" state. We essentially want to create a consistent framework that automates bug submission whenever we have failed tests.
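Since the workflow maps naturally onto a state machine, here is a minimal sketch of how its states could be modeled in C#. Only the entry point, the exit point and the analysis state are named in this post; the two remaining members are placeholders standing in for the other intermediate steps of the workflow diagram.

Code Snippet
    // Minimal sketch of the analysis workflow as a state machine.
    // The two "IntermediateStep" members are placeholders for the unnamed steps.
    public enum AnalysisState
    {
        FailedTest,             // single entry point
        AnalyzePointOfFailure,  // "Analyze Point of Failure and Identify the Responsible Party"
        IntermediateStepTwo,    // placeholder for an unnamed intermediate step
        IntermediateStepThree,  // placeholder for an unnamed intermediate step
        Bug                     // single exit point
    }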

In order to automatically submit bugs for failed tests, we need to determine how to carry the appropriate information from the failed tests over to the bugs. There are essential bug fields you need to provide regardless of the bug tracking system you may be using: the bug title, the steps to reproduce, and the expected and actual behaviors. Those can be extracted directly from the failed assert.


We still need a little more information to complete the bug submission, such as the assigned-to field and additional metadata like area path and iteration. These we can extract from metadata associated with our tests that we otherwise use for other purposes or don't utilize at all.
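As an illustration only (the framework described later in this post does not depend on MSTest), here is roughly how such metadata might travel with a test in MSTest: the failed assert supplies the expected value, the actual value and the message, while attributes like Owner and TestProperty can carry the assignee, area path and iteration. The attribute values below are made up for the example.

Code Snippet
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class ExampleTests
    {
        // Illustrative only; the owner, area path and iteration values are hypothetical.
        [TestMethod]
        [Owner("Konstantin Dinev")]
        [TestProperty("AreaPath", "ISTA2013")]
        [TestProperty("Iteration", "ISTA2013\\Sprint 1")]
        public void PersonPropertyCount()
        {
            var properties = new[] { "first", "last", "age", "getPropCount", "compareFirstName" };
            // A failed Assert.AreEqual exposes the expected value, the actual value and the
            // message - exactly the pieces that feed the bug title, the expected behavior
            // and the actual behavior.
            Assert.AreEqual(5, properties.Length,
                "The prop count returned from getPropCount has an incorrect value.");
        }
    }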


These examples should not lead you to think that automated bug submission is restricted to MSTest. They are just examples; the actual implementation that we're going to talk about later in this post is not related to MSTest at all. It's entirely custom, in order to show that the approach is applicable to almost any environment we may be using. It's important to remember that:

  • Concepts presented here are general and are entirely platform and framework independent.
  • Implementation is specific to the platform and the testing framework that is being utilized.


For the implementation I am going to show, the following things are needed:

  1. Team Foundation Server (TFS) - manages the CI process
  2. QUnit - unit testing framework for JavaScript
  3. QUnit runner - automated runner that executes the tests and extracts the test results


You can download a trial of TFS from Microsoft in order to try out the demo. To set it up use the default project collection.


Then use either the TFS web access or Visual Studio to connect and set up a team project. The process for doing this is very well described in this article. Set up the project to use the Scrum 2.2 template, because the implementation of the bug submission framework uses this as its base. If you like, you can also set up a custom build template in order to execute the QUnit runner as part of the build. I won't show how to do that, because this post is not about the CI process; a good article on how to do that can be found here.

So we set up a simple reporting class that will handle the bug submission for us, using the TFS client object model (the Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.WorkItemTracking.Client assemblies). The idea behind it is to connect to our project collection and to the specific project that we've created, and then to create or modify work items.

Code Snippet
    #region Private Members

    private TfsTeamProjectCollection _projectCollection;
    private WorkItemStore _store;
    private Project _teamProject;

    #endregion

    #region Constructors

    /// <summary>
    /// Initializes a new instance of the <see cref="ReportingCore"/> class with default connection.
    /// </summary>
    public ReportingCore()
    {
        _projectCollection = new TfsTeamProjectCollection(new Uri("http://localhost:8080/tfs"));
        _store = _projectCollection.GetService<WorkItemStore>();
        _teamProject = _store.Projects["ISTA2013"];
        TestRun = new List<GenericTestResult>();
    }

    /// <summary>
    /// Initializes a new instance of the <see cref="ReportingCore"/> class.
    /// </summary>
    /// <param name="collectionUri">The TFS project collection URI.</param>
    /// <param name="projectName">Name of the TFS project.</param>
    public ReportingCore(Uri collectionUri, string projectName)
    {
        _projectCollection = new TfsTeamProjectCollection(collectionUri);
        _store = _projectCollection.GetService<WorkItemStore>();
        _teamProject = _store.Projects[projectName];
        TestRun = new List<GenericTestResult>();
    }

    #endregion
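To give a sense of how the class is used, here is a minimal usage sketch that reuses the collection URI and project name from the default constructor above:

Code Snippet
    // Connect to the local TFS collection and the ISTA2013 team project.
    // Failed tests are later added to reporting.TestRun and pushed with reporting.Submit().
    var reporting = new ReportingCore(new Uri("http://localhost:8080/tfs"), "ISTA2013");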


Then we need to populate some collection of failed tests that we're going to analyze and submit bugs for.

Code Snippet
    #region Properties

    /// <summary>
    /// Gets or sets the list of failed tests from the test run.
    /// </summary>
    /// <value>
    /// The failed test run list.
    /// </value>
    public List<GenericTestResult> TestRun { get; set; }

    #endregion
What is this GenericTestResult that we have there?

Code Snippet
    /// <summary>
    /// Generic test result class. Used to populate failed test results for analysis.
    /// </summary>
    [Serializable]
    public class GenericTestResult
    {
        #region Properties

        /// <summary>
        /// Gets or sets the test expected result.
        /// </summary>
        /// <value>
        /// The expected result.
        /// </value>
        public string ExpectedResult { get; set; }

        /// <summary>
        /// Gets or sets the test actual result.
        /// </summary>
        /// <value>
        /// The actual result.
        /// </value>
        public string ActualResult { get; set; }

        /// <summary>
        /// Gets or sets the test title.
        /// </summary>
        /// <value>
        /// The title.
        /// </value>
        public string Title { get; set; }

        /// <summary>
        /// Gets or sets the test owner.
        /// </summary>
        /// <value>
        /// The owner.
        /// </value>
        public string Owner { get; set; }

        /// <summary>
        /// Gets or sets the test file.
        /// </summary>
        /// <value>
        /// The file attachment.
        /// </value>
        public string FileAttachment { get; set; }

        /// <summary>
        /// Gets or sets the test description.
        /// </summary>
        /// <value>
        /// The description.
        /// </value>
        public string Description { get; set; }

        /// <summary>
        /// Gets or sets the area path (used by the bug submission code below).
        /// </summary>
        /// <value>
        /// The area path.
        /// </value>
        public string AreaPath { get; set; }

        /// <summary>
        /// Gets or sets the iteration path (used by the bug submission code below).
        /// </summary>
        /// <value>
        /// The iteration path.
        /// </value>
        public string IterationPath { get; set; }

        #endregion
    }


Now we need to populate that list whenever we have a failed test in our test run. In the snippet below the test name, the message, the expected and actual values and the test URL come from the parsed QUnit results, while the owner comes from the test runner configuration shown later in this post.

Code Snippet
    _analysis.TestRun.Add(new GenericTestResult()
    {
        Title = testName + " " + message,
        ActualResult = actual,
        FileAttachment = testUrl,
        ExpectedResult = expected,
        Description = message,
        Owner = testConf.Owner
    });


Finally, let's submit the results to our TFS.

Code Snippet
    #region Public Methods

    /// <summary>
    /// Logs a bug.
    /// </summary>
    public void Submit()
    {
        WorkItemType type = _teamProject.WorkItemTypes["Bug"];
        foreach (GenericTestResult failedTest in TestRun)
        {
            failedTest.Title = failedTest.Title.Length < 256 ? failedTest.Title : failedTest.Title.Substring(0, 255);
            WorkItemCollection bugs = _teamProject.Store.Query("SELECT [System.Id] FROM WorkItems WHERE [System.Title] = '" + failedTest.Title + "'");
            if (bugs.Count > 0)
            {
                WorkItem bug = bugs[0];
                this.HandleExistingBug(bug, failedTest);
            }
            else
            {
                WorkItem bug = new WorkItem(type);
                this.HandleNewBug(bug, failedTest);
            }
        }
    }

    /// <summary>
    /// Handles existing bug resubmission.
    /// </summary>
    /// <param name="bug">The bug.</param>
    /// <param name="failedTest">The failed test.</param>
    public virtual void HandleExistingBug(WorkItem bug, GenericTestResult failedTest)
    {
        if (bug.State == "Done" || bug.State == "Removed")
        {
            bug.Open();
            bug.State = "New";
            bug[CoreField.AssignedTo] = failedTest.Owner;
            bug["Repro Steps"] = failedTest.Description + "<br />Expected: " +
                failedTest.ExpectedResult + "<br />Actual: " +
                failedTest.ActualResult;
            bug.AreaPath = failedTest.AreaPath;
            bug.IterationPath = failedTest.IterationPath;
            if (!string.IsNullOrEmpty(failedTest.FileAttachment))
            {
                bug.Attachments.Clear();
                bug.Attachments.Add(new Attachment(failedTest.FileAttachment));
            }
            if (bug.IsValid())
            {
                bug.Save();
            }
            else
            {
                foreach (Field field in bug.Validate())
                {
                    Console.WriteLine(field.Name + " did not validate. Field value: " + field.Value);
                }
            }
        }
    }

    /// <summary>
    /// Handles new bug submission.
    /// </summary>
    /// <param name="bug">The bug.</param>
    /// <param name="failedTest">The failed test.</param>
    public virtual void HandleNewBug(WorkItem bug, GenericTestResult failedTest)
    {
        bug.Title = failedTest.Title;
        bug[CoreField.AssignedTo] = failedTest.Owner;
        bug["Repro Steps"] = failedTest.Description + "<br />Expected: " +
                failedTest.ExpectedResult + "<br />Actual: " +
                failedTest.ActualResult;
        bug.AreaPath = failedTest.AreaPath;
        bug.IterationPath = failedTest.IterationPath;
        if (!string.IsNullOrEmpty(failedTest.FileAttachment))
        {
            bug.Attachments.Add(new Attachment(failedTest.FileAttachment));
        }
        if (bug.IsValid())
        {
            bug.Save();
        }
        else
        {
            foreach (Field field in bug.Validate())
            {
                Console.WriteLine(field.Name + " did not validate. Field value: " + field.Value);
            }
        }
    }

    #endregion


The methods for handling the bug submission are made virtual so that they can be overridden for different work item templates. We aren't assuming that the Scrum 2.2 template is the only thing out there. In fact, we use a custom work item template ourselves and have overridden those methods in order to handle it.
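As a rough sketch of what such an override might look like (this is not our actual custom implementation, and the "Steps To Reproduce" field name is hypothetical; the right name depends on the template you use):

Code Snippet
    // Hypothetical subclass targeting a different work item template.
    public class CustomTemplateReporting : ReportingCore
    {
        public CustomTemplateReporting(Uri collectionUri, string projectName)
            : base(collectionUri, projectName)
        {
        }

        public override void HandleNewBug(WorkItem bug, GenericTestResult failedTest)
        {
            bug.Title = failedTest.Title;
            bug[CoreField.AssignedTo] = failedTest.Owner;
            // This template stores the repro information in a differently named field.
            bug["Steps To Reproduce"] = failedTest.Description + "<br />Expected: " +
                failedTest.ExpectedResult + "<br />Actual: " + failedTest.ActualResult;
            if (bug.IsValid())
            {
                bug.Save();
            }
        }
    }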

We still need some code to test, though. At Infragistics we test the Ignite UI product this way, but for the purpose of this post I have created a very simple JavaScript "framework" called framework.js and a few QUnit tests for it in test1.html. The framework.js looks like this.

Code Snippet
    function Person(first, last, age) {
        this.first = first;
        this.last = last;
        this.age = age;
        this.getPropCount = function () {
            var count = 0, prop;
            for (prop in this) {
                count++;
            }
            return count;
        };
        this.compareFirstName = function (first) {
            return this.first === first;
        };
    }


The test1.html tests for this "framework" look like this.

Code Snippet
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
        <meta charset="utf-8">
        <title>QUnit Example</title>
        <link rel="stylesheet" href="QUnit/qunit-1.12.0.css" />
        <script type="text/javascript" src="QUnit/qunit-1.12.0.js"></script>
        <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
        <script type="text/javascript" src="../Source/framework.js"></script>
        <script type="text/javascript">
            $(document).ready(function () {
                test("Test Person API", function () {
                    var person = new Person("Konstantin", "Dinev");
                    equal(person.getPropCount(), 5, "The prop count returned from getPropCount has an incorrect value.");
                    equal(person.compareFirstName("Konstantin"), true, "The first name comparison method of the Person object failed.");
                });
            });
        </script>
    </head>
    <body>
        <div id="qunit"></div>
        <div id="qunit-fixture"></div>
    </body>
    </html>


So if we run the test file now the results look like this.



If we add code to the framework.js file that causes any of these tests to fail, we would get a bug submitted for it. An example would be to add an API method to the prototype of the Person class in framework.js:

Code Snippet
    Person.prototype.getLastName = function () {
        return this.last;
    };


We now have one failing test, because the for-in loop in getPropCount also enumerates the inherited prototype method and now counts six properties instead of the expected five. Our TFS would reflect this by creating and submitting a bug.


The additional meta information needed to create this work item is acquired through a configuration for the test runner. The configuration for this particular execution looks like this.

Code Snippet
    <?xml version="1.0" encoding="utf-8" ?>
    <QUnitRunnerConfiguration>
      <TestsRunnerConfiguration>
        <TestSuites>
          <TestSuite>
            <Name>ISTA 2013 Test-1</Name>
            <Owner>Konstantin Dinev</Owner>
            <FileSystemFolder>E:\ISTA2013\Tests</FileSystemFolder>
            <TestsFileName>test1.html</TestsFileName>
            <MappedServerUrl>Tests</MappedServerUrl>
            <SuiteCoverageFiles>
              <File>framework.js</File>
            </SuiteCoverageFiles>
          </TestSuite>
        </TestSuites>
      </TestsRunnerConfiguration>
      <TestsResultDispatcherConfiguration>
        <MailServer>mail.test.com</MailServer>
        <Sender>Automated Test Runner</Sender>
        <Recipients>
          <Email>email@test.com</Email>
        </Recipients>
        <Subject>ISTA 2013 Results</Subject>
      </TestsResultDispatcherConfiguration>
    </QUnitRunnerConfiguration>
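For completeness, here is a minimal sketch (not the runner's actual code) of how a suite's metadata could be read out of this configuration with LINQ to XML; the file name used below is an assumption.

Code Snippet
    using System;
    using System.Xml.Linq;

    class ConfigurationSketch
    {
        static void Main()
        {
            // Load the runner configuration and print each suite's name and owner -
            // the owner is the metadata that ends up in the Assigned To field of the bug.
            XDocument config = XDocument.Load("QUnitRunnerConfiguration.xml");
            foreach (XElement suite in config.Descendants("TestSuite"))
            {
                Console.WriteLine("{0} is owned by {1}",
                    (string)suite.Element("Name"),
                    (string)suite.Element("Owner"));
            }
        }
    }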


So that was the example implementation. You will find the runner attached, with instructions on how to use it. We should also look at the potential problems we might experience when adding such an automated bug submission framework on top of our testing infrastructure. What could go wrong? Well, quite a few things!

            Problem: There could be multiple bug submissions coming from the same issue. A large number of tests may be running on the infrastructure, and a single commit/changeset may cause a number of existing tests to fail. What would prevent an automated bug submission framework from submitting a separate bug for every failed test?
            Solution: The tests should be analyzed as a batch instead of one test at a time. This allows such issues to be identified (see the sketch below).
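A minimal sketch of that batch analysis (it is not part of the framework shown above) could group the failed tests by their failure message, so that a shared root cause produces a single bug:

Code Snippet
    // Requires System.Linq. "reporting" is the ReportingCore instance from earlier.
    var groups = reporting.TestRun.GroupBy(failedTest => failedTest.Description);
    foreach (var group in groups)
    {
        GenericTestResult representative = group.First();
        // Keep the titles of the other affected tests so no failure information is lost.
        representative.Description += "<br />Also failing: " +
            string.Join(", ", group.Skip(1).Select(t => t.Title));
        // ... submit a single work item for the group, as in HandleNewBug ...
    }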


            Problem: The person receiving the submitted bug still needs to perform manual analysis.
            Solution: Meaningful error messages must be provided with the tests. The error message produced when an assert fails is what we extract most of the information from; if that message is generic or missing, we increase the time needed to analyze the bug before fixing it. Also, analysis is part of the bug-fixing process even now: regardless of what analysis the Quality Engineer performed, the person working on the bug still needs to analyze the issue. In this sense we're still saving time.


            Problem: Developers require detailed information about the bug. The person responsible for fixing a bug usually asks for additional things like a stack trace.
            Solution: The infrastructure can submit any information provided by the IDE. Anything the developer asks for that we can extract through the IDE can be provided with the bug.


            Problem: Inconsistent tests. There are a lot of existing tests, but some of them are inconsistent: they fail randomly.
            Solution: Test consistency analysis. This is not currently implemented in the framework, but we have a very clear idea of how the problem can be handled. The failing tests need to be repeated: if they are rerun enough times (the definition of "enough" depends on the implementation, the tests and the tested software) and they show deltas between some of the executions, then they are inconsistent; if all the runs show the same result, they are consistent. This is a large topic by itself, so I won't go into further detail here beyond the rough sketch below.
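Here is a rough sketch of that rerun idea (again, not implemented in the framework); the runTest delegate is hypothetical and would execute the test once and report whether it passed:

Code Snippet
    // Rerun a failing test several times; any delta between runs marks it as inconsistent.
    public static bool IsConsistentFailure(Func<bool> runTest, int repetitions)
    {
        bool firstOutcome = runTest();
        for (int i = 1; i < repetitions; i++)
        {
            if (runTest() != firstOutcome)
            {
                return false; // outcomes differ between runs: the test is inconsistent (flaky)
            }
        }
        return true; // every run produced the same outcome: the failure is consistent
    }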

In conclusion, we've examined the concepts behind automated analysis of our automation runs and we've created a basic framework, a proof of concept if you will, showing that these concepts are applicable. The expectation is to save at least some, if not all, of the time spent doing manual analysis of automation results. As I have already mentioned, this framework is applicable and is already being used. Feel free to apply these concepts to the automation of your own projects, and hopefully they will prove useful. Please send me any comments or questions!

Link to PowerPoint presentation on the topic.

https://skydrive.live.com/redir?resid=D246401813239316!3659&authkey=!AGuBGJNl68GosuQ&ithint=file%2c.zip