GTAC 2014: Presentations

All of the GTAC 2014 video recordings and slides are publicly available. You can watch them from the GTAC 2014 YouTube playlist or browse the talks below:

Opening Remarks

Sonal Shah (Google)

Opening Keynote - Move Fast & Don't Break Things

Ankit Mehta (Google)

Links: Video, Slides

Automation for a Better Web

James Graham (Mozilla)

The web is the world's most popular application platform, yet poor browser interoperability is an all-too-common cause of dismay and frustration amongst web developers. To improve this situation, the W3C has been facilitating a community effort to produce a continually updated, cross-browser test suite for the open web: the web-platform-tests. In this talk James will introduce the web-platform-tests and describe the tools we have created for driving automation of the tests across a range of desktop browsers and on mobile devices running Firefox OS. He will show how this software has been designed to meet the challenges of running an externally sourced, frequently updated test suite on hundreds of commits a day in Mozilla's continuous integration system.

Links: Video, Slides

Make Chrome the best mobile browser

Karin Lundberg (Google)

One of the reasons for Chrome’s success has been its core principles of speed, stability, simplicity and security (the 4 S’s). When we released Chrome for Android and iOS, we not only applied the 4 S’s to the browser itself but also to how we do automated testing and the kind of tests we run:

  • Speed is for performance testing and fast tests.
  • Stability is for stability testing and stable tests.
  • Simplicity is for testing that Chrome has a simple user experience and for making it simple to add and run tests.
  • Security is for security testing.

Links: Video, Slides

A Test Automation Language for Behavioral Models

Nan Li (Medidata Solutions)

Model-based testers design abstract tests in terms of models, such as paths in graphs. These abstract tests then need to be converted into concrete tests, which are defined in terms of the implementation. The transformation from abstract tests to concrete tests has to be automated. Existing model-based testing techniques for behavioral models use many additional diagrams, such as class diagrams and use case diagrams, for test transformation and generation. They are complicated to use in practice because testers have to keep all related diagrams consistent at all times, even when requirements change frequently.

This talk introduces a test automation language to allow testers to generate tests by using only one behavioral model such as a state machine diagram. Three issues will be addressed: (1) creating mappings from models to executable test code and generating test values, (2) transforming graphs and using coverage criteria to generate test paths, and (3) solving constraints and generating concrete tests.
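
As a rough illustration of issues (1) and (2), the sketch below generates one abstract test per transition of a small state machine and maps each abstract event onto a concrete method call by name. The model, the event names, and the driver are invented for illustration; they are not the talk's actual language or notation.

```python
from collections import deque

# Abstract behavioral model: (source state, event, target state).
# This model and the name-based mapping are illustrative inventions.
TRANSITIONS = [
    ("LoggedOut", "login", "LoggedIn"),
    ("LoggedIn", "view_cart", "Cart"),
    ("Cart", "checkout", "LoggedIn"),
    ("LoggedIn", "logout", "LoggedOut"),
]
START = "LoggedOut"

def shortest_path(start, goal):
    """BFS over the model graph; returns the transitions leading to `goal`."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for t in TRANSITIONS:
            if t[0] == state and t[2] not in seen:
                seen.add(t[2])
                queue.append((t[2], path + [t]))
    raise ValueError(f"{goal} is unreachable from {start}")

def abstract_tests():
    """One abstract test per transition: reach its source, then take it
    (simple edge coverage)."""
    return [shortest_path(START, t[0]) + [t] for t in TRANSITIONS]

class RecordingDriver:
    """Stand-in for the system under test; each abstract event maps to a
    concrete method call of the same name."""
    def __init__(self):
        self.calls = []
    def __getattr__(self, name):
        return lambda: self.calls.append(name)

def run(test, driver):
    for _, event, _ in test:  # concrete step = mapped method call
        getattr(driver, event)()

driver = RecordingDriver()
for test in abstract_tests():
    run(test, driver)
print(len(abstract_tests()))  # 4 abstract tests, one per transition
```

A real mapping layer would also generate test values and check postconditions at each step; this sketch only shows the graph-to-code transformation.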

Links: Video, Slides

Test coverage at Google

Andrei Chirila (Google)

Have you ever wondered what testing at Google looks like? What tools do we use, and how do we measure and act on test coverage? We will briefly describe the development process at Google, then focus on how we measure code coverage and use it to improve code quality and engineering productivity. Finally, we'll present the vast amount of coverage data we have collected, spanning more than 100,000 commits, and some of the more widely applicable conclusions we have reached.
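
As a toy illustration of what statement-coverage measurement involves under the hood (this is not Google's tooling; real tools such as coverage.py are far more sophisticated), one can trace which lines of a function actually execute:

```python
import dis
import sys

def line_coverage(func, *args):
    """Toy statement-coverage probe: record which lines of `func` execute.
    Illustrative sketch only."""
    target = func.__code__
    executed = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is target:
            executed.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    # All lines that could execute, from the bytecode's line table.
    possible = {line for _, line in dis.findlinestarts(target)
                if line is not None}
    return executed, possible

def clamp(x):
    if x < 0:
        return 0
    return x

hit, total = line_coverage(clamp, 5)
print(f"{len(hit)}/{len(total)} lines executed")  # one branch is never taken
```

Running the probe with both a positive and a negative input covers both branches; aggregating such data across many test runs and commits is what makes coverage useful as a quality signal.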

Links: Video, Slides

CATJS: Applications That Test Themselves

Ran Snir (HP) and Lior Reuven (HP)

In the past few years we have seen many anomalies that have changed the way we think about the computing world. There are 3D printers that print 3D printers, robots that think for themselves, and then we have catjs.

catjs is an Open Source framework that adds the ability for mobile-web apps to test themselves. Simple annotations in your HTML5 code are translated into test scripts embedded within the application’s lifecycle. These mobile-web tests can run on any device, operating system and browser. catjs is a quick and easy way to take care of your application’s testing flow.

Links: Video, Slides

Scalable Continuous Integration - Using Open Source

Vishal Arora (Dropbox)

Many open source tools are available for continuous integration (CI). Only a few operate well at large scale. And almost none are built to scale in a distributed environment. Come find out the challenges of implementing CI at scale, and one way to put together open source pieces to quickly build your own distributed, scalable CI system.

Links: Video, Slides

I Don't Test Often ... But When I Do, I Test in Production

Gareth Bowles (Netflix)

Every day, Netflix has more customers consuming more content on an increasing number of client devices. We're also constantly innovating to improve our customers' experience. Testing in such a rapidly changing environment is a huge challenge, and we've concluded that running tests in our production environment can often be the most efficient way to validate those changes. This talk will cover three test methods that we use in production: simulating all kinds of outages with the Simian Army, looking for regressions using canaries, and measuring test effectiveness with code coverage analysis from production.
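
The canary idea can be sketched roughly as follows: route a small slice of traffic to the new build, then compare its metrics statistically against the baseline fleet. The metric, thresholds, and data below are invented for illustration; Netflix's actual canary analysis is much richer.

```python
import statistics

def canary_healthy(baseline, canary, tolerance=3.0):
    """Toy canary check: flag the new build if its mean metric strays more
    than `tolerance` standard deviations from the baseline fleet's mean.
    Illustrative sketch only."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return statistics.mean(canary) == mean
    z = abs(statistics.mean(canary) - mean) / stdev
    return z <= tolerance

# Hypothetical per-instance error rates from the production fleet.
baseline = [0.9, 1.1, 1.0, 0.95, 1.05, 1.0]
assert canary_healthy(baseline, [1.0, 1.1])      # healthy: keep rolling out
assert not canary_healthy(baseline, [5.0, 5.4])  # regression: roll back
```

In practice many metrics are compared at once and the verdict feeds an automated promote-or-rollback decision rather than a single boolean.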

Links: Video, Slides

The Importance of Automated Testing on Real and Virtual Mobile Devices

Jay Srinivasan (Google) and Manish Lachwani (Google)

Compared to the web world, mobile testing is a minefield. With different devices, operating systems, networks and locations, there is a seemingly infinite number of variables developers must account for. In this educational session, we will discuss some of the unique challenges that come with optimizing the performance and quality of mobile apps, and strategies for addressing them, including the need for automation, real devices and real user conditions.

Links: Video, Slides

Free Tests Are Better Than Free Bananas: Using Data Mining and Machine Learning To Automate Real-Time Production Monitoring

Celal Ziftci (Google)

There is growing interest in leveraging data mining and machine learning techniques in the analysis, maintenance and testing of software systems. In this talk, Celal will discuss how we use such techniques to automatically mine system invariants, use those invariants in monitoring our systems in real-time and alert engineers of any potential production problems within minutes.

The talk will cover two tools we use internally, and how we combine them to provide real-time production monitoring for engineers almost for free:

  1. A tool that can mine system invariants.
  2. A tool that monitors production systems, and uses the first tool to automatically generate part of the logic it uses to identify potential problems in real-time.
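
The mining step is in the spirit of Daikon-style dynamic analysis: propose candidate invariants, keep only those that hold in every healthy run, then alert when a live value violates one. The tools in the talk are internal to Google, so the candidate invariants and data below are invented; this is only a minimal sketch of the idea.

```python
# Candidate invariants to test against observed values (invented examples).
CANDIDATES = {
    "non_negative": lambda v: v >= 0,
    "under_1000": lambda v: v < 1000,
    "even": lambda v: v % 2 == 0,
}

def mine_invariants(observations):
    """Keep only the candidate invariants that held in every observed run."""
    return {name for name, check in CANDIDATES.items()
            if all(check(v) for v in observations)}

def monitor(value, invariants):
    """Return the mined invariants that a live production value violates."""
    return {name for name in invariants if not CANDIDATES[name](value)}

# Training data: e.g. request latencies (ms) logged from healthy runs.
latencies = [12, 48, 7, 230, 95]
invariants = mine_invariants(latencies)  # 'even' fails (7), the rest survive
alerts = monitor(4523, invariants)       # a latency spike trips 'under_1000'
print(sorted(alerts))
```

The real systems mine far richer invariants (relations between variables, rates, distributions) from logs at scale, but the survive-all-healthy-runs principle is the same.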

Links: Video, Slides

Test Automation on an Infrared Set-top Box

Olivier Etienne (Orange)

This talk will explain what a TV App context is and the kinds of problems you can encounter when trying to automate it. Olivier will go through previous failures, the team's approach, and the key points in building an automated test tool. If time permits, he will go deeper into the details of the implementation.

Come listen to how a bit of soldering and some lines of code opened the rich world of web testing to a set-top box.

Links: Video, Slides

The Challenge of Fairly Comparing Cloud Providers and What We're Doing About It

Anthony Voellm (Google)

This talk will cover the history of benchmarking from mainframe to Cloud. The goal is to lay a foundation around where benchmarks started and how they have gotten to where they are. Ideas will be laid out for the future of benchmarking Cloud and how we can do it practically.

Links: Video, Slides

Never Send a Human to do a Machine’s Job: How Facebook uses bots to manage tests

Roy Williams (Facebook)

Facebook doesn’t have a test organization, developers own everything from writing their code to testing it to shepherding it into production. That doesn’t mean we don’t test! The way that we’ve made this scale has been through automating the lifecycle of tests to keep signal high and noise low. New tests are considered untrusted and flakiness is quickly flushed out of the tree. We’ll be talking about what’s worked and what hasn’t to build trust in tests.
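
A toy sketch of such lifecycle automation (Facebook's actual bots are internal; the states and thresholds below are invented): new tests stay untrusted until they pass consistently, and any test that both passes and fails on the same revision is flagged flaky.

```python
class TestLifecycleBot:
    """Toy automated test lifecycle: untrusted -> trusted -> (maybe) flaky.
    The promotion threshold and flakiness rule are invented for illustration.
    """

    PROMOTE_AFTER = 3  # consecutive passes before a new test is trusted

    def __init__(self):
        self.status = {}   # test name -> "untrusted" | "trusted" | "flaky"
        self.streak = {}   # consecutive passes while still untrusted
        self.results = {}  # (test, revision) -> set of outcomes seen

    def record(self, test, revision, passed):
        state = self.status.setdefault(test, "untrusted")
        self.streak.setdefault(test, 0)
        outcomes = self.results.setdefault((test, revision), set())
        outcomes.add(passed)
        if len(outcomes) == 2:  # passed AND failed on the same revision
            self.status[test] = "flaky"
            return
        if state == "untrusted":
            self.streak[test] = self.streak[test] + 1 if passed else 0
            if self.streak[test] >= self.PROMOTE_AFTER:
                self.status[test] = "trusted"

bot = TestLifecycleBot()
for rev in ("r1", "r2", "r3"):
    bot.record("test_login", rev, passed=True)
bot.record("test_feed", "r1", passed=True)
bot.record("test_feed", "r1", passed=False)  # same code, different outcome
print(bot.status)  # {'test_login': 'trusted', 'test_feed': 'flaky'}
```

The point of automating these state changes is that only trusted, deterministic tests get to block commits, which keeps the signal high and the noise low.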

Links: Video, Slides

Espresso, Spoon, Wiremock, Oh my! ( or how I learned to stop worrying and love Android testing )

Michael Bailey (American Express)

Learn about creating and executing fast and reliable automated Android UI tests. Tools will include Espresso, Spoon, Wiremock and Jenkins. Basic Android and Java development knowledge is assumed.

Links: Video, Slides

Google BigQuery Analytics

Brian Vance (Google)

BigQuery is Google Cloud's interactive big data service. Users can analyze terabytes of data in a matter of seconds through SQL-like queries. It is built on top of Dremel, which Google testers have been using internally for years. We will walk through a couple of examples and show you how you can get started with BigQuery.

Links: Video, Slides

Selendroid - Selenium for Android

Dominik Dary (Adobe)

Selendroid is an open source test automation framework which drives off the UI of Android native and hybrid applications and the mobile web. Tests are written using the Selenium 2 client API. No modification of the app under test is required in order to automate it.

This presentation demonstrates how easy mobile test automation can be. It shows how Selendroid can be used to test native and hybrid Android apps, and how Selenium Grid can be used for parallel testing on multiple devices. Advanced topics, like extending Selendroid itself at runtime and doing cross-platform tests, will be covered as well.

Links: Video, Slides

Maintaining Sanity In A Hypermedia World

Amit Easow (Comcast)

As Comcast has evolved from being a cable company to a media and technology leader, the engineering teams have also gotten smarter. When Amit joined Comcast Interactive Media (CIM) in 2006, they were a manual-testing shop. After they shipped their first website in 2007, he started creating prototypes for an automated UI-testing infrastructure. He was introduced to Selenium at GTAC 2008 and then returned to Comcast to build an automated testing infrastructure with Selenium Grid, Hudson and Subversion. Today, he works on API testing with deployments to Production every weekday. This is made possible with Python, Git, Gerrit and Anthill.

Links: Video, Slides

Fire Away Sooner And Faster With MSL!

Bryan Robbins (FINRA) and Daniel Koo (FINRA)

Delivering software faster without compromising quality is not a trivial task. We all desire to move at speed by developing tests early and executing tests faster, with a minimal maintenance footprint. At FINRA, we developed MSL (pronounced "Missile") to enable Agile teams leveraging layered architectures such as MVC to test their UI code sooner and faster in isolation.

MSL supports integration testing of UI code (such as JavaScript, HTML, and CSS) by deploying locally on a Node.js server and configuring mock HTTP responses from test code using one of our clients (Java, JavaScript, or Node.js). This talk will introduce key features of MSL with a few examples.
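
MSL itself runs on Node.js and has its own client APIs, but the underlying idea of registering mock HTTP responses from test code so UI code can be exercised in isolation can be sketched with nothing more than the Python standard library (the endpoint and payload here are illustrative, not MSL's API):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

MOCKS = {}  # path -> (status, body)

def mock_response(path, status, body):
    """Register a canned response, as a mock-server client might."""
    MOCKS[path] = (status, body)

class MockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        status, body = MOCKS.get(self.path, (404, b"not mocked"))
        payload = body if isinstance(body, bytes) else json.dumps(body).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)
    def log_message(self, *args):  # keep test output quiet
        pass

# Test code configures the response the UI under test will receive.
mock_response("/api/user", 200, {"name": "tester", "loggedIn": True})

server = HTTPServer(("127.0.0.1", 0), MockHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/api/user"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)
print(data)  # {'name': 'tester', 'loggedIn': True}
server.shutdown()
```

Because the mock server stands in for the real backend, the UI layer can be tested early, in isolation, and without waiting on downstream services.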

Links: Video, Slides

The Testing User Experience

Alex Eagle (Google)

Google's products release frequently, and that requires significant automated testing and "build-copping". We are now working to offer our testing infrastructure as part of Google Cloud Platform. This talk will discuss some of the methodology we use to keep our builds green and our products defect-free, and give a preview of how we are exposing this to the world.

Links: Video, Slides

Round Table Talk 1 - Mobile Cross-Platform Testing

Links: Video, Slides

Round Table Talk 2 - Document Automation Coverage

Links: Video, Slides

Impact of Community Structure on SAT Solver Performance

Zack Newsham (University of Waterloo)

Modern CDCL SAT solvers routinely solve very large industrial SAT instances in relatively short periods of time. It is clear that these solvers somehow exploit the structure of real-world instances. However, to date there have been few results that precisely characterise this structure. In this paper, we provide evidence that the community structure of real-world SAT instances is correlated with the running time of CDCL SAT solvers. It has been known for some time that real-world SAT instances, viewed as graphs, have natural communities in them. A community is a sub-graph of the graph of a SAT instance, such that this sub-graph has more internal edges than edges going out to the rest of the graph. The community structure of a graph is often characterised by a quality metric called Q. Intuitively, a graph with high-quality community structure (high Q) is easily separable into smaller communities, while one with low Q is not. We provide three results based on empirical data which show that the community structure of real-world industrial instances is a better predictor of the running time of CDCL solvers than other commonly considered factors such as the numbers of variables and clauses. First, we show that there is a strong correlation between the Q value and the Literal Block Distance metric of quality of conflict clauses used in clause-deletion policies in Glucose-like solvers. Second, using regression analysis, we show that the number of communities and the Q value of the graph of real-world SAT instances are more predictive of the running time of CDCL solvers than traditional metrics like the number of variables or clauses. Finally, we show that randomly-generated SAT instances with 0.05 ≤ Q ≤ 0.13 are dramatically harder to solve for CDCL solvers than otherwise.
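
The quality metric Q referred to here is Newman's modularity, computable directly from a graph and a candidate partition: the fraction of edges inside communities minus the fraction expected if edges were placed at random. A minimal sketch (the example graph is invented, not a real SAT instance):

```python
def modularity(edges, communities):
    """Newman modularity Q of a partition of an undirected graph:
    Q = sum over communities c of (e_c / m - (d_c / 2m)^2),
    where m is the edge count, e_c the edges inside c, d_c the total
    degree of c's nodes."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    label = {node: i for i, group in enumerate(communities) for node in group}
    q = 0.0
    for i, group in enumerate(communities):
        internal = sum(1 for u, v in edges if label[u] == i and label[v] == i)
        total_degree = sum(degree[n] for n in group)
        q += internal / m - (total_degree / (2 * m)) ** 2
    return q

# Two tightly connected triangles joined by a single bridge edge:
# a clearly separable graph, so the two-community partition scores well.
clustered = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)]
two_groups = [{1, 2, 3}, {4, 5, 6}]
print(round(modularity(clustered, two_groups), 3))  # 0.357
```

For SAT instances, the graph is typically the variable incidence graph (variables as nodes, clauses contributing edges), and Q is taken over the best partition a community-detection algorithm can find.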

Links: Video, Slides

Beyond Coverage: What Lurks in Test Suites?

Patrick Lam (University of Waterloo)

We all want "better" test suites. But what makes for a good test suite? Certainly, test suites ought to aim for good coverage, at least at the statement coverage level. To be useful, test suites should run quickly enough to provide timely feedback.

This talk will investigate a number of other dimensions on which to evaluate test suites. The talk claims that better test suites are more maintainable, more usable (for instance, because they run faster, or use fewer resources), and have fewer unjustified failures. In this talk, I'll present and synthesize facts about 10 open-source test suites (from 8,000 to 246,000 lines of code) and evaluate how they are doing.

Links: Video, Slides

Going Green: Cleaning up the Toxic Mobile Environment

Thomas Knych (Google), Stefan Ramsauer (Google), Valera Zakharov (Google) and Vishal Sethia (Google)

We will present tools and techniques for creating fast, stable, hermetic test environments for executing Android tests in both interactive development and continuous integration modes. This builds on the higher level talk we presented at the last GTAC.

Links: Video, Slides