Opening Remarks
Tony Voellm (Google)
Links: Video
Opening Keynote - Evolution from Quality Assurance to Test Engineering
Ari Shamash (Google)
You built an app. You launched it. You figured you'd get it out there, build up some volume, get some funding, throw it all out, and then start from scratch so you can "do it right". But demands for new features are sky high, and you are now being asked to push towards unprecedented scale at an unheard-of velocity. Yikes! Now what?
You cannot throw it away and start from scratch; you'll need to evolve what you have while continuing to add high-quality features at breathtaking speed. In addition, you need to ensure that what is already there doesn't break. How do you do this? Fortunately, a new field is forming within the software engineering industry that addresses this common scenario: at Google, we call this "test engineering".
This talk will focus on what test engineering is, how it evolved from quality assurance, and how the industry as a whole has implemented test engineering (with specific examples of how it is implemented at Google).
Testing Systems at Scale @Twitter
James Waldrop (Twitter)
James will be discussing the tools, process, and philosophy that go into performance testing at Twitter. Particular emphasis will be placed on the Iago open source load testing library, which he wrote to enable Twitter's engineering teams to perform load tests before deploying code to production. The talk will dive into the implementation details of some of these tests (including source code) and how complicating factors such as OAuth and arbitrary Thrift protocols are managed.
How Do You Test a Mobile OS?
David Burns (Mozilla) and Malini Das (Mozilla)
This is the problem that confronted Mozilla when we decided to venture into the world of FirefoxOS. Figuring out where to start and how to do it was going to be an interesting task. Come listen to how we solved this problem and how we created a new framework.
Mobile Automation in Continuous Delivery Pipeline
Igor Dorovskikh (Expedia) and Kaustubh Gawande (Expedia)
Expedia started investing in Mobile Web and iOS/Android apps in early 2012. At the same time, Test Engineers started developing test automation solutions to build quality and testability into the products from the beginning. In this talk, we will share our experience and lessons learned from using open source tools to build automated testing into Expedia's Agile development and continuous delivery environment. We will talk about the Test Pyramid and go into more detail on specific open source tools that have worked well for us, including BDD tools such as Cucumber, the web automation tool Selenium-WebDriver, the iOS automation tool Frank, the Android automation tools Robotium and Calabash, and the Continuous Integration system Jenkins. In addition, we will share some of the Agile delivery principles we strive to adopt, such as TDD, Pair Programming, and Build and Test Radiators. Finally, we will share some of the benefits we have realized from our investment in Agile and test automation and how that is getting us to our Continuous Delivery goals.
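To give a concrete flavor of this kind of stack, here is a minimal sketch of a Cucumber-JVM step definition driving Selenium-WebDriver from Java. The feature wording, page URL, and locators are hypothetical illustrations, not Expedia's actual code.

```java
import static org.junit.Assert.assertTrue;

import cucumber.api.java.After;
import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

/** Hypothetical step definitions for a hotel-search feature. */
public class HotelSearchSteps {
    private final WebDriver driver = new FirefoxDriver();

    @Given("^I am on the home page$")
    public void iAmOnTheHomePage() {
        driver.get("https://www.example.com/");  // placeholder URL
    }

    @When("^I search for hotels in \"([^\"]*)\"$")
    public void iSearchForHotelsIn(String city) {
        driver.findElement(By.name("destination")).sendKeys(city);
        driver.findElement(By.id("search-button")).click();
    }

    @Then("^I should see a list of results$")
    public void iShouldSeeResults() {
        assertTrue(!driver.findElements(By.cssSelector(".hotel-result")).isEmpty());
    }

    @After
    public void tearDown() {
        driver.quit();
    }
}
```

In a real suite the WebDriver instance would typically be shared across steps and configured per environment (local browser, Selenium Grid, or a mobile web driver) rather than created inside the step class.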
Automated Set-Top Box Testing with GStreamer and OpenCV
David Röthlisberger (YouView)
We'll build a video-capturing image recognition system in 3 minutes, using GStreamer's command-line tools and OpenCV. (GStreamer is an open-source media-handling framework; OpenCV, "Open Source Computer Vision", is an open-source image-processing library.)
A leading example of such a system is http://stb-tester.com, an open-source tool developed at YouView to automate the UI testing of our set-top boxes. We'll describe stb-tester, the flexibility offered by its GStreamer underpinnings, some of the possibilities it opens up, and the challenges ahead.
Links: Video
WebDriver for Chrome
Ken Kania (Google)
From its start as a Windows-only browser, Chrome has expanded to Mac, Linux, ChromeOS, and most recently, Android and iOS. User-level testing of web applications across these platforms has been difficult and has necessitated various automation approaches. This talk will describe the work the Chrome team is doing to make WebDriver available for Chrome on all platforms. It will include a technical look at the underlying approach but will focus on how developers can use the new ChromeDriver to write tests for Chrome's various platforms. The current state of the project and a roadmap for its future will also be covered.
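For readers who have not used it, a ChromeDriver-based test has the same shape as any other WebDriver test. The following minimal sketch assumes a locally installed chromedriver binary; the URL and element names are illustrative only.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class ChromeSmokeTest {
    public static void main(String[] args) {
        // Assumes the chromedriver binary is on the PATH, or set via
        // -Dwebdriver.chrome.driver=/path/to/chromedriver
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://www.google.com/");
            driver.findElement(By.name("q")).sendKeys("webdriver");
            driver.findElement(By.name("q")).submit();
            System.out.println("Title: " + driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}
```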
Karma - Test Runner for JavaScript
Vojta Jina (Google)
An introduction to Karma, a test runner that makes testing JavaScript applications in real browsers frictionless and enjoyable.
Testing is not optional when you are building a JavaScript application that must work across many browsers and devices. However, executing tests in all of these various environments is hard. Karma turns this typically painstaking task into a piece of cake. It allows you to execute JavaScript tests in real browsers or on devices such as your phone or tablet, directly from the comfort of your terminal or your favorite IDE.
Links: Video
Automated Video Quality Measurements
Patrik Höglund (Google)
Yes, it is possible to automatically test complex, subjective measurements such as video quality! This talk will show how we constructed a continuous, automated end-to-end test of a WebRTC video call. We'll take a look at the toolchain at a high level and what challenges we ran into while constructing it. This is perfect if you want inspiration for how to take your media testing to the next level.
When Bad Things Happen to Good Applications...
Minal Mishra (Netflix)
The boom of mobile and tablet computing has inundated the software industry with application development platforms. Each of these platforms offers its own magical experience to end users, and consumer-facing software companies always attempt to put their best foot forward when they develop an application for them. However, the biggest challenge in application development only begins after a company rolls out the first version of its application. Consumers and software companies alike want the latest features and functionality out of development as soon as possible, at the highest quality. This leads to constant code churn in every layer of the stack. We, UI automation engineers, build a variety of detection systems to catch application issues sooner rather than later. In this talk I will share some of the challenges and successes behind one such detection system, which helped find problems that originated outside the application layer but still adversely impacted the user experience.
Testing for Educational Gaming and Educational Gaming for Testing
Tao Xie (North Carolina State University)
This talk presents Pex4Fun (http://www.pexforfun.com/), which leverages automated test generation to underpin automatic grading in an online programming system that can scale to hundreds of thousands of users. It provides a programming-oriented gaming experience outside of the classroom, training users in various programming and software engineering skills, including testing skills such as writing parameterized unit tests. Pex4Fun makes a significant contribution to the known problem of assignment grading, as well as providing a fun learning experience based on interactive gaming. Pex4Fun has been gaining high popularity in the community: since it was released to the public in June 2010, the number of clicks of the "Ask Pex!" button (indicating attempts made by users to solve games at Pex4Fun) had reached over one million by early 2013.
Closing Keynote - How Facebook Tests Facebook on Android
Simon Stewart (Facebook)
Facebook is one of the most popular Android applications there is. In this talk, you'll find out what Facebook does to ensure that each release is as good as it can be. We'll cover everything from how we manage our code, through our approaches to testing, all the way out to dogfooding.
Opening Keynote - Testable JavaScript - Architecting Your Application for Testability
Mark Trostler (Google)
Testable JavaScript is a process. Whether you are starting with a blank slate, an already implemented application, or somewhere in between, being able to test your JavaScript code simply, cleanly, and effectively is a necessary feature. Code that cannot be tested will be rewritten.
While JavaScript is unique in the myriad of environments within which it runs, there are several tried-and-true 'testable' methodologies from other languages that also hold true for JavaScript. And of course there remain the unique challenges that JavaScript developers must face while writing and testing their code.
What patterns make code testable? Which anti-patterns hinder testing? What metrics and common-sense guideposts can be used to measure the testability of our code? Once the process of creating testable code has started, now what?
Join me to break down the process of writing testable JavaScript. We will investigate ideas, patterns, and methodologies that greatly increase the testability, and hence the maintainability, correctness, and longevity of your code. Whether you write client- or server-side JavaScript mastering this process will greatly enhance the quality of your code.
Breaking the Matrix - Android Testing at Scale
Thomas Knych (Google), Stefan Ramsauer (Google) and Valera Zakharov (Google)
Are you ready to take the red pill?
Mobile has changed the way humans interact with computers. This is great, but as engineers we're faced with an ever-growing matrix of environments our code runs on. The days of considering only a handful of browsers and screen resolutions are not coming back. How can engineers cope with the Matrix? We'll cover how Google is fighting this testing problem on workstations, in the cloud, and in your head...
"I'm trying to free your mind, Neo. But I can only show you the door. You're the one that has to walk through it."
Android UI Automation
Guang Zhu (朱光) (Google) and Adam Momtaz (Google)
As Android gains popularity in the mobile world, application developers and OEM vendors are exploring ways to perform end-to-end, UI-driven testing of applications or of the entire platform. After a brief review of existing UI automation solutions on Android, this talk introduces the recently released Android UI Automator framework and gives an inside look at the framework, typical use cases, and workflows.
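As a taste of the framework, here is a minimal, hypothetical UI Automator test that returns to the home screen and launches the Settings app. The selectors (launcher content descriptions, text labels) vary by device and launcher and are assumptions, not part of the talk.

```java
import com.android.uiautomator.core.UiObject;
import com.android.uiautomator.core.UiObjectNotFoundException;
import com.android.uiautomator.core.UiScrollable;
import com.android.uiautomator.core.UiSelector;
import com.android.uiautomator.testrunner.UiAutomatorTestCase;

/** Minimal UI Automator test: go Home and launch the Settings app. */
public class LaunchSettingsTest extends UiAutomatorTestCase {

    public void testOpenSettings() throws UiObjectNotFoundException {
        // Start from the home screen.
        getUiDevice().pressHome();

        // Open the "all apps" tray (content description may differ per launcher).
        UiObject allApps = new UiObject(new UiSelector().description("Apps"));
        allApps.clickAndWaitForNewWindow();

        // Scroll through the app grid until "Settings" is visible, then tap it.
        UiScrollable appGrid = new UiScrollable(new UiSelector().scrollable(true));
        UiObject settings = appGrid.getChildByText(
                new UiSelector().className("android.widget.TextView"), "Settings");
        settings.clickAndWaitForNewWindow();

        // Verify that the Settings screen is in the foreground.
        UiObject settingsTitle = new UiObject(new UiSelector().text("Settings"));
        assertTrue("Settings did not open", settingsTitle.exists());
    }
}
```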
Appium: Automation for Mobile Apps
Jonathan Lipps (Sauce Labs)
Appium is a Node.js server that automates native and hybrid mobile applications (both iOS and Android). Appium's philosophy dictates that apps should not be modified in order to be automated, and that you should be able to write your test code in any language or framework. The result is a Selenium WebDriver server that speaks mobile like a native. Appium runs on real devices and emulators and is completely open source, making it a wonderfully friendly way to get started with mobile test automation. In this talk I will outline the principles that inform Appium's design, place Appium in the space of other mobile automation frameworks, and introduce the architecture that makes the magic happen. Finally, I'll dig into the code for a simple test of a brand-new mobile app and demonstrate Appium running this test on iPhone and Android.
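To illustrate the "any language, unmodified app" idea, here is a rough Java sketch of a client talking to a locally running Appium server through the plain Selenium RemoteWebDriver API. The capability names, app path, and locator are illustrative assumptions and may differ across Appium versions.

```java
import java.io.File;
import java.net.URL;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

/** Sketch of a test that talks to a locally running Appium server. */
public class AppiumSketchTest {
    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        // Capability names and values are illustrative; check the Appium docs
        // for the set supported by your Appium version.
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", "Android Emulator");
        caps.setCapability("app", new File("path/to/my-app.apk").getAbsolutePath());

        // Appium listens on port 4723 by default and speaks the WebDriver wire protocol.
        WebDriver driver = new RemoteWebDriver(
                new URL("http://127.0.0.1:4723/wd/hub"), caps);
        try {
            driver.findElement(By.name("Sign in")).click();
        } finally {
            driver.quit();
        }
    }
}
```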
Building Scalable Mobile Test Infrastructure for Google+ Mobile
Eduardo Bravo (Google)
Testing native apps in a meaningful, stable, and scalable way is a challenge. The G+ team has developed efficient solutions to these problems by providing the right infrastructure for each of the complex test scenarios that mobile presents. Our current test infrastructure provides the right tools for both the iOS and Android apps, giving our development team the confidence that new changes won't break existing clients.
Espresso: Fresh Start to Android UI Testing
Valera Zakharov (Google)
Update [October 2013]: Espresso is now open source. See https://code.google.com/p/android-test-kit/.
Developing a reliable Android test should be as quick and easy as pulling a shot of espresso. Unfortunately, with existing tools, it may feel more like making a double-shot-caramel-sauce-upside-down-single-whip-half-decaf-latte - confusing and rarely consistent. Espresso is a new Android test framework that lets you write concise, beautiful, and reliable UI tests quickly. The core API is small, predictable, and easy to learn - yet it is also open for customization. Espresso tests state their expectations, interactions, and assertions clearly without distracting boilerplate, custom infrastructure, or messy implementation details getting in the way. Tests run optimally fast - leave your waits, syncs, sleeps, and polls behind and let the framework gracefully manipulate and assert on your UI when it is at rest. Start enjoying writing and executing UI tests - try a shot of Espresso.
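For a sense of the API style described above, here is a small, hypothetical Espresso test. The activity, view IDs, and expected text are made up for illustration, and the static-import package names are taken from the android-test-kit release of the time (later releases moved Espresso under different packages).

```java
import static com.google.android.apps.common.testing.ui.espresso.Espresso.onView;
import static com.google.android.apps.common.testing.ui.espresso.action.ViewActions.click;
import static com.google.android.apps.common.testing.ui.espresso.action.ViewActions.typeText;
import static com.google.android.apps.common.testing.ui.espresso.assertion.ViewAssertions.matches;
import static com.google.android.apps.common.testing.ui.espresso.matcher.ViewMatchers.withId;
import static com.google.android.apps.common.testing.ui.espresso.matcher.ViewMatchers.withText;

import android.test.ActivityInstrumentationTestCase2;

/** Hypothetical Espresso test for a simple greeting screen (GreetingActivity is illustrative). */
public class GreetingActivityTest extends ActivityInstrumentationTestCase2<GreetingActivity> {

    public GreetingActivityTest() {
        super(GreetingActivity.class);
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        getActivity();  // launch the activity under test
    }

    public void testGreetingIsShown() {
        // Type a name, tap the button, and assert on the result -- no sleeps or polling;
        // Espresso synchronizes with the UI thread before each interaction and assertion.
        onView(withId(R.id.name_field)).perform(typeText("Ada"));
        onView(withId(R.id.greet_button)).perform(click());
        onView(withId(R.id.greeting_label)).check(matches(withText("Hello, Ada!")));
    }
}
```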
Web Performance Testing with WebDriver
Michael Klepikov (Google)
In web performance testing, we know pretty well how to analyze a page load. We need to move beyond a page load though: modern apps are highly interactive, and operations tend not to reload the entire page, but rather update it. Different people, myself included, have integrated WebDriver into web performance test harnesses, which helps, but still keeps performance tests separate from the rest of the UI test suite. I propose to build performance testing features right into WebDriver itself, leveraging its recently added Logging API. This makes it possible to collect performance metrics while running regular functional tests, allowing for a much more seamless integration of performance tests into the overall development and test flow. It is also much less disruptive to the custom build/test toolchains that almost every large organization creates.
I will demonstrate this with the new-generation ChromeDriver (WebDriver for the Chromium browser).
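A minimal sketch of the idea, using the WebDriver Logging API together with ChromeDriver from Java. The URL is a placeholder, and the exact set of events that show up in the performance log depends on the ChromeDriver version.

```java
import java.util.logging.Level;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.logging.LogEntry;
import org.openqa.selenium.logging.LogType;
import org.openqa.selenium.logging.LoggingPreferences;
import org.openqa.selenium.remote.CapabilityType;
import org.openqa.selenium.remote.DesiredCapabilities;

public class PerfLoggingExample {
    public static void main(String[] args) {
        // Ask ChromeDriver to record performance log entries alongside the functional test.
        LoggingPreferences logPrefs = new LoggingPreferences();
        logPrefs.enable(LogType.PERFORMANCE, Level.ALL);

        DesiredCapabilities caps = DesiredCapabilities.chrome();
        caps.setCapability(CapabilityType.LOGGING_PREFS, logPrefs);

        WebDriver driver = new ChromeDriver(caps);
        try {
            driver.get("https://www.example.com/");
            // Each entry carries a timestamp and a JSON message (DevTools-style event)
            // that a harness can turn into timing metrics.
            for (LogEntry entry : driver.manage().logs().get(LogType.PERFORMANCE)) {
                System.out.println(entry.getTimestamp() + " " + entry.getMessage());
            }
        } finally {
            driver.quit();
        }
    }
}
```

Because the log is collected through the same driver session, the same pattern works in the middle of an ordinary functional test, which is the seamless integration the talk argues for.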
Continuous Maps Data Testing
Yvette Nameth (Google) and Brendan Dhein (Google)
Continuous testing is generally about running unit tests and integration tests. But when the data that your server processes is actually the biggest cause of change, how do you ensure that consumers of the data still find it usable and that nothing crashes under the rate of change or under a bad change? We'll discuss techniques for continuous data testing, with examples from Google Maps.
Finding Culprits Automatically in Failing Builds - i.e. Who Broke the Build?
Celal Ziftci (UCSD) and Vivek Ramavajjala (Google)
Continuous build is one of the key pieces of infrastructure at Google. When a build fails, it is vital to pinpoint the culprit changelist (CL) or changelists quickly, so that the failure can be fixed and the build brought back to green.
Culprit detection solutions exist for small and medium-sized builds, but not for large integration builds.
Our culprit finder targets finding the culprit CL for large builds automatically, in a very short time frame and with high success. Based on production usage across multiple projects over the last 9 months, the culprit finder provides very promising results. Come to our talk to see how we implemented the culprit finder, how successful it is in production, and what it looks and feels like.
Empirical Investigation of Software Product Line Quality
Katerina Goseva-Popstojanova (West Virginia University)
Software product lines exhibit a high degree of commonality among the systems in the product line and a well-specified number of possible variations. Based on data extracted from two case studies - a medium-sized industrial product line and a large, evolving open source product line - we explored empirically whether systematic reuse improves quality and supports successful prediction of potential future faults from previously experienced faults, source code metrics, and change metrics. Our results confirmed, in a software product line setting, the findings of others that faults are more highly correlated with change metrics than with static code metrics. The quality assessment showed that although older packages (including commonalities) continually changed, they retained low fault densities. Furthermore, the open source product line improved in quality as it evolved through releases. Predictions based on generalized linear regression models accurately ranked the packages according to their post-release faults using models built on the previous release. The results also revealed that post-release fault predictions benefit from additional product line information.
AddressSanitizer, ThreadSanitizer and MemorySanitizer - Dynamic Testing Tools for C++
Kostya Serebryany (Google)
AddressSanitizer (ASan) is a tool that finds buffer overflows (in the stack, heap, and globals) and use-after-free bugs in C/C++ programs. ThreadSanitizer (TSan) finds data races in C/C++ and Go programs. MemorySanitizer (MSan) is a work-in-progress tool that finds uses of uninitialized memory in C++. These tools are based on compiler instrumentation (LLVM and GCC), which makes them very fast (e.g., ASan incurs just a 2x slowdown). We will share our experience of using these tools for testing at huge scale.
Closing Keynote - Drinking the Ocean - Finding XSS at Google Scale
Claudio Criscione (Google)
Cross-site scripting, or XSS, is the modern-day equivalent of the medieval black plague in the web application world: it's widespread, it's bad, and there are few or no technical ways to detect it until it's too late. DOM XSS is a particularly nasty variant, as it requires a real browser (or equivalent) to be detected: a difficult problem with few automated solutions available.
We needed powerful, self-driving tools to identify DOM XSS early in the development lifecycle, usable by engineers outside of the security team: all we wanted was a product that could scan our huge, fast-moving, highly complex and arcane corpus of applications... and of course, we found none. So we built our own: a web application scanner targeting DOM XSS, designed on top of standard Google technologies. It runs on App Engine and leverages the powerful Chrome browser and some hundreds of CPUs as a security scanning platform.
It is also a good citizen of the testing arsenal at Google: it lives inside our testing infrastructure, rather than being a tool reserved for the security team.
In this talk we outline our novel approach, the challenges we faced in scaling our system to Google size, and the ideas behind our detection and crawling models for JavaScript-intensive applications.