UCAAT 2016 Accepted Abstracts

Zoltan Elzer, Patrik Paksy and Benjamin Teke. Big Data interpretation and challenges in mobile network testing

Abstract: The complexity of mobile networks is continuously increasing: they mesh whole countries and comprise numerous network functions and the interfaces between them. Ericsson is one of the biggest manufacturers of network functions such as radio base stations, user databases and application servers. All these functions run software logic that handles signalling protocols, maintains user and service information and communicates with peer functions in a standardized way. Despite the well-defined functional roles and signalling protocols, testing every node and every interface between them individually does not necessarily imply that the network as a whole behaves as required. Therefore, network-level testing of signalling sequences is necessary and useful.
Testing of telecommunication networks is performed on multiple levels. The complexity of testing and the amount of data that needs to be maintained and processed increase exponentially as we move from single software module tests to end-to-end, network-level acceptance testing of a whole network. Monitoring the network and troubleshooting problems are an essential part of the testing flow. Automating such testing requires the support of multiple protocols and smart techniques to correlate signalling messages captured on different network interfaces. Given the sheer size of a network and its data traffic, it also needs a fast, efficient and scalable way of performing this correlation, which can be achieved by using Big Data processing on a computer cluster. These technologies not only give us a way of managing this amount of data without the need for a supercomputer; with the right tools and implementation, online, near real-time network analysis is also possible. 
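As an editorial illustration (not from the authors' material), the kind of cluster-side correlation described above could be sketched roughly as follows with Spark. The field names (session_id, interface, msg_type, timestamp), the input path and the "missing answer" check are assumptions made for the example only.

```python
# Hedged sketch: correlating captured signalling messages by a session key on a
# Spark cluster. Schema fields and the input path are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("signalling-correlation").getOrCreate()

# Each record is one decoded signalling message captured on some interface.
messages = spark.read.json("hdfs:///captures/decoded/*.json")

# Group all messages that belong to the same end-to-end session, keep them
# time-ordered, and flag sessions that never produce an expected answer.
sessions = (
    messages
    .groupBy("session_id")
    .agg(
        F.sort_array(
            F.collect_list(F.struct("timestamp", "interface", "msg_type"))
        ).alias("flow"),
        F.countDistinct("interface").alias("interfaces_seen"),
        F.max(F.when(F.col("msg_type") == "ANSWER", 1).otherwise(0)).alias("answered"),
    )
)

# Sessions without an answer are candidates for troubleshooting.
sessions.filter(F.col("answered") == 0).show(truncate=False)
```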

In addition to the new data-handling approach, automatic recognition of the network and its elements, categorization of signalling flows, and automatic highlighting of faulty scenarios can further speed up the whole testing flow. With this help, testers are able to focus only on the most important data within the huge amount of available information, and later, as an evolution, it would be possible to apply automatic procedures for well-known situations. 

 

Andras Naszrai. Contract testing in the cloud in GE Healthcare

Abstract: We have created a data-driven contract test framework to lower the integration costs of our cloud-based software, and we have put this framework into the cloud to use it as a testing-as-a-service solution. We use the same framework to automatically simulate the surroundings of our software components, further decreasing the cost of our contract testing efforts by creating on-demand isolated environments.
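To illustrate the general idea of a data-driven contract check (this is an editorial sketch, not the GE Healthcare framework), a contract can be expressed as plain data and verified against a live provider. The endpoint, field names and types below are assumptions for the example.

```python
# Hedged sketch of a data-driven contract check: the contract is plain data
# (field name -> expected type), so new interactions can be added without code.
# The endpoint and field names are illustrative assumptions.
import requests

CONTRACT = {
    "patientId": str,
    "heartRate": int,
    "recordedAt": str,
}

def verify_contract(base_url: str) -> list[str]:
    """Return a list of contract violations for one provider endpoint."""
    response = requests.get(f"{base_url}/observations/latest", timeout=5)
    response.raise_for_status()
    body = response.json()

    violations = []
    for field, expected_type in CONTRACT.items():
        if field not in body:
            violations.append(f"missing field: {field}")
        elif not isinstance(body[field], expected_type):
            violations.append(f"{field}: expected {expected_type.__name__}")
    return violations

if __name__ == "__main__":
    print(verify_contract("http://localhost:8080"))
```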

 

Andrew Pollner. Test Automation Engineering

Abstract: As with testing itself, test automation does not have a formal pedigree of education or industry standard support. While this is all starting to change, the application of test automation for a given project is still not a guaranteed success, given the technical and other challenges.
By creating a body of knowledge for test automation engineering, the ISTQB is helping to define standard concepts that anybody working with test automation should know and be able to apply to their test automation efforts. This presentation will be a discussion of the ISTQB Advanced Test Automation – Engineering Syllabus, a published document currently in beta.

 

Harry Sneed. Assuring the Quality of Test Cases

Abstract: IT users are now making a significant investment in testing their applications. A major part of that investment is devoted to the creation and maintenance of test cases. Many organizations already have several thousand test cases. The sheer number of test cases is, however, not enough. The test cases must also be of sufficient quality, meaning that they not only cover the requirements but are also consistent and complete, in addition to being formally correct. The users must know that they can rely on their test cases. For that reason the quality of the test cases should be checked. This presentation describes an approach to ensuring the quality of test cases by checking them against the requirement specification and the design as well as against the formal rules for defining test cases. The approach has been implemented in a tool and applied in several IT projects. The results indicate that this approach can be highly useful in assessing the quality of test cases.

Teresa Song. Continuous Delivery with Efficient Automation Testing System in Cloud

Abstract: Test efficiency is key to supporting continuous product delivery with high quality and frequency. In the rapidly changing Internet of Things (IoT) market, and with high quality requirements from customers, releasing a product with a huge number of components quickly and flexibly becomes a big challenge for us:
- how to manage a growing number of test cases as features are added;
- how to test more complex scenarios easily;
- how to keep test case execution time short;
- how to use the hardware efficiently.
In practice, combining a "Risk Based Testing Model" with the cloud helped very much, and OpenStack was selected as the cloud service tool:
- manage the test scope by introducing the Risk Based Testing Model;
- decouple testing among components using cloud techniques;
- shorten test environment preparation time by calling the OpenStack API, and standardize test environments with a predefined image pool;
- centralize hardware resource management: automatically allocate more resources to the key test activities in each phase.
It took us about half a year to move all test cases (>10,000) to the cloud seamlessly. Time-to-fault-finding is reduced by 30%, product quality is visible and demonstrated on the office TV every day, and a release candidate is always available. The way we do cloud-based testing, and the lessons we learned while moving tests to the cloud, can help others who face similar difficulties, especially those working on multi-component products. I also hope to have the chance to discuss cloud-based testing with other test experts.
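The OpenStack-based environment bring-up mentioned above could, in rough terms, look like the following editorial sketch using the openstacksdk cloud layer. The cloud name, image, flavor and network names are assumptions, not the authors' actual configuration.

```python
# Hedged sketch: provisioning a test environment on demand through the
# OpenStack API (openstacksdk), using an image from a predefined pool.
# Cloud, image, flavor and network names are illustrative assumptions.
import openstack

def bring_up_test_env(name: str) -> str:
    conn = openstack.connect(cloud="test-cloud")
    server = conn.create_server(
        name=name,
        image="predefined-test-image",   # standardized environment image
        flavor="m1.medium",
        network="test-net",
        wait=True,                       # block until the VM is ACTIVE
    )
    return server.id

def tear_down_test_env(server_id: str) -> None:
    conn = openstack.connect(cloud="test-cloud")
    conn.delete_server(server_id, wait=True)

if __name__ == "__main__":
    env_id = bring_up_test_env("iot-regression-env-01")
    print("environment ready:", env_id)
```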

Branka Rakic. Continuous testing starts with the smart architectural decisions

Abstract: The quality of software is based on the synergy of all the parties involved in the effort: Development, Testing and IT Operations. From that perspective, the software testing process starts from the moment the first line of code is created and submitted to the code repository, to ensure that the success criteria and quality requirements will be met and will comply with the agreed Definition of Done. 

An efficient test strategy is based on a proactive approach to building, maintaining and regularly checking code quality throughout the entire software lifecycle. It builds on current best practices, aiming to extend and refine them as much as possible in the direction of test automation. Tests are automated and integrated into the build process as early as possible. A continuous testing platform runs in the background, automatically executing the tests and ensuring that issues are identified almost immediately. This reduces the time-to-release and closes the circle to ensure a successful Continuous Delivery process. 

In this presentation, a case study of a public transport software system will be presented. The system consists of Android tablets and different sensors located in vehicles, which send messages to the back-end. The back-end processes the messages and forwards them back to the vehicles. 

The system is implemented in C# and the continuous testing is implemented using the NUnit framework. In order to support full automation, tools such as a message generator and a data player have been implemented. Furthermore, the publish-subscribe pattern provides message exchange between system components and test hooks at the same time. Other challenges that had to be faced were simulating real scenarios, such as sending messages from a fleet where the messages are produced by multiple vehicles with multiple IP addresses. 

In order to achieve high test stability, a clean test environment is provisioned using PowerShell DSC scripts for every execution as part of the continuous pipeline. Also, every test is independent and has its own data set. NDBunit enables creating the required data set, which is removed within the same test, leaving a clean database for the other tests.
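As a language-agnostic editorial sketch of this per-test data isolation idea (the project itself uses C#, NUnit and NDBunit; the Python, table and column names below are assumptions), each test creates exactly the data it needs and removes it again so the next test sees a clean database:

```python
# Hedged sketch: each test owns its data set, created in setUp and removed in
# tearDown, so tests stay independent of each other.
import sqlite3
import unittest

class VehicleMessageTest(unittest.TestCase):
    def setUp(self):
        self.db = sqlite3.connect("test.db")
        self.db.execute("CREATE TABLE IF NOT EXISTS vehicles (id TEXT, line TEXT)")
        # Data set owned by this test only.
        self.db.execute("INSERT INTO vehicles VALUES ('bus-42', 'line-7')")
        self.db.commit()

    def tearDown(self):
        # Remove the test's own data so the next test sees a clean database.
        self.db.execute("DELETE FROM vehicles WHERE id = 'bus-42'")
        self.db.commit()
        self.db.close()

    def test_vehicle_is_known(self):
        row = self.db.execute(
            "SELECT line FROM vehicles WHERE id = 'bus-42'").fetchone()
        self.assertEqual(row[0], "line-7")

if __name__ == "__main__":
    unittest.main()
```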

 

Sebastian Dengler and Patrick Wunner. Automated System Testing towards Continuous Integration in Embedded Environments (Automotive Industry)

Abstract: Continuous Integration, already implemented for a large range of software development domains, is also becoming more and more important in embedded environments like the development of electronic control units in the automotive industry.

Recently, we introduced a concept to integrate automated Hardware-in-the-Loop testing of automotive electronic control units into an emerging continuous integration environment. The continuous integration environment considered by our concept is based on a Jenkins server and the test framework used for automated testing and reporting is iTestStudio, which is a proprietary development. 

In this presentation, the intention is to share our recent experience with implementing this concept and applying it to a real-life customer project. 

The talk consists of three parts. 

In the first part, the importance of continuous integration for the development of embedded systems, especially automotive electronic control units, will be highlighted, and possible use cases with a focus on testing activities will be introduced. 

In the second part, our conceptual approach for integrating automated Hardware-in-the-Loop testing into commonly used continuous integration environments will be presented. 

The third part shares our experience from implementing and applying the approach in the test process of a real-life customer project. 

The talk concludes with best practice recommendations and lessons learnt. 

 

Gabor Megyaszai. Streamlining performance verification through automation and design

Abstract: While addressing the changes necessary to achieve continuous delivery capability and DevOps practices, one of the first obstacles we had to face was the overhead that the traditional way of performance verification put on delivery time. The creation of environments and long test executions, mostly caused by insufficient practices, heavily limited the timeframe and even the content of each release. The problems originated in the low level of automation of SUT creation and configuration, and in the difficulty of automating test case execution, especially result analysis. 

To establish a baseline, we collected data from the past two years about the number of environment bring-ups, bring-up times, the number of test case executions, and the number of test results left in the "Not analysed" state. 

To tackle this complex problem we took two parallel paths. On the one hand, we started redefining how we do performance tests by reducing test and SUT complexity to the smallest possible increment. On the other hand, we started heavy automation work to cover configuration and execution. 

We created a scalable dynamic deployment framework (AvED, Automated virtual Environment Deployment), with which we can carry out on-demand SUT and non-SUT deployment in a fraction of the original configuration time. This Python-based application performs the deployment through a REST API and triggers the configuration system. The configuration can be read from three different sources: a predefined XML file, a description derived from the test requirements, or manual selection (for special circumstances). We joined AvED with Jenkins, effectively fitting performance verification into our SCM/CI pipeline. With AvED and scalable systems we are currently able to deploy 48 parallel test environments onto our available cloud capacity, on both VMware and OpenStack infrastructure. 
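The deployment flow described above (predefined XML in, REST call out) could be sketched in broad strokes as follows. This is an editorial illustration only; the XML layout, URL and payload are assumptions and not AvED's real interface.

```python
# Hedged sketch of the deployment idea behind a tool like AvED: read the
# environment description from a predefined XML file and trigger deployment
# through a REST API.
import xml.etree.ElementTree as ET
import requests

def load_environment(xml_path: str) -> dict:
    root = ET.parse(xml_path).getroot()
    return {
        "name": root.attrib["name"],
        "nodes": [
            {"role": n.attrib["role"], "flavor": n.attrib["flavor"]}
            for n in root.findall("node")
        ],
    }

def deploy(env: dict, api_url: str) -> str:
    resp = requests.post(f"{api_url}/deployments", json=env, timeout=30)
    resp.raise_for_status()
    return resp.json()["deployment_id"]

if __name__ == "__main__":
    env = load_environment("predefined_env.xml")
    print("deployment started:", deploy(env, "http://aved.example.local/api"))
```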

To test the performance of different possible cloud variants, we created a Python-based measurement system to establish the performance baseline on a reference configuration, and developed a Python, PowerCLI and Bash based performance predictor tool which utilizes IPSL and Gatling. The results of the performance prediction can be further refined and expanded by using the Clover (Cloud Verification) tool. With the predictor we can already estimate VNF performance on a never-before-tested infrastructure without having to actually deploy the VNF itself. 

In order to automate test result analysis, we are developing a SUT behaviour and data discrepancy recognition framework along with selective log analytics. 

With our renewed test cases and increased automation, we significantly decreased the performance verification turnaround time from months to weeks or days, and we are able to provide load/mass traffic testing feedback to development even within the desired two-hour CI cycle. With the reduction of testing cycle times we are able to introduce new types of tests into our delivery process, such as new ways of chaos and robustness testing with Pumba or Faulty Cat, which ultimately leads to higher coverage and quality, benefiting all stakeholders throughout our VNF delivery. 

In my presentation, I will provide a general overview of how we redesigned our test cases to make them suitable for automation, and will introduce our dynamic deployment framework and its connections to the SCM/CI pipeline and the automated test execution framework. 

I will demonstrate the key benefits of simplification in automation, by which we were able to create a modular and generic system that can be applied company-wide or even outside Nokia. 

 

Stefan Dorsch and Andreas Ulrich. Embracing Non-Determinism in Testing

Abstract: Software engineers shy away, for good reasons, from non-deterministic behaviour in the software systems they develop. They fear that non-determinism causes unintended and unknown system states from which the system cannot recover and continue performing its anticipated task. Similar arguments are given by software testers. Their fear is that test runs cannot be reproduced, and that determining the reason for a discovered fault becomes a highly costly, if not unmanageable, task. While these arguments cannot be ignored, there is merit in non-determinism in testing. The major benefit is that non-deterministic tests enable the detection of system faults that are hard to discover using conventional means of test design, such as tests derived from user stories or testing based on code-coverage criteria.
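As an editorial aside on the reproducibility concern mentioned above (not part of the authors' material), one common way to keep non-deterministic tests replayable is to log the random seed and allow it to be fixed. The function and environment variable names below are assumptions for the sketch.

```python
# Hedged sketch: the test draws random inputs, but the seed is logged (and can
# be pinned via an environment variable) so a failing run can be replayed.
import os
import random

def run_random_scenario(system_call):
    seed = int(os.environ.get("TEST_SEED", random.randrange(2**32)))
    print(f"running with TEST_SEED={seed}")   # re-export this value to reproduce
    rng = random.Random(seed)

    # Non-deterministic stimulus: random order and random payload sizes.
    operations = ["create", "update", "delete", "query"]
    rng.shuffle(operations)
    for op in operations:
        system_call(op, payload_size=rng.randint(1, 4096))

if __name__ == "__main__":
    run_random_scenario(lambda op, payload_size: print(op, payload_size))
```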

 

Istvan Turai, Tamas Cser and Ray Grieselhuber. How Testing Automation is the Perfect Domain to Apply Machine Learning

Abstract: Software testing today has been outpaced by modern agile development methodologies. The tools and techniques, while evolving, still rely on manual operation or hand-coded automation, and these solutions result in software projects that lack intelligence. This leads to an enormous bottleneck in the software development process, as new features and updates are frequently delayed by the amount of time it takes to test them. To be sure, many positive advances in the domain of behaviour-driven development have contributed to better automation solutions. Testing at the functional layer remains a problem. Advances in browser automation technology, especially open-source tools such as Selenium, have helped greatly. Many problems, however, persist. 

Fortunately, we are at a time when AI and machine-learning technologies are viable for small teams. We have spent the last 2.5 years adapting these technologies to the problem of automated testing. This experience has confirmed our hypothesis that the future of automated testing lies in intelligent systems that learn from the websites and applications to which they are applied. 

In this session, we will share what we have learned, the specific challenges we encountered, and how we solved them on a technical level.

 

Florian Spiteller. Automated Testing for Autonomous Cars? Challenges, Solutions, Opportunities

Abstract: Automated testing, mostly using the Hardware-in-the-Loop approach, has been the industry standard for helping ensure the correct behaviour of safety-relevant software in automotive ECUs. These ECUs are commonly connected to various systems, sensors and actuators inside the car. New developments in the field of Vehicle-2-X communication and autonomous driving raise the need to also test systems that are wirelessly connected to the environment (infrastructure, other vehicles) surrounding the car. 

The talk will show that this cannot be done by simply reusing the proven test methods, and will point out the challenges, e.g. time synchronization of distributed but connected events. This is done with the help of a realistic example: a connected car (first time base) approaches a junction equipped with intelligent infrastructure (second time base), and the interaction is verified with external test equipment (third time base). As these new technologies are about to enter the market soon, the industry is, also based on the author's own industry experience, in urgent need of new and adapted test methods. 
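To make the three-time-base challenge more concrete, an editorial sketch of one possible alignment step is given below: each source's clock offset to a common reference is estimated once and applied before events are merged. The offsets, event names and data layout are assumptions, not the author's method.

```python
# Hedged sketch: aligning events recorded against different time bases
# (car, roadside infrastructure, external test equipment) to one reference.
from dataclasses import dataclass

@dataclass
class Event:
    source: str        # "car", "infrastructure" or "tester"
    local_time: float  # seconds on the source's own clock
    name: str

# Offsets of each local clock relative to the common reference time base,
# e.g. estimated from a round-trip measurement at test start (illustrative).
OFFSETS = {"car": +0.120, "infrastructure": -0.045, "tester": 0.0}

def to_reference(event: Event) -> float:
    return event.local_time - OFFSETS[event.source]

def merged_timeline(events: list[Event]) -> list[Event]:
    return sorted(events, key=to_reference)

if __name__ == "__main__":
    events = [
        Event("infrastructure", 10.000, "signal_phase_broadcast"),
        Event("car", 10.200, "brake_request"),
        Event("tester", 10.110, "message_observed"),
    ]
    for e in merged_timeline(events):
        print(f"{to_reference(e):8.3f}  {e.source:15s} {e.name}")
```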

The main part of the talk is focused on presenting an approach to testing connected systems using existing toolchains in a real-world scenario. With the help of a self-developed, deterministic wireless bridge, the fault injection method can also be utilized with connected systems, allowing the test to be automated. To better explain this test method, the above-mentioned example is extended to demonstrate an automated test of an autonomous car. This also shows the high relevance of the described test method, as in the near future more and more systems for connected and autonomous cars will have to be tested. With the help of the described scenario, the automated test could be used not only for testing itself, but also for machine learning purposes.

The author will finally explain the advantages, disadvantages and challenges of the presented concept. To help adapt the approach to other industry fields, the main development steps will be outlined. The talk concludes with a summary pointing out the key takeaways and lessons learnt. 

 

Pekka Aho and Matias Suarez. Automated regression analysis through graphical user interface

Abstract: A large part of software testing through the graphical user interface (GUI) is still performed manually. Although capture and replay (C&R) tools are commonly used to automate the most critical test cases, maintaining a large test suite with C&R tools usually requires too much manual effort, as the GUI changes often during development. Model-based GUI testing (MBGT) has been proposed to reduce the maintenance effort, but most MBGT tools require specific modelling expertise and significant effort in creating the models. 

We would like to introduce the concept of automated regression analysis through the GUI and present our experiences from using it for more than a year to automate regression analysis and testing during the development of commercial end-user GUI software products. The concept of automated regression analysis through the GUI is quite new, and the authors are the only ones with academic publications on the topic. 

The idea of automated regression analysis through the GUI is to use dynamic analysis to automatically extract a new behavioural model of each version of the GUI and to find the changes in the GUI automatically by comparing the extracted models of consecutive versions. The method has a very high level of automation; the remaining manual work consists of going through the detected changes and deciding whether each change was intentional or a regression bug. 
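The comparison step can be pictured with the following editorial sketch, which reduces each extracted model to a set of transitions and reports the differences between two versions. The model representation is an assumption for illustration, not the representation used by Murphy.

```python
# Hedged sketch of the comparison step: each GUI version's extracted model is
# reduced to (state, action, next state) triples; the differences between two
# versions are the candidate regressions a tester reviews.

def transitions(model: dict) -> set[tuple[str, str, str]]:
    """Flatten a model {state: {action: next_state}} into comparable triples."""
    return {
        (state, action, nxt)
        for state, actions in model.items()
        for action, nxt in actions.items()
    }

def compare(old: dict, new: dict) -> dict:
    old_t, new_t = transitions(old), transitions(new)
    return {
        "removed": sorted(old_t - new_t),   # behaviour that disappeared
        "added": sorted(new_t - old_t),     # new behaviour to confirm as intended
    }

if __name__ == "__main__":
    v1 = {"Main": {"Settings": "SettingsDialog"}, "SettingsDialog": {"OK": "Main"}}
    v2 = {"Main": {"Settings": "SettingsDialog"}, "SettingsDialog": {"Apply": "Main"}}
    print(compare(v1, v2))
```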

Murphy is an open source tool for automated regression analysis and testing through GUI. Originally Murphy was developed as an internal tool for F-Secure Ltd but it has been open sourced. 

The experiences from F-Secure Ltd have been very positive. The continuous integration tool Jenkins triggered the Murphy tool to extract a new model of the latest version of the GUI application three times a day, compare the new model with the previous version, and report the changes as web links attached to an email. Although a large part of the changes were intentional or "false positives", it did not require too much work to visually inspect the results, graphically presented as screenshots, three times a day. Compared to the earlier automated test scripts, the maintenance work in particular was significantly reduced. 

 

Ish Kumar and Stephan Schulz. Industrial deployment of MBT-based test automation in a large enterprise IT program

Abstract: Many people talk about MBT in an industrial setting, and possibly report on piloting it. We have now been running an MBT-based solution, which encompasses tools from multiple vendors, for multiple years in a large enterprise IT testing program. As part of our next steps we are extending our approach to also include and integrate behaviour-driven development techniques used in our DevOps environments. 

The work presented in this presentation demonstrates full automation of the testing process in a real-world industrial setting. The solution presented completely automates test design and test execution, as well as the interface towards requirements and test management. The MBT-based solution also helps identify the impact of requirement changes on testing scenarios. 

The solution has been applied successfully for testing more than 20 mission-critical applications and more than 40 end-to-end services within a large industrial enterprise IT program in the telecommunication sector. The goal of the presentation is to share how this success was achieved and how naysayers within the organization were won over, using this test service provider as an example.

 

Karl Ambrus. Modelling of Complex Distributed Test Scenarios

Abstract: Aircraft are long-living products (40 years) with many upgrade programs and enhancements during their lifetime. To handle this, it is essential to guarantee continuity of the test environment and to cover the growth in functionality, complexity and variants of the aircraft systems.

For the integration environment, the main focus is the enhancement of the toolset and methodology for system integration by extending the capabilities with new methods such as test case generation, parallel virtual testing and highly automated regression testing. 

The classic way of testing is based on software testing, subsystem testing and system testing, each using a different toolset and different test procedures. Testing of virtual equipment using simulation environments and system integration on test benches are completely independent worlds. This causes a lot of effort in rewriting test procedures when switching between real and virtual integration tools. 

The investigations done at AD&S show that virtualization of the test environment and distribution of test services, combined with automatic test case generation, is a promising starting point towards highly parallel testing of complex test scenarios using distributed virtual test environments. The goal is portability of test procedures between compatible real and virtual integration platforms. 

The presentation shows a method for an automated test process for testing a complex distributed environment, starting with a test model implementing test variant management, followed by test case and test script generation, and finally test execution.

Similar challenges for shorter, faster and better testing strategies and test coverage also show up in a lot of different industrial sectors like cars, trains and other complex systems. 

Full automation of the system integration process, based on modelling of test scenarios and use cases, automated test case generation, virtualization of the test process, and distribution of the test execution to test services on distributed virtual test platforms, is the key enabler for handling increasingly complex test scenarios and system variants. 

Jürgen Großmann and Dorian Knoblauch. Fuzz Testing ITS

Abstract: Intelligent transport systems (ITS) can make an important and innovative contribution to efficient, cleaner and safer mobility. However, safety and efficiency are directly related to the quality and proper functioning of the underlying technical infrastructure. European standardization organizations like ETSI have taken major steps to support the take-up of ITS through standardization and testing activities. While ETSI currently provides support for conformance and interoperability testing, a systematic robustness and security testing approach is missing. To overcome this issue, we have integrated our Fuzz Testing Library Fuzzino into the ETSI ITS Conformance Test Suite provided by Spirent. 

Fuzz testing is a well-known, effective and widely accepted approach to identify and locate robustness and security related weaknesses and vulnerabilities in software-based systems. Fuzz testing is about systematically injecting invalid or unexpected input data into a system under test. That way, security-relevant vulnerabilities may be detected when the system under test processes such data instead of rejecting it. Fuzzino (https://github.com/fraunhoferfokus/Fuzzino) is a library that supports the generation of test data for fuzz testing. It provides a set of data generation heuristics that target known weaknesses (e.g. integer or buffer overflows) and allows for finding new weaknesses by randomly modifying test data. 
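In the spirit of the mutational, heuristic-driven generation described above, an editorial sketch is given below. It is not Fuzzino's API, and the PDU fields and heuristic value lists are assumptions for illustration only.

```python
# Hedged sketch of mutational fuzzing: a valid PDU template is taken as a seed
# and individual fields are replaced by heuristic "interesting" values
# (boundary integers, oversized strings) chosen at random.
import random

SEED_PDU = {"stationId": 1234, "speed": 50, "stationType": "passengerCar"}

INTEGER_HEURISTICS = [0, -1, 2**15, 2**16 - 1, 2**31 - 1, -2**31]
STRING_HEURISTICS = ["", "A" * 65536, "%s%s%s", "\x00\xff"]

def fuzz(seed: dict, rng: random.Random) -> dict:
    mutated = dict(seed)
    field = rng.choice(list(mutated))
    value = mutated[field]
    if isinstance(value, int):
        mutated[field] = rng.choice(INTEGER_HEURISTICS)
    else:
        mutated[field] = rng.choice(STRING_HEURISTICS)
    return mutated

if __name__ == "__main__":
    rng = random.Random(42)
    for _ in range(5):
        print(fuzz(SEED_PDU, rng))   # each mutated PDU is then sent to the SUT
```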

Building our approach on top of the existing ITS Conformance Test Suite provides us with the required data type definitions, adapters and codecs, and thus allows us to apply fuzz tests on all layers of the ETSI ITS communication stack. Test data generation is done on the basis of the existing ASN.1 specifications included in the ETSI standards and the ITS Conformance Test Suite. The PDU data templates that are used for the conformance tests are taken as seeds for fuzz data generation. Thanks to a previous model extraction, the fuzz data generator is fully aware of the specifications and type boundaries of each designated field of the PDU. This information is used during the mutational fuzzing process. For test specification and execution, we made use of Spirent's TTworkbench and thus rely on an industry-grade, standardized and highly flexible TTCN-3 based test automation system. Our fuzz testing approach covers communication scenarios at all layers of the ITS architecture and aims to systematically generate communication data in such a way that it is still accepted by the SUT but stresses and violates the boundaries of the specification. 

We have successfully evaluated our approach with different ITS devices from different vendors. We have been able to find vulnerabilities that crash the communication stack on at least one of the devices, and we applied a number of optimization strategies to minimize the amount of generated test data and thus the number of test cases. In our presentation we will outline our fuzz testing approach, describe how we have integrated it into the ETSI ITS Conformance Test Suite, and provide an overview of different fuzz testing strategies and their implications with respect to test suite size, test execution time and their ability to find vulnerabilities. Finally, we will provide an outlook on how our ITS Robustness and Security Test Suite can be used as part of industry-grade security assessment processes and during ETSI events like the ETSI ITS Plugtests series. 

 

Wojciech Tanski and Tomasz Lewczyk. Testing untestable – best practices in developing testable application.

Abstract: The presentation is based on a real case from a project where testability of the product was not a top priority for developers a couple of years ago. Now, with the growing complexity of the application, manual testing is no longer enough to ensure good quality. Forgetting about testability years ago has resulted in many issues during test automation now and has increased its cost significantly. In addition to the higher cost of test automation, it also causes lower stability, which could have been avoided if certain rules had been followed. Regarding novelty: you can easily find articles on the internet about creating software that is testable from a unit-test perspective, but so far little is said about what makes an application testable from an acceptance-test perspective. We will show some lessons learned about removing obstacles through workarounds in the test automation area, where the application was written in a way that made it hard to write automated acceptance tests. The presentation will be based mainly on testing web applications using Selenium, with some examples on desktop applications, and on an OCR-based application framework, SikuliX, as a "last chance" tool that supports both desktop and web application testing. Every attendee will be able to apply these best practices in his or her projects and organization. Any organization can benefit from our approach, as it is a universal method for showing how to properly create software so that it is more efficient for test automation. The key takeaway is knowing how testers and developers can collaborate better to achieve high-quality software faster and cheaper. Last but not least, our motto: "not taking the testability of the product into consideration does not mean it cannot be tested in an automatic way, but the cost will be high, so this topic should not be neglected from the very beginning of the project".

 

Patrick Harms and Jens Grabowski. Experiences with Automated Field Usability Testing Using Generated Task Models

Abstract: Web portals are the key communication channels for most businesses today. They can range from simple representation of a company, via online shops, to integrated platforms as a service. The latter ones are usually hosted by the company for its customers to provide certain functionality, such as an issue tracking system. As business changes daily, such web portals need to adapt flexibly. This ranges from smaller changes of a certain aspect up to a full relaunch of a website.

Given this required dynamicity, website managers seldom have sufficient time for applying usability engineering methodologies such as user testing or expert-oriented evaluation. Instead, changes are made and rolled out as fast as possible and directly to users. If a change causes usability problems, these may not show up directly. Instead, only in the long run may they manifest as decreased conversion rates, disappointed users, or more work for the help desk. In such situations, it is hard or even impossible to determine which of the previous changes may have caused the issues.

In our work, we developed a methodology for model-based, automated usability engineering of software including websites and desktop applications. Herein, we record the actions that users perform on the level of mouse clicks, text entries, and even individual key presses. From the recorded data, we generate a model representing the typical tasks users perform with the software. In addition to our analyses, these models may serve as input for usage-based test case generation. Afterwards, we analyse the recorded data and the model for known usability smells. These are patterns of user behaviour that indicate a potential usability issue. The smells have a direct reference to the involved parts and elements of a website, a description of the potential usability issue, and a proposal for its removal. We validated our approach in three case studies and showed that it is capable of providing helpful results.
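As an editorial illustration of the analysis idea (not the authors' tooling), recorded actions can be folded into a simple frequency model of action sequences, and a pattern that often hints at a usability problem can be flagged as a "smell". The events and the smell rule below are assumptions for the sketch.

```python
# Hedged sketch: build a bigram task model from recorded actions and flag a
# simple usability smell (repeated immediate re-entry of the same text field).
from collections import Counter

recorded_actions = [
    ("click", "search_button"),
    ("text_entry", "zip_code"),
    ("text_entry", "zip_code"),   # user had to re-enter the same field
    ("text_entry", "zip_code"),
    ("click", "submit"),
]

# Task model as bigram frequencies: which action typically follows which.
bigrams = Counter(zip(recorded_actions, recorded_actions[1:]))

def repeated_entry_smell(actions, threshold=2):
    """Flag fields that users re-enter many times in a row."""
    streak, smells = 1, []
    for prev, cur in zip(actions, actions[1:]):
        streak = streak + 1 if prev == cur and cur[0] == "text_entry" else 1
        if streak > threshold:
            smells.append(f"possible input problem on '{cur[1]}'")
    return smells

print(bigrams.most_common(3))
print(repeated_entry_smell(recorded_actions))
```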

In the presentation, we plan to briefly describe our approach and show some example results. In addition, we will describe the intended usage of our approach for a company's web portal so that a continuous measurement and assessment of the portal's usability is performed. Depending on the average number of users per day, first representative analysis results can be available in the short term in each iteration cycle of the website. In addition, we will present our work in progress focussing on a cloud platform-as-a-service solution. This platform allows users of our approach and our tooling to use a preinstalled and preconfigured infrastructure to perform analyses with just one click. We will also show how easily a recording of a website can be configured using our tooling with state-of-the-art content management systems. We hope to get into fruitful discussions with potential users of our approach, resulting in valuable feedback.

 

Martin Gijsen. Forget silver bullets and be context-driven

Abstract: While situations may look the same, they rarely are. So when considering how to approach test automation, it is not useful to look for a silver-bullet solution, or even to accept one when offered. An approach that will result in long-lasting test automation success depends on the context, the situation. The questions to ask relate to the PuPPET areas (People, Processes, Policies, Environment and Technologies). 

Real life examples from personal experience will show how the answers and the resulting test automation approach can (and normally should!) differ from one project to the next.

 

Gaspar Nagy. Property Based BDD Examples

Abstract: BDD (Behavior-driven development) is a software development process focusing on active collaboration, which illustrates and automates the requirements using key examples of the problem domain. In BDD the formalized examples use a natural language-based DSL driven by the Given/When/Then keywords. 

At the same time, property-based testing (PBT) uses abstract (mathematical) formulas to declare expectations for the output values given some constraints on the input. The PBT tools try to disprove that the application fulfils these requirements by taking samples from the valid input value space. 

Experience shows that to understand and properly implement the requirements, the team has to understand them as a set of abstract rules, and collecting key examples can help a lot with this. 

BDD is strong at managing and automating these key examples, and PBT is strong at defining and automating the rules, so the question naturally arises: can these two methods somehow support each other? Can we increase the value of BDD by formalizing and testing the rules? Can we increase the value of PBT by better communicating the rules and constraints we have? 

This session shows an experiment in combining BDD and PBT. It provides examples of how the specification could look in this combined world and what it could be used for. And there is a working, open-source prototype too! 
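As an editorial sketch of what such a combination can feel like (not the speaker's prototype), the Given/When/Then structure of a BDD scenario can be kept while the "Given" values are drawn by a property-based testing tool such as Hypothesis. The discount rule below is an assumed example.

```python
# Hedged sketch: a BDD-style scenario whose inputs are generated by PBT.
from hypothesis import given, strategies as st

def discounted_price(price: float, loyalty_years: int) -> float:
    rate = min(0.05 * loyalty_years, 0.30)   # rule: 5% per year, capped at 30%
    return price * (1 - rate)

# Scenario: loyal customers get a discount
#   Given a price and a number of loyalty years        (generated, not fixed)
#   When the discounted price is calculated
#   Then it never exceeds the original price and never drops below 70% of it
@given(
    price=st.floats(min_value=0, max_value=10_000, allow_nan=False),
    loyalty_years=st.integers(min_value=0, max_value=50),
)
def test_discount_rule(price, loyalty_years):
    result = discounted_price(price, loyalty_years)
    assert result <= price
    assert result >= price * 0.70

if __name__ == "__main__":
    test_discount_rule()   # Hypothesis runs many generated examples
```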

 

Abbas Ahmad, Elizabeta Fourneret, Bruno Legeard, Franck Le Gall, Naum Spaseski, Elemer Lelik and György Réthy. MBT to TTCN-3 tool chain: The oneM2M experience

Abstract: The Internet of Things (IoT) has increased its footprint, becoming globally a 'must have' for today's most innovative companies. Applications extend to a multitude of domains, such as smart cities, healthcare, logistics, manufacturing, etc. The Gartner Group estimates an increase to up to 21 billion connected things by 2020. To manage the heterogeneity of the things and the data streams over large-scale and secured deployments, IoT and data platforms are becoming a central part of IoT systems. To respond to this fast-growing demand, we see more and more platforms being developed, requiring systematic security and functional testing. Our solution proposes a model-based testing approach to generate and execute TTCN-3 code.

 

Teemu Kanstrén, Jussi Liikka and Jukka Mäkelä. Testing IoT services and devices in a 5G test network

Abstract: At UCAAT 2015 we presented the overall architecture of our 5G test network and the testing technologies deployed on top of it. In this presentation we describe the first concrete test cases executed in this environment, the new techniques we developed to make them possible, and the lessons learned from them. We also briefly address some of the questions from UCAAT 2015 participants that were left unanswered, such as the scope of our virtualized EPC (evolved packet core) and how we use different (virtualized) monitoring tools to capture overall monitoring data at the global network level.

 

Ting Miao, Nak-Myoung Sung, Jaeyoung Hwang, Naum Spaseski, György Réthy, Elemer Lelik, Jaeseung Song and Jaeho Kim. Development of an Open-Sourced Conformance Testing Tool – oneM2M IoT Tester

Abstract: IoT server platforms must support communication with diverse devices using different types of protocols and undertake a huge amount of data storing, processing and retrieval in real time, which in turn exposes new challenges for testing IoT server platforms in terms of multi-protocol support as well as testing complexity, speed, efficiency and cost. In addition, when conformance testing of IoT platforms is taken into consideration, it would take a huge amount of work and a long time if all the required functionalities of a standard were tested manually. To overcome these testing challenges, we propose to apply the TTCN-3 test language to enable test automation, to reuse the open-source Eclipse Titan environment, and to design a system adapter associated with a codec for the HTTP protocol to enable communication between the test system and the tested IoT platform. In practice, we extended Eclipse Titan by implementing a system adapter and codec for the oneM2M HTTP binding protocol, and the result is a conformance testing tool named oneM2MTester. The performance of the oneM2MTester was evaluated at the 2nd oneM2M Interoperability Event held in May 2016.

 

Ksenia Vecherinina and Martti Käärik. Web test automation using TTCN-3 and MBT framework

Abstract: Automated testing of web sites is by no means an unknown topic, and many languages have been made available for that purpose. However, the simplicity of the most commonly used testing languages, such as Cucumber and Robot, quickly becomes a hindrance when you need to work with large data structures. This is where TTCN-3 comes in. 

This presentation introduces a solution that enables the use of TTCN-3 language and tools for testing web applications. The implementation has been in use for several years in the industry and a case study is presented together with the technical solution. 

Based on the experience, the presentation points out the advantages of using TTCN-3 and MBT in the testing and development process. It also mentions the aspects of web testing that still need improvement.

 

Emmanuel Gaudin and Mihal Brumbulli. Test cases to find the best architecture in terms of performance

Abstract: When performance requirements are defined, they are always related to a set of scenarios that serve as a basis for analysis. The scenarios are derived from the high-level requirements of the system and are, in fact, validation test cases. That means validation test cases are useful for helping to define the architecture of a system before development has started. 

The presentation will show how to estimate the best architecture based on an abstract executable model containing the performance estimations, a set of possible architectures, and a set of typical test cases.

 

Abderrazek Boufahja, Eric Poiseau and Alain Ribault. Model Based Testing and Coverage of XML Requirements

Abstract: The use of XML-based standards is growing nowadays, especially with the adoption of RESTful and SOAP communication layers. The conformity of messages is gaining in criticality, especially for complex transactions and critical domains (healthcare exchange, ISOBUS XML transactions, security, etc.). The interoperability between systems highly depends on the conformity of the received messages to the standards. The validation of XML documents against standards specifications is a complex task, especially for complex XML structures. Some validation tools exist, such as Schematron validation; however, based on IHE Europe's experience with these tools, they have many drawbacks: they are complex to use and implement, difficult to maintain, and slow to execute. Within the IHE Europe and Kereval health lab, we have been developing for several years a new model-based methodology to validate XML requirements: Gazelle ObjectsChecker. This methodology is a combination of multiple technologies from MBT and requirements analysis: OCL, UML, DresdenOCL, Topcased, TAML, and Acceleo template technology. The aim of this methodology is to simplify the checking of XML requirements, to facilitate the maintenance of the created tools, and to deal with the weaknesses of Schematron. 

The inputs for Gazelle ObjectsChecker are UML class models defined using specific stereotypes and containing OCL constraints. These constraints describe the requirements coming from XML-based standards. The output of the tool is a set of generated Java classes that allow XML documents to be validated easily. We also generate documentation and a test suite for all the constraints. 
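To illustrate the underlying idea in miniature (an editorial sketch only: in Gazelle ObjectsChecker the constraints are OCL on a UML model and the checkers are generated Java classes), each requirement can be seen as an executable rule over the XML document; the rule identifiers and document structure below are assumptions.

```python
# Hedged sketch: each requirement becomes an executable constraint over the
# XML document, and validation reports which requirements are violated.
import xml.etree.ElementTree as ET

RULES = [
    ("REQ-001", "document root must carry an 'id' attribute",
     lambda root: "id" in root.attrib),
    ("REQ-002", "at least one <author> element is required",
     lambda root: root.find("author") is not None),
    ("REQ-003", "every <section> must have a non-empty <title>",
     lambda root: all(
         (s.findtext("title") or "").strip() for s in root.iter("section"))),
]

def validate(xml_text: str) -> list[str]:
    root = ET.fromstring(xml_text)
    return [f"{req}: {desc}" for req, desc, check in RULES if not check(root)]

if __name__ == "__main__":
    doc = "<document id='d1'><section><title/></section></document>"
    for violation in validate(doc):
        print(violation)
```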

This new methodology has now reached a high level of stability and usefulness. It has been deployed in dozens of environments and healthcare infrastructures, more than 100 validators have been created using it, and they are used in particular by IHE Europe to organize European interoperability testing sessions (what we call Connectathons) and accredited testing sessions. To date, tens of thousands of XML documents have been validated with this methodology. Its use has proved its strength regarding requirements coverage, and has also shown that the methodology is easy to maintain and faster than other validation tools.

 

Ceren Şahin Gebizli, Hasan Sozer and Ali Ozer Ercan. Successive Refinement of Models for Model-Based Testing to Increase System Test Effectiveness

Abstract: Model-based testing is used for automatically generating test cases based on models of the system under test. The effectiveness of system tests depends on the contents of these models. Therefore, we introduce a novel three-step model refinement approach. We represent system models in the form of Markov Chains. First, we update state transition probabilities in these models based on usage profile. Second, we update the resulting models based on fault likelihood that is estimated with a static analysis of the source code. Third, we update these models based on error likelihood that is estimated with dynamic analysis. We generate and execute test cases after each refinement step. We applied our approach in the context of an industrial case study for model-based testing of a Smart TV system. We observed promising results, in which new faults were revealed after each refinement.
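As an editorial illustration of the first refinement step (re-estimating transition probabilities from a usage profile), a minimal sketch is given below. The states, counts and smoothing term are assumptions, not data from the Smart TV case study.

```python
# Hedged sketch: state transition probabilities of the usage model are
# re-estimated from observed usage counts, with a small smoothing term so
# unseen transitions keep a non-zero weight.

usage_counts = {
    "Home":    {"OpenApp": 120, "Settings": 15, "Standby": 65},
    "OpenApp": {"Play": 180, "Back": 20},
}

def refine_transition_probabilities(counts: dict, smoothing: float = 1.0) -> dict:
    model = {}
    for state, successors in counts.items():
        total = sum(successors.values()) + smoothing * len(successors)
        model[state] = {
            action: (n + smoothing) / total for action, n in successors.items()
        }
    return model

if __name__ == "__main__":
    model = refine_transition_probabilities(usage_counts)
    for state, transitions in model.items():
        print(state, {a: round(p, 3) for a, p in transitions.items()})
```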

 

Lajos Cseppentő and Zoltán Micskei. Evaluating code-based test input generator tools

Abstract: In recent years several tools have been developed to automatically select relevant test inputs from the source code of the system under test. However, each of these tools has different advantages, and there is little detailed feedback available on their actual capabilities. In order to evaluate test input generators, we collected a representative set of programming language concepts that should be handled by the tools, and mapped these core concepts and challenging features, such as handling the environment or multi-threading, to 300 and 63 code snippets, respectively. These snippets serve as inputs for the tools. We created an automated framework to execute and evaluate the snippets, and performed experiments on five Java-based and one .NET-based tool using symbolic execution, search-based and random techniques. The test suites' coverage, size, generation time and mutation score were compared. The results highlight the strengths and weaknesses of each tool and approach, and identify hard code parts that are difficult to tackle for most of the tools. We hope that our research can serve as actionable feedback to tool developers and help practitioners assess the readiness of test input generation.

KEYNOTE Isabel Evans, Independent Consultant. UX? What about TX for Test Automation?
Test automation is intended to increase the speed and accuracy of information about the SUT, partly by allowing engineers to improve the speed and usefulness of their communications. The best possible interfaces and user experience for the person testing are required to support this; otherwise the use of automation will decrease rather than increase the velocity of projects. If the speed and accuracy of the test information provided to teams is lowered, with poor test reporting and inaccurate decision making, engineers and managers will become frustrated. It may even lead to disaster.
Good automation tools will help us make good decisions about the SUT and maximise the value of the limited time we have to deliver software products to market. Poor automation tools will delay decision making, increase the likelihood of errors of judgement, and frustrate both engineers and managers.
Current practices (agile, build pipelines, devops) arise from a need to address delivery speed and accuracy as well as engineering quality. But automating the tests and then forcing people to spend time inaccurately and slowly interpreting the outcomes simply is not cost effective or helpful. There are many examples of poor interfaces and tools leading, or even forcing, humans to make bad, even fatal decisions. Examples such as the London Ambulance Dispatch system (http://bit.ly/1tr2TJZ) and the EU Farm Payments online application system (http://bbc.in/1xEoUE3) show us that poor interfaces can be time-wasting, expensive and dangerous.
This matters for test automation because, although automation tools are written and serviced by engineers, the people who use the automation can be non-technical, for example user acceptance representatives, product owners, business sponsors, managers or end users. Does the information they get from the automation and provide to engineers improve in speed, accuracy and usefulness as a result of the automation, or not? What will maximise our ability to get the most from test automation? What will maximise the accuracy and usefulness of the information provided to engineers, managers and others?
There is a need for TX for Test Automation: that is, the Test Experience for those people who will request, design, and review the results of the automated tests and monitoring. This requires information design and delivery (arguably the purpose of our industry). Attention to detail "front of house" for the UX for customers and end users can be extended behind the scenes to TX for the engineers, benefiting all. TX can be improved by considering the UX of the automation tool and the tests, so that methods and lessons from User Experience Design (UXD) and User Experience Testing (UXT) may be applied to test automation. 
Isabel will consider human factors (as the engineers are human too) as well as the support of improved decision making around quality, speed and accuracy of responses to issues.
Three key points:
-          Test automation requires consideration of the UX for the tool and the tests;
-          People who use automation might not always be technical but they are always human;
-          UXD and UXT for test automation supports improved decision making and quality.
 
TUTORIAL Elizabeta Fourneret and Bruno Legeard. Security Testing using Models and Test Patterns
Abstract: The tutorial will present an approach and best practices for model-based security testing that have reached maturity during the last six years. The approach is based on test patterns and has been integrated with an industry-strength model-based testing (MBT) tool chain. 
The tutorial will further present examples on the usage of models and test patterns for security testing in the context of security components or middleware software, such as Hardware Security Modules (HSM) or IoT platforms, respectively. In addition, our experience has resulted in a positive impact on test automation and fault detection effectiveness. Thus, we will further discuss the lessons learnt and the key messages on applying MBT using test patterns in the industry.

 

TUTORIAL Philip Makedonski, Andreas Ulrich, Martti Käärik, Gusztáv Adamis, Finn Kristoffersen and Xavier Zeitoun. Testing and domain-specific modelling with TDL

Abstract: The tutorial provides an introduction to the ETSI test description language TDL. TDL fills the gap between high-level test purpose descriptions, which are often given in a natural language, and executable, concrete test cases. While concrete test cases realize the behaviour of the tester for a given test purpose, TDL provides the user with a language to specify scenarios of interactions between tester and SUT, which detail a test purpose description sufficiently to enable partially automated translation to concrete test cases.

 

TUTORIAL Isabel Evans. Human factors for test automation and industrialisation

Abstract: Although this conference is about automation, people are at the heart of what is to be achieved by that push towards industrialisation and tooling. People in teams are making the change from manual to automated testing, and therefore factors of attitude to change, teamwork, motivation and communication are going to be very important. If automation projects are to succeed, we also need to consider the human factors required for success. Faced with industrialisation, people exhibit fear, disbelief and denial. Evidence from other disciplines shows us two types of issue arising from human interaction with industrialisation: reluctance to lose skills, and over-reliance on the automation. Drawing on practical experience and research, this tutorial provides a much wider view of the human aspects of automation than is usual for the industry, and novel combinations of ideas such as TX for the UXD of automation, and the adoption of studies from non-technical disciplines to help us understand how humans and automation interact.
Delegates will have an opportunity to identify and discuss human-factor problems around the implementation of industrialised automation and their potential solutions, as well as a number of practical ways to address teamwork and human problems in projects.

The methods presented are applicable to people in all forms of endeavour where change and specifically a move to automation/industrialisation is intended.

Key points:

1. Implementing automation and industrialization involves human factors of teamwork and beyond teamwork;

2. Other disciplines and industries have lessons we should apply to the industrialisation of our own industry;

3. Models exist to help us understand how to work with rather than against people in and affected by our projects.

 

TUTORIAL Julian Harty. Using Mobile Analytics to Improve Testing and Development Practice

Abstract: Current practices for testing mobile apps are limited and flawed. Recent research indicates that mobile analytics can augment and enhance both human-oriented and automated testing of mobile apps. This tutorial will help researchers learn about the current state of the art and the state of practice in industry, and help practitioners discover practical ways to augment, adapt and refine their testing of mobile apps to make their work more fulfilling, with less waste, and ultimately to deliver improved releases of the apps they are responsible for.