LANGETI'19: Languages and Tools for Next Generation Testing (Canceled)
Every year, new ideas arise in programming and software development that enable new capabilities for software systems. However, other aspects of software development are often overlooked. With this in mind, the Languages and Tools for Next Generation Testing (LANGETI) workshop intends to serve as an open discussion platform for practitioners and researchers to share their ideas and the results of their work on automated testing for next-generation and out-of-this-world applications. In particular, we encourage contributions on programming languages and tools tailored to new technologies or new application domains.
With the LANGETI’19 workshop we want to move away from the current model of workshops as venues for passively attending presentations, toward more dynamic and involved working sessions. We therefore divide the workshop into two parts. First, the workshop will provide a space to share high-quality research ideas on testing languages and tools for next-generation platforms and systems. Second, we will hold hands-on sessions in which authors (or other attendees submitting a tool extended abstract) offer a practical overview of their tool. These sessions will be structured as short tutorials, where attendees will have the opportunity to download, use, and test the tool on their own machines by following a guided example.
Call for Papers
The workshop invites submissions of new or visionary work on any of the topics below individually, or work focusing on cross-fertilization between several of them. An explicit goal of the workshop is to confront different approaches to automated testing, and to understand how they can learn from each other or how they need to be combined to provide a more comprehensive solution. Therefore, contributions should ideally include, in addition to a detailed discussion of the proposed approach (for example, by defining the models, mechanisms, or testing strategies underlying it), a discussion of how the proposed approach to software testing could influence or be influenced by other approaches and perspectives.
We are also interested in more tool-oriented submissions, where participants would be able to present and demo their testing solutions. The topics of interest of the workshop include, but are not limited to:
- Domain specific automated testing: mobile, web, micro-services, IoT, context-driven, etc.
- Chaos engineering
- Metamorphic testing
- Automated testing of quality attributes (performance, security, usability, etc.)
- Game/Play testing
- Mutation testing
- Model-based testing
- Artificial Intelligence for Software Testing
- Empirical studies
- Experience/Industry reports
Papers must be written in English, submitted as PDF documents, and follow the new ACM Conference Format (10-point font, Times New Roman font family, numeric citation style). Papers may be up to 6 pages long. Submissions will be managed through the EasyChair submission system.
Publication
Papers will undergo a peer-review process. Accepted papers will be published as pre-proceedings in the ACM Digital Library.
Review
Submissions will undergo standard peer review by at least three members of the program committee. Shepherding may be offered after notification.