The following workshops will be held in conjunction with ICPE 2017 (alphabetically listed):
- ACPROSS: 1st International Workshop on Autonomous Control for Performance and Reliability Trade-offs in Internet of Services
- ENERGY-SIM: 3rd International Workshop on Energy-aware Simulation
- LTB: 6th International Workshop on Load Testing and Benchmarking of Software Systems
- MoLS: 1st International Workshop on Monitoring in Large-Scale Software Systems
- PABS: 3rd International Workshop on Performance Analysis of Big Data Systems
- QUDOS: 3rd International Workshop on Quality-aware DevOps
- WEPPE: 1st Workshop on Education and Practice of Performance Engineering
- WOSP-C: 3rd Workshop on Challenges in Performance Methods for Software Development
The paradigm shift from an information-oriented Internet into an Internet of Services (IoS) has opened up virtually unbounded possibilities for creating and deploying new services. In this reshaped ICT landscape, critical software systems, which include
IoT, cloud services, softwarized networks, etc., essentially consist of large-scale service chains, combining and integrating the functionalities of (possibly huge) numbers of other services offered by third parties.
In this context, one of the most challenging problems is the provision of service chains which offer an adequate level of performance, reliability and quality perceived by the end users, based on a large offering of multiple functionally equivalent
services that not only differ in terms of performance and reliability characteristics, but may also exhibit churn and time-varying behavior. Motivated by this, the aim of this workshop is to build on a European network of experts,
from both academia and industry, to develop autonomous control methods and algorithms for achieving a high-quality, well-performing and reliable IoS. Keywords of the workshop include: autonomous control, performance, Quality of
Experience, Quality of Service, reliability, elasticity, softwarized networks and pricing.
The workshop aims to share new findings, exchange ideas, discuss research challenges and report on the latest research efforts covering a variety of topics including, but not limited to:
- Autonomous control in Internet of Things, distributed clouds, multi-clouds and softwarized networks
- Cross-layer Quality of Experience (QoE) and Quality of Service (QoS) management
- Methods and models for resilient Internet of Services
- Modeling and performance tools for the Internet of Services
- QoS and QoE modeling and monitoring for the Internet of Services
- User-centric, context-aware QoE monitoring and network management for the Internet of Services
- QoE in the context of time-aware applications, computers and communication systems of the Internet of Services
- QoS-, elasticity-, energy- and price-aware resource selection for the Internet of Services
- Scheduling, resource management, admission control for the Internet of Services
- Autonomous control, modeling, resilience, resource management, and admission control for software performance and reliability trade-offs
We call for original and unpublished papers describing research results, experience, visions or new initiatives. Submitted manuscripts should not exceed 6 pages double column, including figures, tables, references and appendices, and should be in the
standard ACM format for conference proceedings (see formatting templates for details: http://www.acm.org/publications/proceedings-template).
At least one author of each accepted paper is required to attend the workshop and present the paper. Presented papers will be included in the ICPE 2017 proceedings companion volume that will be published by ACM and included in the ACM Digital Library.
- Valeria Cardellini, University of Rome Tor Vergata, Italy
- Francesco Lo Presti, University of Rome Tor Vergata, Italy
- Peter Pocta, University of Zilina, Slovak Republic
- Hans van den Berg, University of Twente, Netherlands
- Emiliano Casalicchio, Blekinge Institute of Technology, Sweden
- Valerio Di Valerio, Sapienza University of Rome, Italy
- Tihana Galinac Grbac, University of Rijeka, Croatia
- Yoram Haddad, Jerusalem College of Technology, Israel
- Tobias Hossfeld, University of Duisburg-Essen, Germany
- Stefano Iannucci, Mississippi State University, USA
- Andreas Kassler, Karlstad University, Sweden
- Attila Kertesz, University of Szeged, Hungary
- Steven Latre, University of Antwerp, Belgium
- Pasi Lassila, Aalto University, Finland
- Philipp Leitner, University of Zurich, Switzerland
- Hugh Melvin, National University of Ireland Galway, Ireland
- Barbara Pernici, Politecnico di Milano, Italy
- Raimund Schatz, Austrian Institute of Technology, Austria
- Martin Varela, VTT Technical Research Centre of Finland, Finland
- Andrej Zgank, University of Maribor, Slovenia
- Thomas Zinner, University of Würzburg, Germany
Publicity and Web Chair:
- Matteo Nardelli, University of Rome Tor Vergata, Italy
Abstract submission deadline: January 3, 2017
Paper submission deadline: January 10, 2017
Notification of acceptance: February 8, 2017
Camera-ready submission: February 17, 2017
Workshop date: April 23, 2017
The energy impact of IT infrastructures is a significant resource issue for many organisations. The Natural Resources Defense Council estimates that US data centers alone consumed 91 billion kilowatt-hours of electrical energy in 2013 – enough to power
the households of New York twice over – and this is estimated to grow to 139 billion kilowatt-hours by 2020. Even this understates the total, as the figure excludes other countries and all other computer usage. There are calls for
reducing computer energy consumption to bring it in line with the amount of work being performed – so-called energy proportional computing. In order to achieve this we need to understand both where the energy is being consumed within a system and how
modifications to such systems will affect the functionality (such as QoS) and the energy consumption. Monitoring and changing a live system is often not a practical solution. There are cost implications in doing so, and it normally requires significant
time in order to fully ascertain the long-term trends. There is also the risk that any changes could lead to detrimental impacts, either in terms of the functionality of the system or in the energy consumed. This can lead to a situation where it is
considered too risky to perform anything other than the most minor tweaks to a system. The use of modelling and simulation provides an alternative approach to evaluating where energy is being consumed, and assessing the impact of changes to the system.
It also offers the potential for much faster turn-around and feedback, along with the ability to evaluate the impact of many different options simultaneously.
ENERGY-SIM 2017 seeks original work that is focused on addressing new research and development challenges, developing new techniques, and providing case studies, related to energy-aware simulation and modelling.
Specific topics of interest to ENERGY-SIM 2017 include, but are not limited to, the following:
- Simulation/Modelling for energy reduction
- Estimation of energy consumption
- Evaluation of techniques to reduce consumption
- Simulation/Modelling for smart-grids
- Simulation/Modelling of micro- and macro-level energy generation and supply
- Simulation/Modelling of smart-grid deployments
- Simulation/Modelling of energy in computer systems
- Data centre simulation/modelling
- Individual component simulation/modelling
- Multi-scale system simulation/modelling
- Simulation/Modelling for energy in the Internet of Things
- Simulation/Modelling of Internet of Things systems including battery operated systems
- Simulation/Modelling of energy scavenging approaches
- Performance and validation of energy-aware simulations and modelling
- Benchmarking and analytical results
- Empirical studies
- Theoretical foundations of energy-aware simulation/modelling
- Theoretical models for energy-aware simulation/modelling
- Energy-aware simulation/modelling packages and tools
- Energy-aware simulation/modelling packages under development from the community
Papers will be accepted in one of two formats:
- Short work in progress/position papers, up to 4 pages in length
- Full papers, up to 6 pages in length
Papers describing significant research contributions of a theoretical and/or practical nature are solicited. Authors are invited to submit original, high-quality papers presenting new research related to energy-aware simulation.
Papers that are accepted and presented at the workshop will be published by ACM and disseminated through the ACM Digital Library. It is intended that the best papers will be put forward for a journal special issue after the workshop.
- Matthew Forshaw, Newcastle University, UK
- Erol Gelenbe, Imperial College London, UK
- Helen Karatza, Aristotle University of Thessaloniki, Greece
- Ibad Kureshi, Durham University, UK
- Mehrgan Mostowfi, University of Northern Colorado, USA
- Omer Rana, Cardiff University, UK
- Nigel Thomas, Newcastle University, UK
- Ananta Tiwari, San Diego Supercomputer Center, USA
- Stephen McGough, Durham University, UK
Technical paper submission deadline: January 10, 2017
Technical paper notification of acceptance: February 1, 2017
Camera-ready submission: February 19, 2017
Presentation submission deadline: March 20, 2017
Presentation notification of acceptance: March 24, 2017
Workshop date: April 23, 2017
Software systems (e.g., smartphone apps, desktop applications, e-commerce systems, IoT infrastructures, big data systems, and enterprise systems) have strict requirements on software performance. Failure to meet these requirements causes customer dissatisfaction and negative news coverage. In addition to conventional functional testing, the performance of these systems must be verified through load testing or benchmarking to ensure quality service. Load testing examines the behavior of a system by simulating hundreds or thousands of users performing tasks at the same time. Benchmarking evaluates a system's performance and makes it possible to optimize system configurations or to compare the system with similar systems in the domain.
Load testing and benchmarking software systems are difficult tasks that require a deep understanding of the system under test and of customer behavior. Practitioners face many challenges, such as tooling (choosing and implementing the testing tools), environments (software and hardware setup) and time (limited time to design, test, and analyze). This one-day workshop brings together software testing researchers, practitioners and tool developers to discuss the challenges and opportunities of conducting research on load testing and benchmarking software systems.
We solicit the following two tracks of submissions: technical papers (maximum 4 pages) and presentation track for industry or experience talks (maximum 700 words extended abstract). Technical papers should follow the standard ACM SIG proceedings format and need to be submitted electronically via EasyChair at https://easychair.org/conferences/?conf=lt2017.
Extended abstracts need to be submitted as “abstract only” submissions via EasyChair. Accepted technical papers will be published in the ICPE 2017 Proceedings. Materials from the presentation track will not be published in the ICPE 2017 proceedings, but will be made available on the workshop website. Submitted papers can be research papers, position papers, case studies or experience reports addressing issues including but not limited to the following:
- Efficient and cost-effective test executions
- Rapid and scalable analysis of the measurement results
- Case studies and experience reports on load testing and benchmarking
- Load testing and benchmarking on emerging systems (e.g., adaptive/autonomic
systems, big data batch and stream processing systems, and cloud services)
- Load testing and benchmarking in the context of agile software development
- Using performance models to support load testing and benchmarking
- Building and maintaining load testing and benchmarking as a service
- Efficient test data management for load testing and benchmarking
- Bram Adams, Polytechnique Montreal, Canada
- Cor-Paul Bezemer, Queen's University, Canada
- Andreas Brunner, RETIT GmbH, Germany
- Christoph Csallner, University of Texas at Arlington, USA
- Holger Eichelberger, University of Hildesheim, Germany
- Greg Franks, Carleton University, Canada
- Vahid Garousi, Hacettepe University, Turkey
- Shadi Ghaith, IBM, Ireland
- Wilhelm Hasselbring, Kiel University, Germany
- Robert Heinrich, Karlsruher Institute of Technology, Germany
- André van Hoorn, University of Stuttgart, Germany
- Robert Horror, EMC Isilon, USA
- Pooyan Jamshidi, Imperial College London, United Kingdom
- Diwakar Krishnamurthy, University of Calgary, Canada
- Alexander Podelko, Oracle, USA
- Weiyi Shang, Concordia University, Canada
- Gerson Sunyé, University of Nantes, France
- Johannes Kroß, fortiss GmbH, Germany
- Zhen Ming (Jack) Jiang, York University, Canada
- Ahmed E. Hassan, Queen’s University, Canada
- Marin Litoiu, York University, Canada
Large-scale, heterogeneous software systems such as cyber-physical systems, cloud-based systems, or service-based systems are ubiquitous in many domains. Often, such systems are systems of systems working together to fulfill common goals resulting from
domain or customer requirements. Such systems' full behavior emerges only at runtime, when the involved systems interact with each other, with hardware, third-party systems, or legacy systems. Thus, engineers are interested in monitoring the overall
system at runtime, for instance, to verify components' correct timing or measure performance and resource consumption. Monitoring can happen continuously at runtime, to give instant feedback on behavior violations, or post hoc, on the basis of recorded
event traces and data logs.
The complexity and heterogeneity of large-scale software systems, however, complicates runtime monitoring. Properties must be checked across the boundaries of multiple constituent systems and heterogeneous, domain-specific technologies must be
instrumented. Also, different types of properties must be checked at runtime, including global invariants and exceptions, range checks of variables, temporal constraints on events, or architectural rules constraining component interactions. Finally,
systems exist in many different versions and variants owing to the continuous, independent evolution of their constituent systems. Adaptation and reconfiguration of monitoring solutions thus become essential.
This workshop aims to explore and explicate the current status and ongoing work on monitoring in large-scale software systems and the transfer of knowledge between different disciplines and domains. A particular goal of the workshop is thus to bring
together different communities working on monitoring approaches such as (application) performance and resource monitoring, requirements monitoring, and runtime verification. The workshop also aims at bringing together researchers and practitioners to
discuss practical problems vs. potential solutions.
Future directions for research will be outlined based on challenges discussed and the needs expressed by the participants. The workshop thus has the following objectives: (i) Discuss how monitoring can be supported in large-scale software systems (state
of the art). (ii) Discuss open challenges in research and practice.
We particularly encourage research papers based on industrial experience and empirical studies, as well as papers that identify and structure open challenges and research questions. We are interested in all topics related to monitoring in large-scale
software systems (systems of systems, cloud-based systems, service-oriented systems, big-data systems, business processes, cyber-physical systems, automation production/automotive systems, etc.).
This includes, but is not limited to:
- Requirements monitoring in large-scale systems
- Runtime verification of large-scale systems
- Complex event processing in large-scale systems
- Performance monitoring in large-scale systems
- Monitoring along the continuum, from the data source to fog- and cloud-based solutions
- Supporting monitoring in large-scale systems with...
- Requirements as runtime entities
- Models at runtime
- Runtime constraint checking
- Visualization techniques
- Human-computer interaction for operators
- Monitoring as a means to support evolution and/or maintenance in large-scale systems
- Monitoring in the context of DevOps/continuous deployment/integration
- Tools and frameworks for monitoring large-scale systems
- Luciano Baresi, Polimi, Milano, Italy
- Thomas Bures, Charles University Prague, Czech Republic
- Jane Cleland-Huang, University of Notre Dame, IN, USA
- Mads Dam, KTH/CSC, Stockholm, Sweden
- Alexey Fishkin, Siemens CT, Munich, Germany
- Paul Grünbacher, JKU Linz, Austria
- Andre van Hoorn, University of Stuttgart, Germany
- Christian Inzinger, University of Zürich, Switzerland
- Andrea Janes, Free University of Bozen-Bolzano, Italy
- Falko Kötter, Fraunhofer IAO, Stuttgart, Germany
- Patrick Mäder, TU Ilmenau, Germany
- Olivier Perrin, Nancy 2 University, France
- Klaus Schmid, University of Hildesheim, Germany
- Jocelyn Simmonds, University of Chile, Chile
- Oleg Sokolsky, University of Pennsylvania, USA
- Uwe Zdun, University of Vienna, Austria
... and the workshop organizers
- Rick Rabiser (JKU Linz, Austria)
- Michael Vierhauser (JKU Linz, Austria)
- Sam Guinea (Polimi, Italy)
- Wilhelm Hasselbring (CAU Kiel, Germany)
The workshop on performance analysis of big data systems (PABS) aims at providing a platform for scientific researchers, academicians and practitioners to discuss techniques, models, benchmarks, tools, case studies and experiences while dealing with
performance issues in traditional and big data systems. The primary objective is to discuss performance bottlenecks and improvements during big data analysis using different paradigms, architectures and big data technologies. We propose to use this
platform as an opportunity to discuss systems, architectures, tools, and optimization algorithms that are parallel in nature and hence make use of advancements to improve the system performance. This workshop shall focus on the performance challenges
imposed by big data systems and on the different state-of-the-art solutions proposed to overcome these challenges. Accepted papers will be published in the ACM proceedings and included in the ACM Digital Library.
All novel performance analysis or prediction techniques, benchmarks, architectures, models and tools for data-intensive computing systems, aimed at optimizing application performance on cutting-edge high-performance solutions, are of interest to the workshop.
Examples of topics include, but are not limited to:
- Performance analysis and optimization of Big data systems and technologies
- Big data analytics using machine learning
- In-memory analysis of big data
- Performance-assured migration of traditional systems to Big data platforms
- Deployment of Big data technology/applications on high-performance computing architectures
- Case studies/benchmarks to optimize or evaluate the performance of Big data applications/systems, and Big data workload characterizations
- Tools or models to identify performance bottlenecks and/or predict performance metrics in Big data systems
- Performance analysis of querying, visualizing and processing large network datasets on clusters of multicore and many-core processors and accelerators
- Performance issues in heterogeneous computing for Big data architectures.
- Analysis of Big data applications in science, engineering, finance, business, healthcare and telecommunication etc.
- Data structure and algorithms for performance optimizations in Big data systems.
- Data intensive computing
- Tools for big data analytics and management
Submissions describing original, unpublished recent results related to the workshop theme, up to 6 pages in the standard ACM format, can be submitted through the EasyChair conference system, following this link: https://easychair.org/conferences/?conf=pabs17.
More information on the ACM format is available on the ICPE 2017 web page. Accepted technical papers will be published in the ICPE 2017 Proceedings. In case of any difficulty please contact email@example.com or firstname.lastname@example.org.
- Amy Apon, Clemson University, USA
- Dhabaleswar K. Panda, Ohio State University, USA
- Gautam Shroff, TCS Innovation Lab, India
- Kishor Trivedi, Duke University, USA
- Rajesh Mansharamani, Consultant, India
- Jeff Ullman, Stanford University and Gradiance, USA
- Saumil Merchant, Shell, India
- Sebastien Goasguen, Citrix, Switzerland
- Steven J Stuart, Clemson University, USA
- Vikram Narayana, George Washington University, USA
- Tilmann Rabl, DIMA, Toronto, Canada
DevOps has emerged in recent years as a set of principles and practices
for smoothing out the gap between development and operations, thus
enabling faster release cycles for complex IT services. Common tools and methods used in DevOps include infrastructure as code, continuous
deployment, automated testing, continuous integration, and new
architectural styles such as microservices.
As of today, software engineering research has mainly explored these
problems from a functional perspective, trying to increase the benefits
and generality of these methods for the end users. However, this has left open the definition of methods and tools for DevOps to assess, predict,
and verify quality dimensions.
The QUDOS workshop focuses on the problem of how to best define and
integrate quality assurance methods and tools in DevOps. Quality covers
a broadly-defined set of dimensions including functional correctness, performance, reliability, safety, survivability, and cost of ownership,
among others. To answer this question, the QUDOS workshop wants to bring together experts from academia and industry working in areas such as
quality assurance, testing, performance engineering, agile software engineering, and model-based development. The goal is to identify and disseminate novel quality-aware approaches to DevOps.
Topics of interest include, but are not limited to:
- Foundations of quality assurance in DevOps:
Methodologies; integration with lifecycle management; automated
tool chains; architecture patterns; etc.
- Architectural issues in DevOps:
Scalability and capacity planning; scale-out architectures;
cloud-native application design; microservice-based architectures
- Quality assurance in the development phase:
Software models and requirements in early software development
phases; functional and non-functional testing; languages,
annotations and profiles for quality assurance; quality analysis,
verification and prediction; optimization-based architecture design;
- Quality assurance during operation:
Application performance monitoring; model-driven performance
measurement and benchmarking; feedback-based quality assurance;
capacity planning and forecasting; architectural improvements;
performance anti-pattern detection; traceability and versioning;
trace and log analysis; software regression and testing; performance
monitoring and analytics; etc.
- Continuous deployment and live experimentation:
CI and CD in DevOps; canary releases and partial rollouts; A/B
testing; performance and scalability testing via shadow launches
- Applications of DevOps:
Case Studies in cloud computing, Big Data, and IoT; standardization
and interoperability; novel application domains, etc.
- All other topics related to quality in DevOps and agile service
Authors are invited to submit original, unpublished papers that are not
being considered in another forum. We solicit full papers (max
6 pages) and short tool papers (max 2 pages). All submissions must
conform to the ACM conference format. Each full paper submission will
be reviewed by at least three members of the program committee.