Monday (9 September)

Invited Talk #1 (chair: Bohuslav Křena)

Mauro Pezzè (USI Università della Svizzera Italiana): Automatic Verification of Human-Centric Cyber-Physical Systems

Human-centric cyber-physical systems (HCPhS) are systems where software, devices, machine learning and people seamlessly and endlessly interact with evolving goals, requirements and constraints. They increasingly pervade our lives, and span from simple mobile and Web applications, like recommendation systems and virtual shops, to complex evolving systems, like autonomous vehicles and smart cities. In this talk, I will give a broad and visionary view of the emerging issues and opportunities in the verification of human-centric cyber-physical systems. I will introduce the main features and survey the main open challenges of verifying human-centric cyber-physical systems. I will discuss the scope and limitations of the most recent research results in software verification, with a particular focus on software testing; I will overview the ongoing, partial but promising, research activities in our USI-Star and UniMiB-LTA laboratories; and I will propose my vision of the most challenging open research issues.

Technical Session #1 (chair: Milan Češka)

Bettina Könighofer (TU Graz): Safe Learning via Shielding 

Reinforcement learning algorithms discover policies that maximize reward, but do not necessarily guarantee safety during the learning or execution phases. In this talk, we discuss an approach to learn optimal policies while enforcing properties expressed in temporal logic. To this end, given a temporal logic specification, we synthesize a reactive system called a shield. The shield monitors the actions from the learner and corrects them only if the chosen action would violate the specification.
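
The shielding idea can be sketched in a few lines. The environment below (a one-dimensional corridor with one unsafe cell) and all names are hypothetical, not the talk's benchmarks; the point is only the monitor-and-correct loop.

```python
# Toy run-time shield: the learner proposes moves on a 1-D corridor, and the
# shield overrides any action that would enter the unsafe cell, enforcing the
# safety invariant "never at cell 0".

UNSAFE = {0}                       # cells violating the safety specification
ACTIONS = {"left": -1, "right": +1}

def shield(state, action):
    """Return the learner's action if safe, otherwise a corrected safe action."""
    if state + ACTIONS[action] in UNSAFE:
        for alt, delta in ACTIONS.items():   # pick any action that stays safe
            if state + delta not in UNSAFE:
                return alt
    return action

def run(policy, start, steps):
    state, trace = start, []
    for _ in range(steps):
        action = shield(state, policy(state))
        state += ACTIONS[action]
        trace.append(state)
    return trace

# A (bad) learned policy that always walks left is corrected at the boundary.
trace = run(lambda s: "left", start=3, steps=5)
assert all(s not in UNSAFE for s in trace)
```

The learner keeps full control except at the boundary, which is exactly the minimal-interference property a synthesized shield is meant to have.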

Stefan Pranger (TU Graz): Optimal Learning via Shielding

Besides safety issues, learned controllers have further limitations: (1) they are monolithic, and it is not possible to add new features without retraining; (2) when facing unexpected, untrained behaviour, their performance might be very poor, or they can even cause complete system failure. In this talk, we address these issues by formalizing deviations from optimal controller performance using quantitative run-time measurements, and by synthesizing quantitative shields that ensure both optimal performance and safe behaviour.

Jeremy Guijt (Radboud University Nijmegen): Probabilistic Shields: Shielded Decision Making in MDPs and POMDPs

Safety shields have shown success in deterministic settings, where an agent avoids safety violations altogether. However, in many cases this tight restriction limits the agent’s exploration and understanding of the environment, and policies satisfying the restrictions may not even exist. In this talk, we discuss probabilistic shields that ensure that safety violations occur only with small probability. As modelling formalisms, we will discuss in the talk 1) probabilistic shields based on Markov decision processes (MDPs) and 2) probabilistic shields based on partially observable MDPs (POMDPs).
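
On MDPs, a probabilistic shield can be computed by value iteration: for each state-action pair, estimate the probability of reaching a bad state (acting as safely as possible afterwards), and block actions whose risk exceeds a threshold δ. The tiny MDP below, its transition probabilities, and the threshold are all made up for illustration.

```python
# A tiny MDP: states 0..3, state 3 is the bad state.
# trans[s][a] is a list of (probability, successor) pairs.
trans = {
    0: {"a": [(1.0, 1)],           "b": [(0.5, 0), (0.5, 2)]},
    1: {"a": [(0.9, 0), (0.1, 3)], "b": [(1.0, 2)]},
    2: {"a": [(1.0, 0)],           "b": [(0.2, 3), (0.8, 2)]},
}
BAD = 3

def min_reach_prob(horizon=10):
    """Minimal probability of reaching BAD within `horizon` steps, per state."""
    v = {s: 0.0 for s in trans}
    v[BAD] = 1.0
    for _ in range(horizon):
        new = {BAD: 1.0}
        for s, acts in trans.items():
            new[s] = min(sum(p * v[t] for p, t in succ) for succ in acts.values())
        v = new
    return v

def allowed(state, action, delta, horizon=10):
    """The probabilistic shield permits `action` iff the violation probability
    (taking it now, then acting as safely as possible) stays within delta."""
    v = min_reach_prob(horizon)
    return sum(p * v[t] for p, t in trans[state][action]) <= delta

assert allowed(1, "b", delta=0.05)        # risk 0.0: permitted
assert not allowed(1, "a", delta=0.05)    # risk 0.1: blocked
```

Unlike a deterministic shield, nothing is blocked outright; only actions whose quantified risk exceeds δ are filtered, which preserves more of the agent's exploration.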

Masoud Ebrahimi (TU Graz): Learning a Behavior Model of Hybrid Systems Through Combining Model-Based Testing and Machine Learning

Models play an essential role in the design process of cyber-physical systems. They form the basis for simulation and analysis and help in identifying design problems as early as possible. However, the construction of models that comprise physical and digital behavior is challenging. Therefore, there is considerable interest in learning such hybrid behavior by means of machine learning, which requires sufficient and representative training data covering the behavior of the physical system adequately. In this work, we exploit a combination of automata learning and model-based testing to generate sufficient training data fully automatically. Experimental results on a platooning scenario show that recurrent neural networks learned with this data achieved significantly better results compared to models learned from randomly generated data. In particular, the classification error for crash detection is reduced by a factor of five, and a similar F1-score is obtained with up to three orders of magnitude fewer training samples.

Chencheng Liang (Uppsala University): Guiding interpolation selection for model checking by using machine learning techniques

Counterexample-Guided Abstraction Refinement (CEGAR) is a powerful framework for software model checking, and interpolation is one method to guide the abstraction in each iteration. To obtain proper interpolants we usually query theorem provers, but they do not always return the interpolants we expect. We therefore modify the interpolation queries posed to the theorem prover so that, ideally, it gives better answers (good interpolants) than for the original queries. We use a technique called interpolation abstraction to form the new queries, which can be generated manually or automatically. However, these queries are not perfect (they contain redundancy), so we want to refine them further and tell which queries are useful and which are useless. This is where machine learning comes in. Queries generated manually serve as our training data, and we first perform binary classification to predict whether a query is useful. We explore this classification problem at the text level and at the graph level. For text embeddings, we have tried the Doc2vec technique to embed the program and the corresponding queries. For graph-level embeddings, we are working on describing the program (transformed into Horn clauses) and the corresponding queries as a graph, and then applying graph embedding techniques (e.g. graph neural networks) to them.

Niveditha Manjunath (AIT Austrian Institute of Technology): Automatic Failure Explanation in CPS Models

Debugging Cyber-Physical System (CPS) models can be extremely complex. Indeed, only detection of a failure is insufficient to know how to correct a faulty model. Faults can propagate in time and in space producing observable misbehaviours in locations completely different from the location of the fault. Understanding the reason of an observed failure is typically a challenging and laborious task left to the experience and domain knowledge of the designers.

In this paper, we propose CPSDebug, a novel approach that combines testing, specification mining, and failure analysis to automatically explain failures in Simulink/Stateflow models. We evaluate CPSDebug on two case studies, involving two usage scenarios and several classes of faults, demonstrating the potential value of our approach.

Miroslav Stankovič (TU Wien): Automatic Analysis for Prob-Solvable Loops

One of the main challenges in the analysis of probabilistic programs is to compute invariant properties that summarise loop behaviours. Automation of invariant generation is still in its infancy and most of the time targets only expected values of the program variables, which is insufficient to recover the full probabilistic program behaviour. We present a method to automatically generate moment-based invariants of a subclass of probabilistic programs, called Prob-Solvable loops, with polynomial assignments over random variables and parametrised distributions. We combine methods from symbolic summation and statistics to derive invariants as valid properties over higher-order moments, such as expected values or variances, of program variables.
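
For intuition, consider the simplest Prob-Solvable-style loop, x := x + Bernoulli(p). Symbolic summation yields the moment-based invariants E[x_n] = x0 + n·p and Var[x_n] = n·p·(1−p); the sketch below (my own illustration, not the talk's tool) checks them against the exact distribution after n iterations.

```python
from fractions import Fraction

def distribution(x0, p, n):
    """Exact distribution of x after n iterations of  x := x + Bernoulli(p)."""
    dist = {Fraction(x0): Fraction(1)}
    for _ in range(n):
        new = {}
        for val, pr in dist.items():
            new[val] = new.get(val, Fraction(0)) + pr * (1 - p)    # coin = 0
            new[val + 1] = new.get(val + 1, Fraction(0)) + pr * p  # coin = 1
        dist = new
    return dist

def moments(dist):
    mean = sum(v * pr for v, pr in dist.items())
    var = sum((v - mean) ** 2 * pr for v, pr in dist.items())
    return mean, var

p, x0, n = Fraction(1, 3), 2, 10
mean, var = moments(distribution(x0, p, n))
assert mean == x0 + n * p         # first-moment invariant  E[x_n] = x0 + n*p
assert var == n * p * (1 - p)     # second-moment invariant Var[x_n] = n*p*(1-p)
```

The point of the talk's method is that such closed forms are derived symbolically for a whole class of loops, rather than checked numerically as here.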

Anja Karl (TU Graz): Small Faults Grow Up — A Verification Method for Countermeasures Against Fault Injection Attacks

A typical assumption when writing software is that registers and memory content do not change unless the software performs a write operation on these locations. However, in practice, so-called fault injection attacks are quite common. An attacker tries to gain invalid access, reveal secret keys, or trigger system failures by manipulating a program’s execution. A prominent example is the rowhammer attack, where an adversary injects faults in the form of bit flips in memory. To protect a system against attacks through injected bit flips, developers often use error detection codes. In theory, these codes can guarantee the detection of injected bit flips up to a specific bound. However, in practice, the implementations of these codes are often faulty and limit the capabilities of error detection. In my talk, I will present an example of such an implementation error and show how we use model checking to identify and resolve these problems.
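
The "detection up to a bound" guarantee can be seen on the simplest such code. The sketch below (my illustration, not the countermeasure from the talk) uses a single even-parity bit and exhaustively enumerates fault vectors, a miniature version of what a model checker does systematically: every single bit flip is caught, while some double flips escape.

```python
from itertools import combinations

def encode(data):
    return data + [sum(data) % 2]          # append even-parity bit

def detects(word):
    return sum(word) % 2 != 0              # parity violated => flip detected

def flip(word, positions):
    return [b ^ 1 if i in positions else b for i, b in enumerate(word)]

word = encode([1, 0, 1, 1])

# Exhaustively inject all 1-bit and 2-bit faults:
single = all(detects(flip(word, {i})) for i in range(len(word)))
double = all(detects(flip(word, set(c)))
             for c in combinations(range(len(word)), 2))
assert single        # every 1-bit fault is caught (the code's guarantee)
assert not double    # some 2-bit faults escape: the bound of the code
```

A buggy implementation of the checker would shrink the set of detected single-bit faults below the theoretical bound, and that is the kind of gap model checking can expose.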

Invited Talk #2 + Technical Session #2 (chair: Tomáš Vojnar)

Nikos Gorogiannis (Facebook & Middlesex University): Concurrency bug detection at scale with Infer

Concurrency bugs are notoriously difficult to reason about in program verification. I will present two static analyses deployed at Facebook for detecting (a) data races and (b) deadlocks, both for Java code. These analyses have low false-positive rates as a core design goal, and are implemented in the open-source Infer analyser. I will discuss the design trade-offs of these analyses, our experience from deploying them in production, as well as related theoretical results.

Nicolas Jeannerod (IRIF, Université de Paris): Symbolic Execution of Debian Packages

We attempt to analyse the effects that a Debian package installation, removal, etc. has on the file system. These packages are made of POSIX shell scripts. Our tool executes these scripts symbolically, computing a description of the transformation they apply to the file system. This representation relies on previously existing feature tree logics designed to describe incomplete knowledge of a tree (here, the file system). We have augmented these logics with predicates that describe updates of these trees.

We present our work on the symbolic engine executing shell scripts, and our results on the feature tree logics, including an efficient implementation of a satisfiability solver. We show how our tool already allows us to detect execution paths (in simple packages) that lead to inconsistent states of the Debian system, and how we can extract human-understandable descriptions of bugs found in this way.

Our future goal is to extend our tool to support more packages and to use the computed representations to check interesting properties (e.g. that removal is the inverse of installation, that installation does not modify anything in /home, etc.).

This work is part of the ongoing CoLiS project which aims at applying formal methods to the quality assessment of Debian packages. This is joint work with Benedikt Becker, Claude Marché, Mihaela Sighireanu, Yann Régis-Gianas and Ralf Treinen from our research teams in Saclay and Paris.

Thomas Lemberger (LMU Munich): Conditional Testing: Off-the-Shelf Combination of Test-Case Generators

There are several powerful automatic testers available, each with different strengths and weaknesses. To immediately benefit from different strengths of different tools, we need to investigate ways for quick and easy combination of techniques. Until now, research has mostly investigated integrated combinations, which require extra implementation effort. In contrast, we propose the concept of ‘conditional testing’, which requires no additional implementation effort. Conditional testing passes the test goals that a first tester has covered to the next tester, so that the next tester can focus on the remaining test goals. Our combinations do not require changes to the implementation of a tester, because we leverage a testability transformation. This way, testers can be combined ‘off the shelf’. To show the potential and flexibility of conditional testing, we present multiple, different combinations of testers.
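
The core idea of passing residual test goals from one tester to the next can be sketched directly. The "testers" below are toy stand-ins (in the paper this works off the shelf on real tools via a testability transformation of the program):

```python
# Conditional testing, minimally: each tester reports which test goals (say,
# branch ids) its generated tests cover; the next tester only sees the goals
# that are still open.

def conditional_testing(goals, testers):
    remaining, suite = set(goals), []
    for tester in testers:
        if not remaining:
            break
        tests, covered = tester(remaining)   # tester focuses on open goals
        suite.extend(tests)
        remaining -= covered
    return suite, remaining

# Two toy "testers": one good at even-numbered goals, one at odd ones.
even = lambda goals: ([f"t{g}" for g in goals if g % 2 == 0],
                      {g for g in goals if g % 2 == 0})
odd = lambda goals: ([f"t{g}" for g in goals if g % 2 == 1],
                     {g for g in goals if g % 2 == 1})

suite, open_goals = conditional_testing({1, 2, 3, 4}, [even, odd])
assert open_goals == set()     # the sequential combination covers everything
```

Neither tester alone covers all goals here; the combination does, without either tool knowing about the other.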

Technical Session #3 (chair: Javier Esparza)

Stefan Ratschan (ICS, Czech Academy of Sciences): Formal Software Specification From A Different Angle

Despite decades of research, formal software specification is still far from mainstream. While teaching related subjects to master students, I experienced that even students who manage to correctly use highly abstract programming-language constructs often fail to get simple, down-to-earth specifications right. Apparently, the problem with formal specifications is not abstraction, but the fact that they do not allow the students to execute and test them, to play with modifications, and so on. There have, of course, been efforts at overcoming this problem using executable specifications. In the talk, I will discuss some humble first experiments with an alternative approach in this direction.

Bernhard Kragl (IST Austria): Inductive Sequentialization of Asynchronous Programs

Asynchronous programs are notoriously difficult to reason about because they spawn computation tasks which take effect asynchronously in a non-deterministic way. This complexity can be amplified even more by the execution platform, for instance, when programs execute over an unreliable network where messages can be reordered, dropped, or duplicated. Devising inductive invariants for such programs requires apprehending complex relationships between an unbounded number of computation tasks in arbitrarily long executions.

In this paper, we propose a proof methodology, called inductive sequentialization, that sidesteps this complexity. This methodology provides a novel induction argument for the verification of an asynchronous program, and supports establishing a sequential reduction, i.e., a sequential program which captures every behavior of the original program, up to reordering of commutative actions. The essence of this methodology is a proof rule of the same name, which combines commutativity arguments, induction, and abstraction in order to establish the soundness of a given sequential reduction. We have successfully applied this methodology to a diverse set of message-passing protocols, including leader election protocols, Two-Phase Commit, and Paxos. The sequential reductions we obtain correspond to executions of these protocols in an idealized synchronous environment where processes act in a fixed order and at the same speed. These are generally the simplest executions to reason about.

Florian Zuleger (TU Wien): The Polynomial Complexity of Vector Addition Systems with States

Vector addition systems are an important model in theoretical computer science and have been used in a variety of areas. In this talk, we consider vector addition systems with states over a parameterized initial configuration. For these systems, we are interested in the standard notion of computational complexity, i.e., we want to understand the length of the longest trace for a fixed vector addition system with states depending on the size of the initial configuration. We show that the asymptotic complexity of a given vector addition system with states is either Θ(Nᵏ) for some computable integer k, where N is the size of the initial configuration, or at least exponential. We further show that k can be computed in polynomial time in the size of the considered vector addition system. Finally, we show that 1 ≤ k ≤ 2ⁿ, where n is the dimension of the considered vector addition system.
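
The quantity in question, the longest trace as a function of the initial configuration, can be computed by brute force on small instances. The VASS below (one state, dimension 2, my own example) has asymptotic complexity Θ(N) with k = 1: from (N, 0) the longest trace has exactly 3N steps.

```python
from functools import lru_cache

# Transitions of a single-state VASS; a transition may fire only if all
# counters stay non-negative afterwards.
TRANSITIONS = [(-1, 2), (0, -1)]

@lru_cache(maxsize=None)
def longest_trace(config):
    """Length of the longest trace from `config` (memoised exhaustive search)."""
    best = 0
    for t in TRANSITIONS:
        succ = tuple(c + d for c, d in zip(config, t))
        if all(c >= 0 for c in succ):
            best = max(best, 1 + longest_trace(succ))
    return best

# From (N, 0): fire (-1, 2) N times, then (0, -1) 2N times  =>  3N steps.
assert [longest_trace((n, 0)) for n in range(5)] == [0, 3, 6, 9, 12]
```

The talk's result is much stronger: the exponent k is computed in polynomial time from the VASS itself, without any such enumeration.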

Thanh-Hai Tran (TU Wien): TLA+ Model Checking Made Symbolic

TLA+ is a language for formal specification of all kinds of computer systems. System designers use this language to specify concurrent, distributed, and fault-tolerant protocols, which are traditionally presented in pseudo-code. TLA+ is extremely concise yet expressive: The language primitives include Booleans, integers, functions, tuples, records, sequences, and sets thereof, which can also be nested. In this talk, we present APALACHE — the first symbolic model checker for TLA+. On the assumptions that all specification parameters are fixed and all states are finite structures, APALACHE translates the underlying transition relation into quantifier-free SMT constraints, which allows us to exploit the power of SMT solvers. Designing this translation is the central challenge that we address in the development of our model checker. Our experiments show that APALACHE outperforms the explicit-state model checker TLC on benchmarks with large state spaces.

Vlastimil Dort (Charles University): Reference Immutability for Dotty and DOT

Reference immutability type systems allow declaring certain object references in a program as read-only, and guarantee that an object accessed through such reference cannot be mutated. There are implementations of such type systems for statically typed object-oriented programming languages such as Java or C#, using annotation processors or language extensions. In our work in progress, we explore the possibility of encoding reference immutability in the type system of Scala, with an implementation in the new Dotty compiler, and a formal definition based on the DOT calculus.

Martin Blicha (Charles University): Decomposing Farkas Interpolants

Modern verification commonly models software with Boolean logic and a system of linear inequalities over the reals, and over-approximates the reachable states of the model with Craig interpolation to obtain, for example, candidates for inductive invariants. Interpolants for the linear system can be efficiently constructed from a Simplex refutation by applying Farkas' lemma. However, in the worst case this approach results in incompleteness, since Farkas interpolants do not always suit the verification task. We introduce decomposed interpolants, a fundamental extension of Farkas interpolants obtained by identifying and separating independent components of the interpolant structure using methods from linear algebra. We also present experimental results demonstrating the usefulness of decomposed interpolants.
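
On a toy instance the construction looks as follows (my own sketch, not the tool's API). With A = {x ≤ 0, y ≤ 0} and B = {−x − y ≤ −1}, the Farkas coefficients (1, 1, 1) refute A ∧ B; summing only the A-part gives the Farkas interpolant x + y ≤ 0, and splitting that sum into its independent components gives the stronger decomposed interpolant x ≤ 0 ∧ y ≤ 0.

```python
from fractions import Fraction

# An inequality  sum(c_i * x_i) <= b  is a (coefficient-dict, bound) pair.
A = [({"x": 1}, 0), ({"y": 1}, 0)]     # A: x <= 0  and  y <= 0
B = [({"x": -1, "y": -1}, -1)]         # B: x + y >= 1

def combine(ineqs, coeffs):
    """Non-negative linear combination, as licensed by Farkas' lemma."""
    lhs, bound = {}, Fraction(0)
    for (c, b), k in zip(ineqs, coeffs):
        assert k >= 0
        for var, coef in c.items():
            lhs[var] = lhs.get(var, Fraction(0)) + k * coef
        bound += k * b
    return {v: c for v, c in lhs.items() if c != 0}, bound

# The full combination sums to 0 <= -1: the system A /\ B is unsatisfiable.
lhs, bound = combine(A + B, [1, 1, 1])
assert lhs == {} and bound < 0

# Farkas interpolant: the A-part of the combination, i.e.  x + y <= 0.
assert combine(A, [1, 1]) == ({"x": 1, "y": 1}, 0)

# Decomposed interpolant: split the A-part into independent components,
# yielding the stronger conjunction  x <= 0  /\  y <= 0.
decomposed = [combine([a], [1]) for a in A]
assert decomposed == [({"x": 1}, 0), ({"y": 1}, 0)]
```

Both are valid interpolants; the decomposed one retains structure (here, one conjunct per variable) that the summed Farkas interpolant loses, which is what can make it more useful as an invariant candidate.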

Sebastian Wolff (TU Braunschweig): Decoupling lock-free data structures from memory reclamation for static analysis

Verification of concurrent data structures is one of the most challenging tasks in software verification. The topic has received considerable attention over the course of the last decade. Nevertheless, human-driven techniques remain cumbersome and notoriously difficult while automated approaches suffer from limited applicability. The main obstacle for automation is the complexity of concurrent data structures. This is particularly true in the absence of garbage collection. The intricacy of lock-free memory management paired with the complexity of concurrent data structures makes automated verification prohibitive.

In this work we present a method for verifying concurrent data structures and their memory management separately. We suggest two simpler verification tasks that imply the correctness of the data structure. The first task establishes an over-approximation of the reclamation behavior of the memory management. The second task exploits this over-approximation to verify the data structure without the need to consider the implementation of the memory management itself. To make the resulting verification tasks tractable for automated techniques, we establish a second result. We show that a verification tool needs to consider only executions where a single memory location is reused. We implemented our approach and were able to verify linearizability of Michael & Scott’s queue and the DGLM queue for both hazard pointers and epoch-based reclamation. To the best of our knowledge, we are the first to verify such implementations fully automatically.

Tuesday (10 September)

Invited Talk #3 (chair: Jan Strejček)

Javier Esparza (TU Munich): Population protocols: A Case Study in Parameterized Verification of Liveness Properties of Probabilistic Distributed Algorithms

Many distributed algorithms should be correct independently of the number of processes executing them. Examples include algorithms for mutual exclusion, leader election, distributed termination, distributed consensus, reliable broadcast, etc. We say that these systems are parametric in the number of processes, and call them parameterized systems.

When the number of processes is fixed, many parameterized systems have a finite state space, and their correctness can be automatically analyzed using e.g. model-checking techniques. The challenge of parameterized verification is to automatically prove correctness for any number of processes. It amounts to proving that each member of an infinite family of finite-state systems is correct.

Population protocols are a model of computation very much studied by the distributed systems community. They are used to describe and analyze algorithms able to run in arbitrary networks of unknown topology. The goal of a protocol is that all processes learn some information about the initial configuration of the network. More precisely, in a protocol with initial states q_1, …, q_n, all processes should learn whether P(x_1, …, x_n) holds, where P is a given predicate, and x_i denotes the initial number of agents in q_i.

Population protocols are a great testbed for parameterized verification techniques. In the talk I will introduce them, discuss their expressive power, and analyze the theory and practice of proving them correct.
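
A standard example is the four-state exact-majority protocol, which computes the predicate #A > #B. The sketch below runs it under a deterministic round-robin scheduler (a simplification chosen here for reproducibility; the usual model assumes a random or merely fair scheduler) until the configuration stabilises.

```python
# Four states: active opinions A/B, passive opinions a/b.
RULES = {("A", "B"): ("a", "b"),   # opposite actives cancel out
         ("A", "b"): ("A", "a"),   # actives recruit passives to their opinion
         ("B", "a"): ("B", "b")}

def interact(u, v):
    """Apply the (symmetric) interaction rules to a pair of agents."""
    if (u, v) in RULES:
        return RULES[(u, v)]
    if (v, u) in RULES:
        y, x = RULES[(v, u)]
        return x, y
    return u, v

def run(agents):
    agents = list(agents)
    changed = True
    while changed:                 # repeat full passes until nothing changes
        changed = False
        for i in range(len(agents)):
            for j in range(len(agents)):
                if i != j:
                    u, v = interact(agents[i], agents[j])
                    if (u, v) != (agents[i], agents[j]):
                        agents[i], agents[j] = u, v
                        changed = True
    return agents

final = run(["A", "A", "A", "B", "B"])     # initial configuration: 3 x A, 2 x B
assert set(final) <= {"A", "a"}            # every agent learns that #A > #B
```

The correctness argument hinges on the invariant that #A − #B is preserved by every rule, so the surviving active opinion is the initial majority; proving this for all population sizes is precisely a parameterized verification problem.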

Technical Session #4 (chair: Mauro Pezzè)

Frank Schüssele (University of Freiburg): Trace Abstraction with Maximal Causality Reduction

We are interested in the verification of concurrent programs with trace abstraction. Trace abstraction takes a trace (i.e. a sequence of actions) leading to the error location and checks it for feasibility. If the trace is infeasible, we generalize it to an automaton and subtract these traces from the program graph. This way we continue until we cannot get any more error traces. If that is the case, we have proven that the program cannot reach its error location.

Concurrent programs consist of different threads, where at any point in time a statement of an arbitrary thread is executed. For the verification of concurrent programs, we need to consider all these interleaved executions. This can be problematic, as there are usually many possible executions of the different threads. However, some of them may be equivalent, i.e., their execution leads to the same program state. Trace abstraction usually needs to analyse various equivalent traces and therefore needs several redundant iterations. For this reason, we try to reduce the number of analysed traces using maximal causality reduction to improve trace abstraction on concurrent programs.

Maximal causality reduction (MCR) is a technique to reduce the number of visited traces. We start with one trace and generate new, non-equivalent permutations of that trace, i.e. traces with the same actions in a different order. Here, two traces are equivalent if all read accesses to variables read the same values. MCR encodes the constraints of a trace in a formula called the maximal causal model. This formula is then checked for satisfiability by an SMT solver; a solution of the formula corresponds to a new trace.
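
The notion of equivalence can be made concrete by brute force (a stand-in for MCR's SMT encoding, for illustration only): enumerate all interleavings of two toy threads that respect per-thread program order, and group them by the values their read events return.

```python
# Events: (thread, op, variable[, value-written]) with op in {"w", "r"}.
T1 = [("t1", "w", "x", 1), ("t1", "r", "y")]
T2 = [("t2", "w", "y", 2), ("t2", "r", "x")]

def interleavings(a, b):
    """All interleavings of two event lists, preserving each list's order."""
    if not a:
        yield tuple(b); return
    if not b:
        yield tuple(a); return
    for t in interleavings(a[1:], b):
        yield (a[0],) + t
    for t in interleavings(a, b[1:]):
        yield (b[0],) + t

def read_values(trace):
    """Map each read event to the value it observes (0 = initial value)."""
    mem, reads = {}, {}
    for ev in trace:
        if ev[1] == "w":
            mem[ev[2]] = ev[3]
        else:
            reads[ev] = mem.get(ev[2], 0)
    return frozenset(reads.items())

# Keep one representative interleaving per equivalence class:
classes = {read_values(t): t for t in interleavings(T1, T2)}
assert len(list(interleavings(T1, T2))) == 6   # six interleavings overall...
assert len(classes) == 3                       # ...but only three behaviours
```

MCR achieves the same pruning without enumeration: each satisfying assignment of the maximal causal model directly yields a representative of a new equivalence class.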

Dominik Klumpp (University of Freiburg): Automated Control Flow Reconstruction from Assembler Programs

Many standard software analyses require the control flow graph (CFG) of the analysed software as input. This poses no problem for analyses based on the source code. Sometimes however, an analysis must be based on a compiled binary, if for instance the source code is unavailable or lacks necessary low-level information. Thus, a CFG has to be constructed from the binary program if standard analyses are to be applied.

Control flow analysis of binary code is a non-trivial task, due to the presence of indirect branch instructions that transfer control to a dynamically computed location. In this talk, I will present two quality criteria for CFGs of binary programs: soundness and precision. These criteria are intended to enable sound, precise and efficient software analyses based on the CFG. The automated verification technique Trace Abstraction Refinement is adapted to iteratively construct and refine CFGs of binary programs. The constructed CFGs are provably sound, and I will give a sufficient condition for precision.
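
The soundness/precision trade-off shows up already in a toy CFG builder (instruction encoding and mnemonics below are made up): direct control flow is read off the instructions, while an indirect branch must be over-approximated by whatever target set the analysis has established.

```python
# program: address -> (mnemonic, direct target or None)
program = {
    0: ("mov", None),
    1: ("jz", 4),        # conditional direct jump: fall-through or target
    2: ("mov", None),
    3: ("jmp*", None),   # indirect jump: dynamically computed target
    4: ("ret", None),
}

def build_cfg(program, indirect_targets):
    """Successor map per address; indirect jumps use the supplied target sets."""
    cfg = {addr: set() for addr in program}
    for addr, (op, target) in program.items():
        if op == "jmp*":
            cfg[addr] |= indirect_targets[addr]   # sound over-approximation
        elif op == "ret":
            pass                                  # no intraprocedural successor
        elif op == "jz":
            cfg[addr] |= {target, addr + 1}
        else:
            cfg[addr].add(addr + 1)
    return cfg

# Coarse: the indirect jump may go anywhere. Refined: analysis proved {0, 4}.
coarse = build_cfg(program, {3: set(program)})
refined = build_cfg(program, {3: {0, 4}})
assert refined[3] < coarse[3]       # strictly more precise, still sound
assert refined[1] == {2, 4}
```

Both CFGs are sound (every real execution is represented); the talk's contribution is to refine the indirect-target sets iteratively so that precision is provable, not assumed.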

Kaled Alshmrany (University of Manchester): Finding Vulnerabilities in Network Protocols Using Fuzzing and Symbolic Execution

Implementations of network protocols are often prone to vulnerabilities caused by developers' mistakes when accessing memory regions and dealing with arithmetic operations. However, finding effective approaches for testing network protocol security has proven to be a difficult problem. The main reason is that the state space of complex protocol software is too large to be explored. Here we propose a novel approach that combines fuzzing with symbolic execution to verify intricate properties of network protocols. We use fuzzing for an initial exploration of the network protocol, while symbolic execution explores both the software paths and the protocol states that were not covered by fuzzing. From this combination, we automatically generate high-coverage test input packets for a network protocol implementation. Additionally, we survey various approaches based on fuzzing and symbolic execution to understand how these techniques can be efficiently combined. In our preliminary study, we used the software verifiers ESBMC, Map2Check, and KLEE and the fuzzing tool SPIKE. Our experimental results show that ESBMC can be further developed to detect security vulnerabilities efficiently and effectively in network protocols.

Juraj Major (Masaryk University): LTL3TELA: a translator of LTL to small omega-automata

We present a new tool called LTL3TELA that translates LTL formulae to Emerson-Lei automata, i.e. ω-automata with an acceptance condition consisting of logical combinations of Büchi and co-Büchi terms. LTL3TELA uses a novel translation procedure combined with an external translator and various optimisations to produce deterministic or nondeterministic automata that are usually smaller than automata built by other tools.

Vojtěch Havlena (Brno University of Technology): Automata Terms in a Lazy WSkS Decision Procedure

We propose a lazy decision procedure for the logic WSkS. It builds a term-based symbolic representation of the state space of the tree automaton (TA) constructed by the classical WSkS decision procedure. The classical decision procedure transforms the symbolic representation into a TA via a bottom-up traversal and then tests its language non-emptiness, which corresponds to satisfiability of the formula. On the other hand, we start evaluating the representation from the top, construct the state space on the fly, and utilize opportunities to prune away parts of the state space irrelevant to the language emptiness test. In order to do so, we needed to extend the notion of language terms (denoting language derivatives) used in our previous procedure for the linear fragment of the logic (the so-called WS1S) into automata terms. We implemented our decision procedure and identified classes of formulae on which our prototype implementation is significantly faster than the classical procedure implemented in the MONA tool.

Wednesday (11 September)

Industrial Talk #1 + Technical Session #5 (chair: Nikos Gorogiannis)

Kamil Dudka (Red Hat): Static Analysis and Formal Verification at Red Hat

Red Hat uses static analyzers to automatically find bugs in the source code of Red Hat Enterprise Linux.  The distribution consists of approx. 3000 RPM packages and 300 million lines of code.  Red Hat develops an open source tool that can statically analyze this amount of software in a fully automatic way. We give it source RPM packages of our choice and get the results of selected static analyzers in a unified machine-readable format.  This talk will cover which static analyzers are used by Red Hat and how their results are handled. Red Hat is now also experimenting with formal verifiers Symbiotic and Divine, which are developed by research groups of Masaryk University.  Is there any chance to integrate such tools into Red Hat’s static analysis workflow?

Gidon Ernst (LMU Munich): SecCSL: Security Concurrent Separation Logic

We present SecCSL, a concurrent separation logic for proving expressive, data-dependent information flow security properties of low-level programs. SecCSL is considerably more expressive, while being simpler, than recent compositional information flow logics that cannot reason about pointers, arrays etc. To capture security concerns, SecCSL adopts a relational semantics for its assertions. At the same time it inherits the structure of traditional concurrent separation logics; thus SecCSL reasoning can be automated via symbolic execution. We demonstrate this by implementing SecC, an automatic verifier for a subset of the C programming language, which we apply to a range of benchmarks. Joint work with Toby Murray (University of Melbourne).

Jens Katelaan (TU Wien): On Magic Wands and Bi-Abduction: Deciding Expressive Separation Logics

I present ongoing work on extending decidability results for symbolic-heap separation logic to separation logic with the separating implication (also known as the magic wand). I explain how to use these results to solve the (bi-)abduction problem. Bi-abduction is at the heart of compositional program analyses based on separation logic, as implemented, for example, in Facebook's Infer tool.

Industrial Talk #2 + Technical Session #6 (chair: Roderick Bloem)

Jan Fiedor (Honeywell): Verification of Safety-critical Systems at Honeywell

Compared to traditional software development, the development of safety-critical systems is different. Besides the source code, we also have the requirements, the model(s) of the system, the blueprints of the hardware components, etc. We also have to follow strict certification guidelines when proving that the final product fulfills the given requirements. Honeywell develops a service that can verify both the requirements and the source code, finding as many problems as possible early by analyzing the requirements, and keeping the number of problems to be found later, when verifying the source code, to a minimum. This talk will cover some of the approaches that can be used to detect defects in the requirements and the source code, the types of requirement defects we are interested in finding, and how to handle the various kinds of data we need to work with (requirements, models, source code).

Heinz Riener (EPFL): SAT-based Exact Synthesis

Exact synthesis is a versatile synthesis technique to find solutions to a synthesis problem that meet some parameters of the problem exactly. For instance, one may want to synthesize a circuit realization of a Boolean function with exactly k gates or with exactly l levels. In this talk, I give an overview of state-of-the-art methods for SAT-based exact synthesis.

Emily Yu (JKU Linz): QBF-based Bounded Model Checking for Temporal Epistemic Logic

During the talk, the speaker will present a framework for verifying and certifying model-checking problems for multi-agent systems, and introduce a tool, MCMAS-qbf, which extends the existing model checker MCMAS and supports the automated translation of bounded model checking problems against CTLK specifications, described in ISPL format, into quantified Boolean formulas.

Tanja Schindler (University of Freiburg): Finding Conflict and Unit Instances via E-Matching

First-order logic with quantifiers is undecidable in general, but some expressive fragments admit complete instantiation. This means that it is sufficient to instantiate the quantified formulas with a finite set of ground terms computed from the formula, and then solve the resulting quantifier-free conjunction with an SMT solver. A challenge is to select the relevant instances in order to avoid producing too many new formulas that slow down the solver.

In this talk we will present a new approach that treats quantified formulas in SMT solvers in the style of a theory solver in the DPLL(T) framework. This comprises methods to detect instances of quantified formulas that are in conflict with the current Boolean assignment of the ground literals or that lead to a ground propagation. In particular, we will discuss how E-matching, the most common heuristic method for quantifier instantiation in SMT solvers, can be used to find these specific instances.
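
A stripped-down version of the idea (purely syntactic matching, no congruence closure, all names invented for illustration): match the trigger f(x) against the ground terms, and test each resulting instance of the axiom ∀x. f(x) ≥ 0 against the current ground assignment to find a conflict instance.

```python
def match(pattern, term, subst):
    """Syntactic matching; pattern variables are strings starting with '?'."""
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in subst:
            return subst if subst[pattern] == term else None
        return {**subst, pattern: term}
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and pattern[0] == term[0] and len(pattern) == len(term):
        for p, t in zip(pattern[1:], term[1:]):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return None

# Current candidate model: values assigned to the ground terms.
ground = {("f", "a"): -1, ("f", "b"): 3}
trigger = ("f", "?x")

def instance_conflicts(subst):
    """Does the instance of  forall x. f(x) >= 0  contradict the model?"""
    return ground[("f", subst["?x"])] < 0

conflicts = [s for t in ground
             if (s := match(trigger, t, {})) is not None
             and instance_conflicts(s)]
assert conflicts == [{"?x": "a"}]   # x := a yields the conflict instance
```

In a real solver the matching is modulo the E-graph and the produced conflict instance is handed back to the DPLL(T) core like a theory lemma.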

Jaroslav Bendik (Masaryk University): Minimal Inductive Validity Cores

Symbolic model checkers can construct proofs of safety properties over complex models, but when a proof succeeds, the results do not generally provide much insight to the user. Minimal Inductive Validity Cores (MIVCs) trace a property to a minimal set of model elements necessary for constructing a proof, and can help to explain why a property is true of a model. In addition, the traceability information provided by MIVCs can be used to perform a variety of engineering analyses such as coverage analysis, robustness analysis, and vacuity detection. The more MIVCs are identified, the more precisely such analyses can be performed. In this talk, we describe the concept of MIVCs and also present an algorithm for a complete online enumeration of MIVCs.
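
Extracting a single minimal core can be sketched with a deletion-based loop over a provability oracle; the oracle below is a toy stand-in for a model checker, and complete MIVC enumeration, as in the talk, builds on top of such single-core extraction.

```python
def minimize(elements, provable):
    """Shrink a set of model elements to one minimal set that still proves
    the property (deletion-based minimization)."""
    core = list(elements)
    for e in list(core):
        trial = [x for x in core if x != e]
        if provable(trial):        # e is not needed for the proof: drop it
            core = trial
    return core

# Toy oracle: the property is provable iff the set contains {1, 2} or {3}.
provable = lambda s: {1, 2} <= set(s) or 3 in s

core = minimize([1, 2, 3, 4], provable)
assert provable(core)                                       # still a proof
assert all(not provable([x for x in core if x != e])        # and minimal
           for e in core)
```

Note that this example has two MIVCs, {3} and {1, 2}; which one the loop returns depends on the deletion order, which is why enumerating all of them requires a systematic (e.g. hitting-set-based) search rather than repeated minimization.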

Jiří Matyáš (Brno University of Technology): Adaptive Verifiability-Driven Strategy for Evolutionary Approximation of Arithmetic Circuits

We present a novel approach for designing complex approximate arithmetic circuits that trade correctness for power consumption and play an important role in many energy-aware applications. Our approach integrates, in a unique way, formal methods providing formal guarantees on the approximation error into an evolutionary circuit-optimisation algorithm. The key idea is to employ a novel adaptive search strategy that drives the evolution towards promptly verifiable approximate circuits. As demonstrated in an extensive experimental evaluation including several structurally different arithmetic circuits and target precisions, the search strategy provides superior scalability and robustness with respect to various approximation scenarios. Our approach significantly improves the capabilities of existing methods and paves the way towards an automated design process for provably-correct circuit approximations. This is joint work with M. Ceska jr., V. Mrazek, L. Sekanina, Z. Vasicek, and T. Vojnar.

Pavol Vargovčík (Brno University of Technology): Tuning MiniSAT

Boolean satisfiability (SAT) solvers are in the core of various sophisticated applications ranging from time schedule composition through DNA synthesis to formal verification of software. Despite the NP completeness of the satisfiability problem, the performance of the solvers has been significantly improved by recent research, enabling their use for real world problems. However, many tools still spend great most of the time solving boolean satisfiability. This performance issue can be lowered by better generation of input formulae and also by modifying the internals of the SAT solver, adjusting it for the specific use case. The presentation introduces common SAT solving techniques and shows our specialization of the MiniSAT solver to efficiently solve SAT in automata model checking.