Analysis

In Computer Science, models come in all shapes and sizes: UML domain models, ORM or ER data models, rewriting systems, ontologies, and many more.  Depending on the modelling language and method, we can perform many useful tasks on these models, ranging from error detection to turning them into executable software in their own right.

A computer simulation models a hypothetical situation so that it can be used to study how the real system behaves. One application we are currently working on is simulating vessel movements and cargo transport in the North Sea for oil and gas companies, with the goal of improving both the utilisation of vessel capacity and the timely delivery of material. We develop the modelling language ABS to simulate such complex and concurrent processes. In another application, we are developing a methodology to simulate and analyse geological processes in order to assist geologists in prospect evaluation.
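The core idea behind such simulations can be illustrated with a minimal discrete-event loop. The sketch below is not the ABS language itself, only the underlying principle: events are processed in time order, and the resulting trace can be analysed afterwards. The event names are hypothetical.

```python
import heapq

def simulate(events):
    """Minimal discrete-event loop: process (time, description) events in time order."""
    heapq.heapify(events)  # priority queue ordered by event time
    log = []
    while events:
        time, what = heapq.heappop(events)
        log.append((time, what))
    return log

# Hypothetical vessel schedule, given out of order:
trace = simulate([(8, "vessel departs port"),
                  (2, "cargo loaded"),
                  (14, "arrival at platform")])
```

A real simulator would let each processed event schedule new future events (e.g. an arrival triggering unloading), but the time-ordered event queue remains the central mechanism.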


Model checking is a verification technique that explores all possible system states in a brute-force manner. Much like a computer chess program that examines possible moves, a model checker, the software tool that performs model checking, examines all possible system scenarios systematically. In this way, it can show that a given system model truly satisfies a certain property. We apply model checking in the domain of software product lines to verify the correctness of feature models as they evolve over time.
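The brute-force exploration described above can be sketched in a few lines: enumerate every reachable state, check an invariant in each, and report a counterexample state if one violates it. This is a minimal explicit-state sketch under assumed toy inputs, not a production model checker (which would also handle temporal properties and state-space reduction).

```python
from collections import deque

def model_check(initial, successors, invariant):
    """Breadth-first exploration of all reachable states.

    Returns a state violating the invariant (a counterexample), or None
    if the property holds in every reachable state."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

# Toy system: a counter that wraps around modulo 8 always stays below 8.
violation = model_check(0, lambda s: [(s + 1) % 8], lambda s: s < 8)
assert violation is None
```

Because every reachable state is visited, the answer is exhaustive rather than based on sampled test runs, which is exactly what distinguishes model checking from testing.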


Deductive software verification is the process of turning the correctness of a program into mathematical statements, called verification conditions, which are then discharged by automated or interactive theorem provers. We develop deductive software verification techniques to verify the functional correctness of concurrent and distributed programs.
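How a verification condition arises can be illustrated with the classical weakest-precondition rule for assignment: the weakest precondition of `x := e` with respect to a postcondition is the postcondition with `e` substituted for `x`. The sketch below uses naive textual substitution on a made-up example, so it is only an illustration of the rule, not a usable VC generator (which would work on a parsed syntax tree).

```python
def wp_assign(var, expr, post):
    """Weakest precondition of the assignment `var := expr` w.r.t. `post`:
    substitute expr for var in post (crude textual sketch)."""
    return post.replace(var, f"({expr})")

# Program: x := x + 1    Postcondition: x > 0
# The verification condition is that the precondition implies this wp.
vc = wp_assign("x", "x + 1", "x > 0")
# vc == "(x + 1) > 0", which a theorem prover would discharge given, say, x >= 0.
```

In a real tool, the resulting formula would be handed to an automated or interactive prover; if all verification conditions are proved, the program is correct with respect to its specification.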


Static program analysis is the analysis of computer software performed without actually executing the program, in contrast with dynamic analysis, which is performed on programs while they are running. We are developing novel static analysis techniques for better data locality in parallel processing. Static analysis also sees growing commercial use in verifying safety-critical computer systems and locating potentially vulnerable code.
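The defining feature, analysing code without running it, can be shown with a tiny checker that scans a program's syntax tree for division by the literal zero. This is a deliberately simple sketch; real static analysers track how values flow through the program rather than matching single syntactic patterns.

```python
import ast

def find_constant_div_by_zero(source):
    """Scan source code for division by the literal 0, without executing it."""
    issues = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.BinOp) and isinstance(node.op, ast.Div)
                and isinstance(node.right, ast.Constant) and node.right.value == 0):
            issues.append(node.lineno)
    return issues

print(find_constant_div_by_zero("a = 1 / 0\nb = 2 / 3"))  # → [1]
```

Note that the offending line is reported even though running it would crash: the analysis inspects the code's structure, never its execution.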


With information formalized in a computer-readable logic, we can make machines carry out rule-based logical deductions to reveal and make use of knowledge that is only implicit in the available information. This is known as automated reasoning, and it has many highly useful applications.
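A minimal form of such rule-based deduction is forward chaining over Horn rules: keep applying rules whose premises are already known until no new facts emerge. The sketch below uses invented geological fact names purely for illustration.

```python
def forward_chain(facts, rules):
    """Apply Horn rules (premises -> conclusion) until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules: a reservoir with a seal forms a trap; a trap plus a
# source rock makes a prospect.
rules = [
    ({"Reservoir", "Seal"}, "Trap"),
    ({"Trap", "SourceRock"}, "Prospect"),
]
derived = forward_chain({"Reservoir", "Seal", "SourceRock"}, rules)
# "Prospect" is now derived, although it was never stated explicitly.
```

The derived fact was only implicit in the input; making such implicit knowledge explicit, at scale and over far more expressive logics, is exactly what automated reasoners do.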


When applied to pure mathematics or logic, this is called theorem proving, and it can be used to derive new theorems, generate proofs and explanations, or verify existing proofs. Automated deduction can also be applied to formalizations of human knowledge, that is, ontologies, to aid in data integration, verification, and intelligent query answering, together realizing the potential of intelligent data.


Finally, automated reasoning can be applied to constraints over data or knowledge, to verify that the information has the correct shape and content even when it grows to a size and complexity impenetrable to humans.
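Checking that data "has the correct shape" can be sketched as validating records against a declared shape, in the spirit of constraint languages for data. The field names and shape format below are hypothetical, chosen only to show the idea.

```python
def check_shape(record, shape):
    """Verify that a record has every required key, with a value of the required type."""
    return all(key in record and isinstance(record[key], typ)
               for key, typ in shape.items())

# Hypothetical shape for a well record:
shape = {"well_id": str, "depth_m": float}

assert check_shape({"well_id": "W-1", "depth_m": 2450.0}, shape)
assert not check_shape({"well_id": "W-1"}, shape)  # missing depth_m
```

Real constraint languages express far richer conditions (cardinalities, value ranges, relationships between records), but the principle is the same: a machine checks every record so that humans do not have to.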


We apply and develop algorithms and techniques for automated reasoning in applications that solve complex real-world problems. In the Optique project, we used automated reasoning for data integration and query rewriting across multiple large data sources, with Statoil and Siemens as industry partners. Automated reasoning also plays a central role in the GeoAssistant project, where a reasoner generates explanations of answers about geological information. In a more theoretical project, we have developed a novel rule language, known as GBoxes, for general deductions at the ontological level.

By Crystal Chang Din, Leif Harald Karlsen, Martin Giese
Published Aug. 6, 2019 3:11 PM - Last modified Aug. 9, 2019 2:24 PM