Software Engineering and Programming Languages
Explore our Research Topics
Ali Mili
Research Areas: Software engineering, software testing, program correctness, program repair

Functional Extraction

Despite several decades of research and technology transfer, the development of reliable and secure software remains an unfulfilled goal: software is routinely fielded with known failures but unknown faults. Many of the problems of software engineering stem from the fact that programmers develop and reuse software products whose function they do not fully comprehend. The purpose of the functional extraction project is to develop and evolve an automated tool that computes the function of a program by static analysis of its source code in a C-like language. We achieve this goal by mapping a program onto an equation between its inputs and outputs, which we can then solve or reason about to analyze correctness, safety, security, reliability, etc. The main innovation of this project is the ability to compute the function of loops, which we achieve by means of invariant relations.

Theoretical Foundations for Program Repair

To repair a program does not mean to make it correct; it only means to make it more-correct, in some sense, than it was. In this project we introduce relative correctness, a reflexive and transitive partial ordering between candidate programs that culminates in absolute correctness with respect to a reference specification, and we show how this concept can be used in program repair to: define what a fault is; define what a unitary/atomic fault is; define what it means to repair a fault; measure the degree of faultiness of a program; and perform program repair in a stepwise manner, by removing one unitary fault at a time until the program is absolutely correct. We show empirically that augmenting existing program repair tools with these capabilities enhances their performance and broadens their scope.
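The loop-function idea behind Functional Extraction can be illustrated on a toy example. The sketch below is not the project's tool: it hand-derives, for one simple loop, the input/output function that such a tool would compute automatically, using the invariant relation that s - i*(i-1)/2 is constant across iterations of this particular loop.

```python
# Illustrative sketch (not the project's tool): the "function" of a loop
# is a relation between its initial and final states.

def loop(i, s, n):
    """A C-like loop: while (i < n) { s += i; i += 1; }"""
    while i < n:
        s += i
        i += 1
    return i, s

def extracted_function(i, s, n):
    """Closed form derived from the invariant relation
    s - i*(i - 1)/2 = constant across iterations."""
    if i >= n:
        return i, s  # the loop body never executes
    return n, s + n * (n - 1) // 2 - i * (i - 1) // 2

# The extracted closed form agrees with the loop on sample states:
for i0, s0, n0 in [(0, 0, 5), (3, 10, 3), (-2, 7, 4)]:
    assert loop(i0, s0, n0) == extracted_function(i0, s0, n0)
```

Once the loop has been replaced by such an equation between inputs and outputs, questions of correctness or safety reduce to reasoning about the equation rather than executing the loop.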
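The notion of relative correctness can be made concrete over a small finite domain. This is a hedged sketch under simplifying assumptions (deterministic candidate programs, a specification given as a set of admissible input/output pairs); the names `competence_domain` and `at_least_as_correct` are illustrative, not the project's terminology.

```python
# Hedged sketch: relative correctness over a small finite domain.
# The "competence domain" of a candidate program is the set of inputs
# on which it satisfies the specification; for deterministic programs,
# P is at least as correct as Q iff Q's competence domain is a subset
# of P's.

DOMAIN = range(-3, 4)
SPEC = {(x, abs(x)) for x in DOMAIN}  # specification: compute |x|

def competence_domain(prog):
    return {x for x in DOMAIN if (x, prog(x)) in SPEC}

def at_least_as_correct(p, q):
    return competence_domain(q) <= competence_domain(p)

buggy = lambda x: x                      # wrong on negative inputs
repaired = lambda x: -x if x < 0 else x  # absolutely correct

assert at_least_as_correct(repaired, buggy)        # the repair is more-correct
assert not at_least_as_correct(buggy, repaired)
assert competence_domain(repaired) == set(DOMAIN)  # absolute correctness
```

Stepwise repair then means moving up this ordering one fault removal at a time until the competence domain covers the whole domain.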
Mutation Testing

Mutation testing is the art of generating mutants of a base program, and is used primarily to assess the quality of test suites: the better a test suite distinguishes the base program from its non-equivalent mutants, the higher its quality. We estimate the ratio of equivalent mutants of a base program by analyzing the amount of redundancy in the program. We also analyze the effectiveness of a mutant set by its ability to vet test suites that expose the failures of the base program, and we model the minimization of a test suite as an optimization problem, where the objective function is the cardinality of the suite and the constraint under which the optimization is sought is the preservation of the mutant set's effectiveness.
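The kill-matrix and minimization ideas can be sketched in a few lines. This toy example (hypothetical names, hand-written mutants, greedy minimization standing in for a real optimizer) shows a mutant set vetting a test suite, an equivalent mutant that no test can kill, and suite minimization under the constraint that the killed-mutant set is preserved.

```python
# Minimal mutation-testing sketch: a kill relation over hand-written
# mutants of a base program, plus greedy test-suite minimization that
# preserves the full suite's set of killed mutants.

base = lambda x, y: x + y
mutants = [
    lambda x, y: x - y,  # non-equivalent
    lambda x, y: x * y,  # non-equivalent
    lambda x, y: y + x,  # equivalent to base: no test can kill it
]
tests = [(0, 0), (1, 2), (2, 2), (3, 0)]

def killed(suite):
    """Indices of mutants distinguished from base by some test in suite."""
    return {i for i, m in enumerate(mutants)
            if any(m(*t) != base(*t) for t in suite)}

def minimize(suite):
    """Greedily add tests until the full suite's kill set is preserved."""
    target, chosen = killed(suite), []
    while killed(chosen) != target:
        chosen.append(max(suite, key=lambda t: len(killed(chosen + [t]))))
    return chosen

assert killed(tests) == {0, 1}            # mutant 2 behaves as equivalent
assert killed(minimize(tests)) == {0, 1}  # effectiveness is preserved
```

Here the single test (1, 2) already kills both killable mutants, so the minimized suite has cardinality one while the equivalent mutant, as expected, inflates the denominator of any naive mutation score.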
Martin Kellogg
Research Areas: Verification, automated reasoning, software engineering, programming languages

Inferring Type Qualifiers

Pluggable type systems enable programmers to enhance the ability of a programming language’s type system to prevent bugs automatically by attaching a type qualifier to each type in the program: each pluggable type system prevents one kind of bug. These techniques have been shown to be effective, but they can be difficult to apply to legacy systems because of the need to write type qualifiers. We have developed a novel type inference system that is general, meaning that it applies to any pluggable type system without needing to modify the pluggable type system itself, making these techniques more accessible to programmers.

Correlating Understandability and Verifiability

Intuitively, code that is simple for a human to understand ought to be simple to verify automatically as free of bugs. We have developed an experimental methodology to test for this correlation in real code, applied it to a corpus of programs used in prior studies of how humans understand code, and found evidence for this intuition. Our findings have implications for how we build and use verification tools: when automated tools fail to verify code, it makes sense to refactor that code to make it simpler, because code that is simpler for the verifier to understand is likely to be simpler for other programmers to understand as well.
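What a type qualifier is can be illustrated with Python's `typing.Annotated`, which attaches extra metadata to a base type without changing it. This sketch is illustrative only: the `NonNeg` qualifier and `check_qualifiers` shim are hypothetical and enforce the qualifier at runtime, whereas real pluggable type systems check (and the project's system infers) such qualifiers statically.

```python
# Illustrative only: a qualifier refines a base type (here, int) without
# replacing it. `NonNeg` is a hypothetical qualifier; a real pluggable
# type checker would verify it at compile time rather than at run time.
import functools
from typing import Annotated, get_args, get_type_hints

NonNeg = "NonNeg"  # the qualifier is just a tag attached to the type

def check_qualifiers(fn):
    """Runtime shim: reject keyword arguments violating NonNeg."""
    hints = get_type_hints(fn, include_extras=True)

    @functools.wraps(fn)
    def wrapper(**kwargs):  # keyword-only, to keep the sketch short
        for name, value in kwargs.items():
            hint = hints.get(name)
            if hint is not None and NonNeg in get_args(hint) and value < 0:
                raise TypeError(f"{name}={value} violates NonNeg")
        return fn(**kwargs)
    return wrapper

@check_qualifiers
def sqrt_floor(x: Annotated[int, NonNeg]) -> int:
    return int(x ** 0.5)

assert sqrt_floor(x=9) == 3  # qualifier satisfied
```

The burden the inference work removes is precisely the writing of annotations like `Annotated[int, NonNeg]` throughout a legacy codebase.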
Iulian Neamtiu
Research Areas: Programming languages, software engineering, and their applications to reliable AI, smartphones, and security

Improving Smartphone Reliability and Security

We have developed a wide range of approaches to improve smartphone reliability, including static and dynamic program analyses, record-and-replay systems, application self-healing, and automatic test generators. These approaches have been released as open-source code and have exposed reliability issues in many popular applications, including medical mobile apps used in acute care settings.

Verifying Unsupervised Learning Implementations

Unsupervised Learning (UL) is widely used, by itself or as part of AI pipelines, in scenarios ranging from everyday applications to high-stakes domains. However, even specifying UL correctness is a challenge, which complicates verifying or validating UL implementations. Our past work has exposed serious issues (e.g., nondeterminism, inconsistency) in widely popular UL toolkits and on critical (e.g., medical) datasets. Our current work focuses on automated verification and fault localization for UL implementations using static and dynamic analysis.
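One way to test a UL implementation without a full correctness specification is a differential check: run the same algorithm on the same input and compare the resulting partitions, up to relabeling of clusters. The sketch below is a hedged illustration with a toy one-dimensional k-means standing in for a real toolkit; the names `kmeans` and `consistent` are hypothetical.

```python
# Hedged sketch: a differential "same input, same clustering?" check of
# the kind used to probe nondeterminism in UL implementations. Clusters
# are compared as sets of sets, so permuted labels still count as equal.
import random

def kmeans(points, k, seed, iters=20):
    """Toy 1-D k-means, seeded so a single run is reproducible."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return frozenset(frozenset(c) for c in clusters if c)

data = [0.0, 0.1, 0.2, 5.0, 5.1, 9.8, 10.0]

def consistent(seeds):
    """Do all runs agree on the partition (labels may be permuted)?"""
    return len({kmeans(data, 3, s) for s in seeds}) == 1

# With a fixed seed, repeated runs must agree:
assert consistent([7, 7])
```

Runs under different seeds may or may not agree, and that gap between per-seed determinism and cross-seed consistency is exactly the kind of behavior the differential check is meant to surface.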