Theoretical Computer Science
Theoretical Computer Science (TCS) uses mathematical and logical methods to understand the nature of computation and to solve fundamental problems arising through the everyday practical use of computer systems. TCS owes much of its success to Alan Turing (1912-1954), whom many consider the father of TCS, and to Kurt Gödel (1906-1978), one of the founders of modern logic.
Turing gave a precise definition of the notion of computation through his Turing machines and showed that there exist computing problems that no computer, however powerful, will ever be able to solve. Gödel showed that logical reasoning can be fully formalised and mechanised, but also that many properties of data structures and computing systems are inherently undecidable, that is, neither provable nor disprovable.
Based on Turing's and Gödel's fundamental insights, which showed both the power and the limitations of computation and logic, TCS has developed a wide variety of methods to prove the correctness of computing systems and to discover software errors before they lead to catastrophic computer failures. TCS also assesses the inherent difficulty of computing problems, develops efficient new methods to solve hard computing problems, and provides powerful methodologies for the development of correct and reliable software.
Theoretical Computer Science at Swansea:
Verification
We have expertise in several theorem provers and formal methods tools, such as Agda, B-Method, Coq, Event-B, Isabelle/HOL, Minlog, and Maude. We have developed specialised tools and libraries for software verification, such as CSP-CASL, CSP-Prover, the Timed CSP simulator, OnTrack, K-Android, the CASL Consistency Checker, CSP-Agda, and OOAgda. We have expertise in process algebras, especially CSP and CCS, and in various modal and temporal logics. We have been verifying railway interlocking systems with our industrial partner Siemens. This includes both traditional ladder-logic-based systems and the new ERTMS/ETCS, which is based on railways without signals and on distributed communication protocols. A special focus has been the optimisation of railway interlocking systems in order to increase the capacity of railways, ideally with minimal physical modifications. We also have expertise in the verification of computer hardware.
Type Theory
Types are used in programming to obtain programs with a higher degree of correctness and to help compilers translate programs more efficiently. Our main focus in type theory is on dependent type theory, in which types can be parametrized by (depend on) elements of other types. This makes it possible to describe the correctness of procedures by giving a procedure an output type that is parametrized by its inputs. For instance, one can express that the output of a sorting function is a list which is sorted and has the same elements as its input. Another example is the property that, after updating a railway interlocking system in response to requests by users, the new interlocking system fulfils the safety criteria. Dependent types also allow more generic types to be described. For instance, the type of handlers of a graphical user interface can be defined as a type which depends on the user interface itself. In Swansea we have special expertise in the proof theory of Martin-Löf Type Theory, interactive theorem proving using the type-theoretic theorem provers Agda and Coq, and coalgebras and coinduction. We have expertise in dependently typed programming, including writing verified graphical user interfaces, formalising process algebras in type theory, and verifying railway interlocking systems using type-theoretic theorem provers.
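To give a flavour of value-indexed types, the sketch below uses Haskell's GADTs and type-level naturals to index vectors by their length, so that taking the head of an empty vector is rejected at compile time. This is an illustrative fragment written for this page, not code from one of the group's developments; a fully dependently typed language such as Agda can express much stronger properties, for example the sortedness specification mentioned above.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

    -- Type-level natural numbers.
    data Nat = Zero | Succ Nat

    -- Vectors whose type is parametrized by their length:
    -- 'Vec n a' contains exactly n elements of type a.
    data Vec (n :: Nat) a where
      VNil  :: Vec 'Zero a
      VCons :: a -> Vec n a -> Vec ('Succ n) a

    -- Taking the head is only defined for non-empty vectors;
    -- 'vhead VNil' is a type error, not a run-time crash.
    vhead :: Vec ('Succ n) a -> a
    vhead (VCons x _) = x

    -- Appending vectors adds the lengths in the type.
    type family Add (m :: Nat) (n :: Nat) :: Nat where
      Add 'Zero     n = n
      Add ('Succ m) n = 'Succ (Add m n)

    vappend :: Vec m a -> Vec n a -> Vec (Add m n) a
    vappend VNil         ys = ys
    vappend (VCons x xs) ys = VCons x (vappend xs ys)

    main :: IO ()
    main = print (vhead (VCons (1 :: Int) VNil))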
Blockchain technology
Blockchains, and in general Distributed Ledger Technologies, provide a new way to agree on trusted digital information amongst untrusted parties. The best-known applications of this new technology are cryptocurrencies such as Bitcoin, which have been developed to replace the current system of storing bank accounts on a central server. The problem with a centralised server is that it forms a single point of failure and that it can be manipulated without the users' permission. In cryptocurrencies, bank accounts are stored in a distributed database, and various consensus models (the main one being mining) have been developed to decide which database entries are correct. Beyond cryptocurrencies, blockchains can be used to store any data and to create trust amongst untrusted users about its accuracy. Smart contracts can be used to automatically carry out transactions on a blockchain when certain conditions are met. Since the economic and financial consequences are huge, the verification of protocols related to blockchains, cryptocurrencies and smart contracts is very important. We have developed a model of blockchains and are working on verifying smart contracts and other protocols in theorem provers. We are exploring more general ways of using blockchain technologies, such as tracing the use of assets in the supply chain. This work is also part of http://www.swanseablockchainlab.com/.
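As a minimal illustration of the core data structure (and not of the group's own blockchain model), the sketch below links blocks by hashes, so that tampering with any block invalidates every later one. The hash function is a deliberately toy placeholder, not a cryptographic hash.

    import Data.List (foldl')

    -- A deliberately toy stand-in for a cryptographic hash function.
    toyHash :: String -> Int
    toyHash = foldl' (\h c -> 31 * h + fromEnum c) 17

    -- Each block stores its payload and the hash of the previous block,
    -- so changing one block breaks the hash links of all later blocks.
    data Block = Block { payload :: String, prevHash :: Int, blockHash :: Int }
      deriving Show

    mkBlock :: String -> Int -> Block
    mkBlock dat prev = Block dat prev (toyHash (show prev ++ dat))

    -- Extend a chain (newest block first).
    extend :: String -> [Block] -> [Block]
    extend dat []          = [mkBlock dat 0]
    extend dat chain@(b:_) = mkBlock dat (blockHash b) : chain

    -- A chain is valid if every block's hash matches its contents
    -- and the hash links are unbroken.
    valid :: [Block] -> Bool
    valid []  = True
    valid [b] = blockHash b == toyHash (show (prevHash b) ++ payload b)
    valid (b:p:rest) =
      blockHash b == toyHash (show (prevHash b) ++ payload b)
        && prevHash b == blockHash p
        && valid (p:rest)

    main :: IO ()
    main = print (valid (extend "pay Bob 5" (extend "pay Alice 3" [])))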
Argumentation Theory
Initially studied in philosophy and law, argumentation has been researched extensively in AI and Computer Science over the last decade. Simply stated, argumentation focuses on disputes and debates, where parties argue for and against conclusions. Argumentation provides a powerful mechanism for dealing with inconsistent, incomplete information using defeasible/non-monotonic reasoning; it offers a means of representing, and helping to resolve, conflicts and differences of opinion amongst different parties. These mechanisms give a formal treatment of common sense and everyday human reasoning patterns. As in logic, argumentation can generate explanations (proofs) for conclusions alongside the process of computation. The explanatory power of argumentation has been explored in argumentation-based decision making and planning in both legal and medical settings. At Swansea University, there is expertise in argumentation theory, argumentation and natural language processing, argumentation schemes, and planning. Of particular interest is interdisciplinary research on argumentation in law, with interactions between the Department of Computer Science and the School of Law.
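One standard formalism in this area is Dung's abstract argumentation frameworks: a set of arguments together with an attack relation, from which acceptable sets of arguments are computed. The sketch below, written purely for illustration rather than taken from the group's work, enumerates the admissible sets of a small framework by brute force.

    import Data.List (subsequences)

    -- A Dung-style abstract argumentation framework:
    -- a set of arguments and an attack relation between them.
    type Arg = String
    type AF  = ([Arg], [(Arg, Arg)])

    attacks :: AF -> Arg -> Arg -> Bool
    attacks (_, att) a b = (a, b) `elem` att

    -- A set is conflict-free if no member attacks another member.
    conflictFree :: AF -> [Arg] -> Bool
    conflictFree af s = and [ not (attacks af a b) | a <- s, b <- s ]

    -- A set defends an argument if it counter-attacks all of its attackers.
    defends :: AF -> [Arg] -> Arg -> Bool
    defends af@(args, _) s a =
      and [ or [ attacks af d b | d <- s ] | b <- args, attacks af b a ]

    -- Admissible: conflict-free and defending each of its members.
    admissible :: AF -> [Arg] -> Bool
    admissible af s = conflictFree af s && all (defends af s) s

    -- Example: argument a attacks b, and b attacks c.
    example :: AF
    example = (["a", "b", "c"], [("a", "b"), ("b", "c")])

    main :: IO ()
    main = print [ s | s <- subsequences (fst example), admissible example s ]

For this example the admissible sets are the empty set, {a} and {a, c}: c is acceptable only together with a, which defends it against b.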
Game Theory
Game theory is the mathematical theory of strategic interactions between agents. On the one hand, it forms the theoretical foundation of verification, as we can view a verification scenario as a game played between the system and its environment. On the other hand, Theoretical Computer Science is crucial for the foundations of game theory, as it allows us to take the algorithmic complexity of decision making into account.
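As a small, self-contained example of strategic interaction (the payoffs are assumptions chosen for illustration, not data from our research), the sketch below finds the pure Nash equilibria of a two-player Prisoner's Dilemma by checking every pair of moves for profitable unilateral deviations.

    -- A two-player game in normal form: each entry gives the payoffs
    -- of the row player and the column player.
    data Move = Cooperate | Defect deriving (Eq, Show, Enum, Bounded)

    payoff :: Move -> Move -> (Int, Int)
    payoff Cooperate Cooperate = (3, 3)
    payoff Cooperate Defect    = (0, 5)
    payoff Defect    Cooperate = (5, 0)
    payoff Defect    Defect    = (1, 1)

    moves :: [Move]
    moves = [minBound .. maxBound]

    -- A pair of moves is a (pure) Nash equilibrium if neither player
    -- can improve their own payoff by deviating unilaterally.
    isNash :: (Move, Move) -> Bool
    isNash (r, c) =
      all (\r' -> fst (payoff r' c) <= fst (payoff r c)) moves &&
      all (\c' -> snd (payoff r c') <= snd (payoff r c)) moves

    main :: IO ()
    main = print [ p | p <- [ (r, c) | r <- moves, c <- moves ], isNash p ]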
Logic and Complexity
Computational complexity theory has its origin in logic. The fundamental goal of this area is to understand the limits of efficient computation (that is, to understand the class of problems which can be solved quickly and with restricted resources) and the sources of intractability (that is, what places some problems inherently beyond the reach of such efficient solutions). The most famous open problem in the area is the P versus NP problem, listed among the seven Clay Millennium Prize Problems (see also the topic 'Satisfiability Solving'). Logic provides a multifarious toolbox of techniques to analyse questions like this, some of which promise to provide deep insights into the nature and limits of efficient computation. The theory group in Swansea focuses on logical descriptions of complexity, in particular propositional proof complexity and bounded arithmetic. In bounded arithmetic the major open problem is to prove strong independence results that would separate its levels. In propositional proof complexity the major open problem is to prove strong lower bounds for expressive propositional proof systems.
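A classic benchmark family in propositional proof complexity is the pigeonhole principle: encoded as a propositional formula it is unsatisfiable, yet it is known to require exponentially long resolution proofs. The sketch below, added here purely as an illustration of such a clause family, generates the clauses asserting that n+1 pigeons fit into n holes with at most one pigeon per hole.

    -- Propositional variable x i j: "pigeon i sits in hole j".
    -- Literals are (variable, polarity); clauses are disjunctions of literals.
    type Lit    = ((Int, Int), Bool)
    type Clause = [Lit]

    -- The clauses of PHP for n+1 pigeons and n holes.
    php :: Int -> [Clause]
    php n =
      [ [ ((i, j), True) | j <- [1 .. n] ] | i <- [1 .. n + 1] ]   -- every pigeon has a hole
        ++ [ [ ((i, j), False), ((i', j), False) ]                 -- no hole holds two pigeons
           | j <- [1 .. n], i <- [1 .. n + 1], i' <- [i + 1 .. n + 1] ]

    main :: IO ()
    main = mapM_ print (php 2)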
Cyber Security
Cyber security is an area of Computer Science that continually affects our everyday lives. It is a field with a wide-ranging background, from core fundamentals (e.g. cryptography) to the social issues involved in social engineering (e.g. attacks that deceive human judgement). Here at Swansea, we take an interdisciplinary approach that cuts across research themes in Computer Science and out to wider fields such as Law. The theory group is particularly interested in the rigorous analysis of the security of systems and continually makes contributions to the field, e.g. through modelling and verifying cryptocurrencies, working on models of privacy, considering mobile application security, and formulating approaches to capture cyberterrorism.
Artificial Intelligence and Law
AI and Law is a research area within Artificial Intelligence. AI theory and tools are applied to legal information (e.g. legislation, case law, and contracts) and processes (e.g. consultations, court proceedings, negotiations, and compliance tracking). Legal information challenges AI theory and tools to address complex, unresolved issues (e.g. defeasibility, inconsistency, vagueness/ambiguity, interpretation, argumentation, systems modelling, linguistic complexity, norms, and case-based reasoning). As recognised by UK research funding organisations, developments in AI and Law will have a fundamental, widespread, and long-lasting impact at all levels of society. At Swansea University, there is a unique, interdisciplinary collaboration between the Department of Computer Science and the School of Law. In Computer Science, there is expertise in logic, knowledge representation, machine learning, natural language processing, case-based reasoning, dialogue/discourse, chatbots, and social simulation. In Law, there is expertise in argumentation, case law, public and criminal law, legal advice, and vulnerable populations.
Natural Language Processing
Natural language processing (NLP) is a subfield of AI which focuses on the automated processing and analysis of natural language, e.g. parsing, lexical semantics, semantic representation, discourse, and generation. Two main approaches are: machine learning, where texts are statistically analysed and classified with respect to sentiment, opinion, topic, and other categories; and knowledge representation and reasoning, where (logical) rules and knowledge structures are used to process, relate, extract, represent, and reason with the fine-grained information encoded in language. Dialogue/discourse and natural language generation are used to develop chatbots. At Swansea University, there is expertise in information extraction, controlled natural languages (a restricted lexicon and grammar used to translate natural language into a formal language for inference), automated analysis of arguments in natural language, NLP using machine learning, chatbots, and the visualisation of large-scale linguistic information. Of particular interest is NLP applied to legal texts. NLP relates to type theory (type-theoretic approaches to syntax and semantics), argumentation, logic and proof theory, and other topics in AI where decision support and reasoning are grounded in language.
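To illustrate the idea of a controlled natural language (with an invented two-pattern fragment, not an existing system of ours), the sketch below translates sentences of the restricted forms "every X is a Y" and "some X is a Y" into first-order formulas rendered as strings, and rejects anything outside the fragment.

    -- A toy controlled natural language: only two sentence patterns are
    -- accepted, and each is mapped to a first-order formula (as a string).
    translate :: String -> Maybe String
    translate sentence =
      case words sentence of
        ["every", n1, "is", "a", n2] ->
          Just ("forall x. " ++ n1 ++ "(x) -> " ++ n2 ++ "(x)")
        ["some", n1, "is", "a", n2] ->
          Just ("exists x. " ++ n1 ++ "(x) & " ++ n2 ++ "(x)")
        _ -> Nothing   -- outside the controlled fragment

    main :: IO ()
    main = mapM_ (print . translate)
      [ "every contract is a document"
      , "some clause is a condition"
      , "the weather is nice"          -- rejected: not in the fragment
      ]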
Concurrency Theory
Concurrency theory is the area of theoretical computer science which studies how a program, algorithm, or problem can be decomposed into parts that can safely be executed (partially) out of order. Starting with Carl Adam Petri's seminal work on Petri nets in the early 1960s, a wide variety of formalisms have been developed for modelling and reasoning about concurrency. At Swansea University, our focus is mainly on process calculi, algebraic formalisms for expressing concurrent systems, and in particular on their connections to automata theory and to modal and temporal logics, addressing issues revolving around the expressivity, decidability and complexity of process calculi and of the logics for reasoning about them.
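As a taste of process calculi (a small fragment written for illustration, not one of the calculi studied in our research projects), the sketch below defines processes built from action prefixing and choice, their one-step transition relation, and the traces they can perform.

    -- A fragment of a CCS-like process calculus: the terminated process,
    -- action prefixing, and choice, together with its transition relation.
    data Proc
      = Nil                     -- the terminated process 0
      | Act String Proc         -- a.P : perform action a, then behave as P
      | Choice Proc Proc        -- P + Q : behave as P or as Q
      deriving Show

    -- All one-step transitions P --a--> P'.
    step :: Proc -> [(String, Proc)]
    step Nil          = []
    step (Act a p)    = [(a, p)]
    step (Choice p q) = step p ++ step q

    -- The traces (finite sequences of actions) a process can perform.
    traces :: Proc -> [[String]]
    traces p = [] : [ a : t | (a, p') <- step p, t <- traces p' ]

    -- Example: a vending machine that accepts a coin and then
    -- dispenses either tea or coffee.
    machine :: Proc
    machine = Act "coin" (Choice (Act "tea" Nil) (Act "coffee" Nil))

    main :: IO ()
    main = mapM_ print (traces machine)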
Educational, Historical and Philosophical Foundations of Computer Science
The influence of data, software and computation on the world is intensifying. The emergence of a dazzling range of digital technologies has transformed fundamentally many aspects of our political, economic, social and personal lives. Computer scientists are at the heart of these technologies and can have deep insights into how they affect and change the world: from theory, to policy and practice. Swansea computer scientists have made, and are making, technical contributions that are driving change, but they are also addressing Educational, Historical and Philosophical questions. Education: What education in computing should we offer (i) to school and university students and teachers; (ii) to people working in businesses, public services and the professions; and (iii) to citizens? History and heritage: What are the causes of change? What are the inventions and innovations that are key to our current technologies, ambitions and concerns? Philosophy: How do our technologies speak to classic, human philosophical problems of what we can know, what we can do, and who we are? [Link to cross-cutting page in online version]
Logic and Proof Theory
Human error is the main cause of fatal traffic accidents. Therefore, driverless cars are being developed to make driving safer. An even more dangerous activity than driving is programming. Since computer programs control aircraft, nuclear power plants, medical devices and banking systems, bugs introduced by programmers can cause catastrophic loss of life and livelihood. In Swansea we are using methods from Logic and Proof Theory to automatically generate computer programs that are guaranteed to be free of errors. These methods have their roots in the foundational crisis of mathematics, which was caused by logical paradoxes, for example Russell's paradox, and by Gödel's Incompleteness Theorem, which, in Computer Science terms, states that there is no logic that can prove all true statements about a computer system; even worse, no logic can prove of itself that it is free of contradiction. In their battle against incompleteness, logicians and computer scientists have developed logics that turned out to be extremely useful for making computer software, and hence our lives, safer.
Logicians have also turned to working with non-monotonic and defeasible logics to better relate to common sense reasoning in a dynamic context. Creating knowledge-bases derived from human expertise is increasingly essential. In these respects, logic becomes more interdisciplinary, relating to AI and Law, NLP, and argumentation.
Computability Theory
Are there limits to what we can compute? Could there be ways of computing that are more powerful than others? Fundamental questions of this kind were the kindling for the flurry of activity in the 1930s which is generally seen as the start of modern computer science and which eventually led to the invention of physical digital computers. The area is still concerned with such fundamental questions about computation. As the topic has evolved, many intricate questions have been posed, but fundamental issues remain, such as computation over continuous data (such as exact real numbers and infinite streams) rather than discrete data. At Swansea we do research on the mathematical foundations of computation over continuous data and are developing verified software for efficient exact real number computation.
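One standard way to compute with exact real numbers is to represent a real as a function that, given a precision n, returns a rational within 2^-n of it; operations then simply request their arguments at slightly higher precision. The sketch below shows this Cauchy-sequence representation as a minimal illustration; it is not a description of our verified implementations.

    -- Exact real numbers as fast-converging Cauchy sequences of rationals:
    -- the n-th approximation is within 2^-n of the real being represented.
    import Data.Ratio ((%))

    type Real' = Int -> Rational   -- approximation at the requested precision

    -- Embed a rational number (all approximations are exact).
    fromRat :: Rational -> Real'
    fromRat q _ = q

    -- Addition: to be within 2^-n of x+y, ask both arguments for
    -- precision 2^-(n+1), since the two errors add up.
    addR :: Real' -> Real' -> Real'
    addR x y n = x (n + 1) + y (n + 1)

    -- Negation preserves the error bound.
    negR :: Real' -> Real'
    negR x n = negate (x n)

    -- sqrt 2 as a Cauchy sequence, via bisection on the interval [1,2].
    sqrt2 :: Real'
    sqrt2 n = go n 1 2
      where
        go 0 lo hi = (lo + hi) / 2
        go k lo hi =
          let m = (lo + hi) / 2
          in if m * m <= 2 then go (k - 1) m hi else go (k - 1) lo m

    main :: IO ()
    main = print (fromRational (addR sqrt2 (fromRat (1 % 2)) 20) :: Double)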
Satisfiability Solving
Satisfiability solving (SAT) is the algorithmic solution of systems of logical (Boolean) equations. It has become a disruptive technology in the hardware industry, providing a powerful logical engine for proving correctness and for verification. SAT is also becoming more and more central to automated theorem proving (ATP). A recent success story was the solution of the Boolean Pythagorean Triples Problem, which produced the largest proof ever generated, using intelligent cube-and-conquer strategies. There are many other application areas where large "logical puzzles" have to be solved (think "big Sudokus"), such as railway safety, argumentation, AI in general, and logic and complexity. In the form of the P vs NP problem, the complexity of SAT solving is one of the central topics of theoretical computer science. In Swansea we cover the whole area, from the theoretical foundations via open-source implementations to applications in industry and mathematics.
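To make the problem concrete (as a toy sketch of the problem statement, not of how modern SAT solvers work), the code below represents a formula in conjunctive normal form and searches all assignments for a satisfying one.

    import Data.List (nub)
    import Data.Maybe (fromMaybe)

    -- CNF formulas: a literal is a variable with a polarity, a clause is a
    -- disjunction of literals, and a formula is a conjunction of clauses.
    type Var     = String
    type Lit     = (Var, Bool)
    type Clause  = [Lit]
    type Formula = [Clause]

    vars :: Formula -> [Var]
    vars = nub . map fst . concat

    -- An assignment satisfies a formula if every clause has a true literal.
    satisfies :: [(Var, Bool)] -> Formula -> Bool
    satisfies asg = all (any holds)
      where holds (v, pol) = fromMaybe False (lookup v asg) == pol

    -- Naive brute force: try all assignments, return a satisfying one if any.
    solve :: Formula -> Maybe [(Var, Bool)]
    solve f = go (vars f) []
      where
        go []     asg = if satisfies asg f then Just asg else Nothing
        go (v:vs) asg = case go vs ((v, True) : asg) of
                          Just m  -> Just m
                          Nothing -> go vs ((v, False) : asg)

    -- Example: (p or q) and (not p or q) and (not q or r).
    example :: Formula
    example = [ [("p", True), ("q", True)]
              , [("p", False), ("q", True)]
              , [("q", False), ("r", True)] ]

    main :: IO ()
    main = print (solve example)

State-of-the-art solvers replace this exhaustive search with unit propagation, clause learning and sophisticated heuristics, which is what makes industrial-scale instances tractable in practice.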