The Computational Complexity of Some Logical Theories


I'm reading a book from 1979 by Jeanne Ferrante and Charles W. Rackoff: "The Computational Complexity of Logical Theories." Though it's now nearly 40 years old, it still gets cited often, so I thought I'd give it a try after finding it referenced in a couple articles on non-elementary recursion and super-exponential complexity bounds. You can view a sample on Google Books; it's typed, rather than typeset.

Depending on how into the book I get, I might try to explain some of the results in later discussions.

The topic of the book is decision problems for statements in first-order logic, over different types of theories or structures. We know that sufficiently complicated first-order theories are undecidable. More restrictive theories are decidable, but very, very hard. A theory is "decidable" if an algorithm can determine whether any given sentence is true in it, that is, whether all possible models (obeying the axioms) satisfy the sentence.

A table early in the book lists some new results of the authors', along with other results known at the time. I have translated these into big-O or little-o notation, but note that the lower bounds hold for *some* constant "c", not all constants "c".

| Theory | Upper bound (known to be in this class) | Lower bound (not in this class) |
| --- | --- | --- |
| 1-1 unary functions | NTIME( 2^O(n^2) ) | NTIME( 2^(cn) ) |
| 1-1 unary functions with a monadic predicate | NTIME( 2^(2^O(n)) ) | NTIME( 2^(2^(cn)) ) |
| one successor | DSPACE( n^2 ) | NSPACE( o(n) ) |
| one successor with a monadic predicate | NTIME( 2^(2^O(n)) ) | NTIME( 2^(2^(cn)) ) |
| natural numbers with < ordering | DSPACE( n^2 ) | NSPACE( o(n) ) |
| two successors with equal-length predicate | DSPACE( 2^O(n) ) | NTIME( 2^(cn) ) |
| well-ordering | DSPACE( n^3 ) | NSPACE( o(n) ) |
| lexicographic order | DSPACE( n^3 ) | NSPACE( o(n) ) |
| integers with +, ≤, 0 | DSPACE( 2^(2^O(n)) ) | NTIME( 2^(2^(cn)) ) |
| real numbers with +, ≤, 0 | DSPACE( 2^O(n) ) | NTIME( 2^(cn) ) |
| nonnegative numbers with multiplication | DSPACE( 2^(2^(2^O(n))) ) | NTIME( 2^(2^(2^(cn))) ) |
| finite abelian groups | DSPACE( 2^(2^(2^O(n))) ) | NTIME( 2^(2^(cn)) ) |
| any nonempty collection of pairing functions | | NTIME( 2 ↑↑ cn ), i.e., 2 raised to itself cn times |

Notation: DSPACE is "space consumed by a deterministic Turing machine", while NSPACE is the same for nondeterministic Turing machines. DTIME and NTIME are the corresponding definitions for time used by a deterministic or nondeterministic Turing machine.

A lower bound of NSPACE( o(n) ) means that no space bound growing more slowly than "n" suffices, such as n^0.99 or n / log n.

Most practical complexity theory sits at EXPTIME, that is, DTIME( 2^(n^k) ) for some constant k, or below. Here we're messing around with doubly-exponential time bounds, and space bounds that are even worse. It's sort of crazy that such simple theories are powerful enough that their decision procedures are so computationally expensive.
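To get a feel for these growth rates, here's a tiny Python sketch (my own illustration, not from the book) computing doubly-exponential values and the tower function 2 ↑↑ k from the last row of the table:

```python
def tower(k):
    """2 raised to itself k times, with tower(0) = 1."""
    return 1 if k == 0 else 2 ** tower(k - 1)

print([2 ** (2 ** n) for n in range(5)])  # [2, 4, 16, 256, 65536]
print([tower(k) for k in range(5)])       # [1, 2, 4, 16, 65536]
```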

Examples of first-order logic in these theories

"First-order logic" means that the quantifiers, the "for all" or "the exists" in a sentence, are over individual objects. This contrasts with second-order logic in which we can quantify over sets or relations as well, or propositional logic which uses no quantifiers.

"1-1 unary functions": all the objects are functions which take one argument, and are injective. That means that if f(x) = f(y), then x = y. Functions can be combined using composition, which is sometimes written as ○. A statement in this logic might be "for all f and g, fg = gf", which is false: function composition is not, in general, commutative.

"monadic predicate": this is just fancy logician-speak for a condition or property that has only one argument. So, for example, we might add the predicate EVEN. Some functions are even and some are not. So in this theory a statement like "for all f and all g, EVEN(f) implies EVEN(gf)."

"one successor": the successor function is a unary function that "adds one", or moves an element to the next in a sequence. So, a sentence in this theory might be "for all x, there exists y such that y = S(x)".

"two successors": I'm a little less clear on this one. I think if we embed all our elements into two different linear orders, then we have one successor operation for each orders. You can think of this as a street grid (integer lattice) in which one successor moves you east (+1 in the x direction) and one moves you north (+1 in the y direction).

"finite abelian group": a group has an operator, an identity under that operator, and inverses with respect to that operator. So think of addition on the integers: + is the operator, 0 is the identity, the the inverse of x is -x. "Abelian" means "commutative" so that a + b = b + a. And, "finite" just means that there group has a number of elements defined by a natural number, not an infinite group. An example sentence in this theory might be "For all x, there exists some element y such that y + x + (-y) = x and not y = 0".

Further reading:

Presburger arithmetic, the theory of the natural numbers with addition. (This was proved decidable in 1929, long before other results on computational complexity!)

List of first-order theories -- Wikipedia

Decidability of a theory -- Wikipedia

Complexity classes: DSPACE, NSPACE, DTIME, NTIME, 2-EXPTIME


I have also been doing some reading around theories and algorithms lately. I haven't read the same material as you, but that makes your posts all the more interesting to read.
