The boolean satisfiability problem

The boolean satisfiability problem (SAT) is one of the most studied problems in computer science. The reason is simple: there are many, many situations in which one wants to know whether a given boolean formula has a solution.

It also has some interesting complexity-theoretic properties, which we’ll talk about later in the post.

To start, let me define the problem.

Definition

Let \(x_1, \dots, x_m\) be boolean variables. We define a literal to be either \(x_i\) or \(\lnot x_i\), for \(1 \le i \le m\). We define a clause to be some number of literals joined by \(\lor\), the “logical or”, surrounded by parentheses. That is, a clause is \((e_1 \lor \dots \lor e_k)\), with \(e_i\) a literal, for \(1 \le i \le k\). Finally, let us define a formula as some number of clauses joined by \(\land\), the “logical and”. That is, a formula is \(c_1 \land \dots \land c_n\), where \(c_i\) is a clause, for \(1 \le i \le n\).

We can now define the boolean satisfiability problem: given a formula \(S\) with boolean variables \(X = \{x_1, \dots, x_m\}\), decide whether there exists a function \(f : X \to \{true, false\}\) such that, when one replaces each \(x_i\) by \(f(x_i)\) in \(S\), the resulting boolean sentence is logically true. We denote this by \(f(S) = true\), and say that \(S\) is satisfiable.

This may seem like it applies only to a very restricted subset of boolean propositions; however, any boolean proposition can be reduced to one of these formulas. This is the conjunctive normal form of a boolean proposition. Thus, for any boolean proposition \(S\) you have, there exists another proposition in conjunctive normal form \(S'\), such that for any assignment of variables \(f\) you choose, \(f(S)\) is true if and only if \(f(S')\) is true. Herein lies one important aspect of SAT.

Examples

An instance of SAT asks whether a particular formula is satisfiable. For example, \((x_1 \lor \lnot x_2) \land (\lnot x_1 \lor x_3) \land (x_2 \lor x_3)\) is satisfiable (take \(x_1\), \(x_2\) and \(x_3\) all true), while \((x_1) \land (\lnot x_1)\) is not.

Note that, using de Morgan’s laws and basic boolean algebra, any formula involving connectives such as \(p \implies q\) or \(x \veebar y\) can be rewritten using only \(\lnot\), \(\land\) and \(\lor\), and then put in conjunctive normal form. Thus, if we have the proposition \((x \lor w) \implies (y \land z)\), we can reduce it to the equivalent SAT formula \((\lnot w \lor y) \land (\lnot w \lor z) \land (\lnot x \lor y) \land (\lnot x \lor z)\).
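Spelling out the intermediate steps of that conversion (rewrite the implication, push the negation inward with de Morgan, then distribute \(\lor\) over \(\land\)):

\[(x \lor w) \implies (y \land z) \equiv \lnot(x \lor w) \lor (y \land z) \equiv (\lnot x \land \lnot w) \lor (y \land z)\] \[\equiv (\lnot w \lor y) \land (\lnot w \lor z) \land (\lnot x \lor y) \land (\lnot x \lor z)\]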

Restrictions

One thing we can do, however, is to restrict ourselves to a certain subset of SAT problems. A popular one is \(k\)-SAT, the collection of SAT instances in which each clause has exactly \(k\) literals. A very interesting thing happens when we do this. In terms of complexity classes, and measuring time with respect to the number of clauses:

  • 1-SAT is in P.

  • 2-SAT is in P.

  • \(k\)-SAT, for any \(k \ge 3\), is NP-complete.

This is interesting. The first two problems are easy (and, indeed, can be solved in time linear in the number of clauses). But as soon as we allow a single extra literal per clause, the problem becomes NP-complete. That is quite drastic: we went from “very easy” to “very hard” by adding a single literal to each clause!

Indeed, SAT is the poster child for the class of NP-complete problems. It was the first problem proven to be NP-complete, by Stephen Arthur Cook, in 1971. Virtually every problem that has since been proven to be NP-hard has used this result, via a polynomial-time reduction from SAT (or from some problem that SAT reduces to) to the new problem. For more information on this, see the article on NP-completeness on Wikipedia.

1-SAT

First, we’ll show how 1-SAT can be solved in polynomial time.

It is easy to see how: there exists an assignment of \(x_1, \dots, x_m\) that satisfies a 1-SAT formula \(S\) if and only if \(x_i\) and \(\lnot x_i\) do not both appear in \(S\). If there are \(n\) clauses, at most \(n\) variables appear in them. Thus we can create an array \(V\) of \(n\) integers, all starting at -1. When we see a clause with the single literal \(x_i\), we check \(V[i]\): if it is 0, the formula is unsatisfiable; otherwise, we set \(V[i]\) to 1. When we see \(\lnot x_i\), on the other hand, we check \(V[i]\): if it is 1, the formula is unsatisfiable; otherwise, we set \(V[i]\) to 0. If we finish walking through the clauses without declaring the formula unsatisfiable, then it is satisfiable. Namely, the satisfying assignments are:

\[ f(x_i) = \begin{cases} true & \text{ if } V[i] == 1\\ false & \text{ if } V[i] == 0\\ \text{either }true\text{ or }false & \text{otherwise} \end{cases} \]
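As a concrete illustration, here is a minimal C++ sketch of this procedure. The input representation (clauses as signed integers, where \(+i\) means \(x_i\) and \(-i\) means \(\lnot x_i\)) is an assumption made for this example, not part of the problem statement.

#include <cstdlib>
#include <iostream>
#include <vector>

// Decide 1-SAT: each clause is a single literal, encoded as a signed integer
// (+i for x_i, -i for !x_i). The encoding is an assumption made for this sketch.
bool oneSatSatisfiable(const std::vector<int>& literals,
                       std::vector<int>& value /* -1 = free, 0/1 = forced value */) {
    for (int lit : literals) {
        int var = std::abs(lit);
        int want = lit > 0 ? 1 : 0;        // value forced by this clause
        if (value[var] == 1 - want)        // the opposite value was already forced
            return false;
        value[var] = want;
    }
    return true;                           // unassigned variables may take either value
}

int main() {
    // (x1) && (!x2) && (x3): satisfiable with x1 = true, x2 = false, x3 = true.
    std::vector<int> formula = {1, -2, 3};
    std::vector<int> value(4, -1);
    std::cout << (oneSatSatisfiable(formula, value) ? "satisfiable" : "unsatisfiable") << "\n";
}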

The proof of this is easy; I recommend you try it if you want a simple correctness proof to practice on.

The proof for 3-SAT isn’t as easy.

3-SAT

The proof that 3-SAT is NP-complete is by a polynomial-time reduction from general SAT to 3-SAT. Since SAT is in NP, clearly 3-SAT must also be in NP, because it is just a special case of it. It remains to be seen why 3-SAT is NP-hard.

Let’s say we have an instance of SAT, \(S\). Our clauses are \(c_1, \dots, c_n\). Let’s take the \(i\)th clause, \(c_i\), and suppose it has \(k\) literals in it. As a reminder, SAT isn’t \(k\)-SAT for any particular \(k\); it simply places no restriction on the length of its clauses. Our objective is to build a 3-SAT formula \(S'\) that is satisfiable if and only if \(S\) is, and such that any satisfying assignment \(f\) of \(S'\) also satisfies \(S\), meaning \(f(S) = true\). We will do this by examining each clause \(c_i\) and finding an equivalent set of 3-literal clauses.

If \(k = 2\), suppose we have \(x_i \lor x_j\) (we do not lose generality by assuming they’re not negated). We can replace this clause by two clauses, each with exactly 3 literals, that are together equivalent to it: \[(x_i \lor x_j \lor u) \land (x_i \lor x_j \lor \lnot u)\]

where \(u\) is a new variable.

Any assignment of \(x_i\), \(x_j\), and \(u\) that satisfies these two clauses necessarily satisfies the clause \((x_i \lor x_j)\), and vice versa: whatever value \(u\) takes, one of the two clauses loses its \(u\) literal, forcing \(x_i \lor x_j\) to hold. Thus, the original \(c_i\) and these two clauses, taken together, are equivalent.

Now let’s take the case \(k=1\). In this case, we can apply the same trick: the clause \((x_i)\) is equivalent to \[(x_i \lor u \lor v) \land (x_i \lor \lnot u \lor v) \land (x_i \lor u \lor \lnot v) \land (x_i \lor \lnot u \lor \lnot v)\]

where \(u\) and \(v\) are new variables. It is easy to see that these 4 clauses, when taken together, are also equivalent to \(c_i\).

Now we are left with the case of \(k > 3\), since if \(k = 3\), the clause is already in 3-SAT form. Again, we assume without loss of generality that our clause is of the form \((x_{i, 1} \lor \dots \lor x_{i, k})\). Let us consider the following clauses:

\[c'_i = (x_{i, 1} \lor x_{i, 2} \lor u_1) \land (\lnot u_1 \lor x_{i, 3} \lor u_2) \] \[\land (\lnot u_2 \lor x_{i, 4} \lor u_3) \land (\lnot u_3 \lor x_{i, 5} \lor u_4) \land\ \dots\] \[\land\ (\lnot u_{k-4} \lor x_{i, k-2} \lor u_{k-3}) \land (\lnot u_{k-3} \lor x_{i, k-1} \lor x_{i, k})\]

where \(u_1, \dots, u_{k-3}\) are new variables, introduced just for this clause. We will now see how these clauses are equivalent to \(c_i\):

Let’s assume \(c_i\) is satisfiable, and let \(f\) be a satisfying assignment. We want to show that \(c'_i\) is also satisfiable. At least one of the \(x_{i, t}\) is true under \(f\). For our new assignment \(f'\), we set \(f'(x_{i, t}) = f(x_{i, t})\ \forall\ 1 \le t \le k\), and we see:

  • If \(x_{i, 1}\) or \(x_{i, 2}\) is true under \(f\), then our new assignment keeps that one true and sets every new variable \(u_t\) to false. The first clause of \(c'_i\) is true because \(x_{i, 1}\) or \(x_{i, 2}\) is true, and every other clause of \(c'_i\) is true because its first literal, some \(\lnot u_t\), is true.

  • If \(x_{i, k-1}\) or \(x_{i, k}\) is true under \(f\), then our new assignment keeps that one true and sets every new variable \(u_t\) to true. The last clause of \(c'_i\) is true because either \(x_{i, k-1}\) or \(x_{i, k}\) is true, and every other clause of \(c'_i\) is true because its third literal, some \(u_t\), is true.

  • Finally, let us assume that \(f(x_{i, s})\) is true, with \(s \notin \{1, 2, k-1, k\}\). Then set \(f'(x_{i, s}) = true\), \(f'(u_t) = true\ \forall\ 1 \le t \le s-2\), and \(f'(u_t) = false\ \forall\ s-1 \le t \le k-3\). The clause containing \(x_{i, s}\) is true because \(f'(x_{i, s})\) is true; every clause before it is true because its third literal, some \(u_t\) with \(t \le s-2\), is true; and every clause after it is true because its first literal, some \(\lnot u_t\) with \(t \ge s-1\), is true. Thus, all of the clauses in \(c'_i\) are satisfied.

Thus, if \(c_i\) is satisfiable by \(f\), then \(c'_i\) is satisfiable by \(f'\). Now, we want to prove that if \(c'_i\) is satisfiable, then \(c_i\) is satisfiable. This is easy to see by induction:

The last clause of \(c'_i\) is satisfied, so if neither \(x_{i, k-1}\) nor \(x_{i, k}\) is true, then \(\lnot u_{k-3}\) must be, i.e. \(u_{k-3}\) is false. The clause before it, \((\lnot u_{k-4} \lor x_{i, k-2} \lor u_{k-3})\), is also satisfied, but not by \(u_{k-3}\); so if \(x_{i, k-2}\) is not true, then \(\lnot u_{k-4}\) is, i.e. \(u_{k-4}\) is false. Continuing down the chain in this way, as long as none of the \(x_{i, t}\) in between is true, each \(u_t\) is forced to be false. In particular \(u_2\) is forced to be false, so \((\lnot u_1 \lor x_{i, 3} \lor u_2)\) was not satisfied by \(u_2\); if it wasn’t satisfied by \(x_{i, 3}\), it was satisfied by \(\lnot u_1\), so \(u_1\) is false. But then \((x_{i, 1} \lor x_{i, 2} \lor u_1)\) wasn’t satisfied by \(u_1\), so either \(x_{i, 1}\) or \(x_{i, 2}\) is true. In every case, some \(x_{i, \alpha}\) must be true, for some \(1 \le \alpha \le k\), so \(c_i\) is satisfiable.

If you didn’t follow that proof, the gist of it is that if \(f'(c'_i) = true\), then \(f'\), restricted to the variables \(x_{i, 1}, \dots, x_{i, k}\), also satisfies \(c_i\), so \(c_i\) is satisfiable.

Then, \(c_i\) and \(c'_i\) are equivalent. In the worst case we have replaced a clause of \(k > 3\) literals with \(k-2\) clauses of 3 literals each, and in the short cases (\(k = 1\) or \(k = 2\)) with at most 4 clauses. Either way, the new clauses contain at most a constant multiple of the number of literals in the original clause.

Summing over all the clauses, our formula \(S'\) has size at most a constant multiple of the size of \(S\) (in particular, only polynomially larger), and it is equivalent to \(S\). Thus, if we could decide 3-SAT for \(S'\) in polynomial time, we could decide SAT for \(S\) in polynomial time. This gives a polynomial-time reduction from SAT, which we know to be NP-hard, to 3-SAT, and it proves that 3-SAT is NP-hard.

Thus, since 3-SAT is in NP and is NP-hard, it is NP-complete by definition.

Phew. :)
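To make the construction concrete, here is a small C++ sketch of the clause-by-clause transformation described above. The representation (literals as signed integers, with fresh auxiliary variables drawn from a counter) is an assumption made for this example, not anything from the proof itself.

#include <vector>

// Split one CNF clause (literals as signed ints: +i is x_i, -i is !x_i) into
// an equivalent set of 3-literal clauses, drawing fresh variables from nextVar.
// The encoding is an assumption made for this sketch.
std::vector<std::vector<int>> toThreeSat(const std::vector<int>& clause, int& nextVar) {
    std::vector<std::vector<int>> out;
    int k = (int)clause.size();
    if (k == 1) {                  // (l) -> 4 clauses over two fresh variables u, v
        int u = nextVar++, v = nextVar++;
        out = {{clause[0], u, v}, {clause[0], -u, v},
               {clause[0], u, -v}, {clause[0], -u, -v}};
    } else if (k == 2) {           // (l1 v l2) -> 2 clauses over one fresh variable u
        int u = nextVar++;
        out = {{clause[0], clause[1], u}, {clause[0], clause[1], -u}};
    } else if (k == 3) {           // already a 3-literal clause
        out = {clause};
    } else {                       // k > 3: chain the clause using k-3 fresh variables
        int u = nextVar++;
        out.push_back({clause[0], clause[1], u});
        for (int t = 2; t < k - 2; ++t) {
            int next = nextVar++;
            out.push_back({-u, clause[t], next});
            u = next;
        }
        out.push_back({-u, clause[k - 2], clause[k - 1]});
    }
    return out;
}

Applying this to every clause of \(S\), with nextVar starting just past the highest original variable index, yields the formula \(S'\) from the proof.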

2-SAT

Now we come to the edge. With one more literal per clause, the problem is NP-complete; with one fewer, it is extremely easy. What happens in the \(k=2\) case? As I hinted at before, it is also easy. Not extremely easy, but nonetheless it is solidly in P.

To prove this, let us first introduce a useful tool. Given a 2-SAT formula \(S = c_1 \land c_2 \land \dots \land c_n\), using \(m\) variables, we can construct the following directed graph \(G = (V, E)\):

\[V = \{x_1, \lnot x_1, x_2, \lnot x_2, \dots, x_m, \lnot x_m\}\] \[E = \{(\lnot A, B), (\lnot B, A) \mid (A \lor B) \in S\}\]

Here, \(A\) and \(B\) stand for literals, and \(\lnot A\) is the complementary literal. This graph \(G\) is called the implication graph of \(S\), and it is easy to see why: the clause \(A \lor B\) is equivalent to both \(\lnot A \implies B\) and \(\lnot B \implies A\). The edges of \(G\) are exactly these implications. Thus, our goal of satisfying \(S\) is the same as assigning values to each of the variables with a function \(f\) such that, whenever there is an edge \((A, B)\), either \(f(A) = false\) or \(f(B) = true\).

First, an example. Say we have the 2-SAT formula \((a \lor b) \land (a \lor \lnot c) \land (b \lor c)\). The corresponding implication graph has the vertices \(a, \lnot a, b, \lnot b, c, \lnot c\) and the edges \(\lnot a \to b\), \(\lnot b \to a\), \(\lnot a \to \lnot c\), \(c \to a\), \(\lnot b \to c\), and \(\lnot c \to b\).

Let us now note something. Suppose there is a directed path from some \(x_i\) to \(\lnot x_i\) in \(G\). Then, if \(f\) is a satisfying assignment of values, \(f(x_i)\) cannot possibly be true: if we set \(x_i\) to true, every literal it has an edge to must also be true under \(f\), and their successors must also be true, and so on along the path. Eventually, we get that if \(f(x_i) = true\), then \(f(\lnot x_i) = true\), which is impossible, because \(f\) is an assignment of values, and thus it must be the case that \(f(\lnot x_i) = \lnot f(x_i)\).

The same obviously applies if there’s a path from \(\lnot x_i\) to \(x_i\).

Thus, if we ever find an \(x_i\) such that there is a directed path (which we’ll indicate with \(\leadsto\)) \(x_i \leadsto \lnot x_i\), and there is also a directed path \(\lnot x_i \leadsto x_i\), then \(S\) cannot be satisfiable: no assignment \(f\) can satisfy it, since \(f(x_i)\) could be neither \(true\) nor \(false\).

The claim is that this condition is also sufficient: if there is no such contradiction, then the formula is satisfiable. To see this, I will propose the following algorithm to construct a satisfying assignment, assuming there is no \(x_i\) with both \(x_i \leadsto \lnot x_i\) and \(\lnot x_i \leadsto x_i\):

Pick a literal T, whose variable has not yet been assigned, such that there's no path from T to its negation ~T.
Set f(T) = true.
Set f(X) = true for every literal X reachable from T, and f(~X) = false.
Repeat from the top until every variable has been assigned a value.

The proof that this algorithm yields a satisfying assignment of variables \(f\) is left to the reader. It is not hard, but you’re likely to need the following lemma:

Lemma. If \(A \leadsto B\), then \(\lnot B \leadsto \lnot A\).

This is the usual contrapositive, which makes another appearance here as directed paths in a graph. (It holds because, by construction, every edge \((A, B)\) of \(G\) comes paired with the edge \((\lnot B, \lnot A)\), so any path can be negated and reversed.) With that lemma in hand, you should be able to prove that the algorithm:

  • Never has a conflict in assigning variables. That is, if I had previously assigned \(f(T) = true\), I will never want to assign \(f(T) = false\), and vice versa.
  • Is polynomial.
  • Terminates, and when it does, its result is a satisfying \(f\).

It is a short and relatively straightforward proof, so I recommend you give it a shot.

As an example, here is the code for a quadratic C++11 2-SAT solver (which could be made linear using Kosaraju’s or Tarjan’s strongly connected components algorithms). It receives, as input from the standard input, an integer \(n\) representing the number of test cases. Then, for each of the \(n\) test cases, it first reads the number of variables and the number of clauses \(m\). It then accepts \(m\) pairs of signed literals representing the \(m\) clauses: +A -C means \((a \lor \lnot c)\). The variables are uppercase ASCII letters. It returns an assignment of variables that satisfies the formula, or reports that the given formula is unsatisfiable. A sample input is here.
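That code isn’t reproduced here, but a minimal C++ sketch of the quadratic approach described above might look like the following. The literal encoding (variable \(v\) gets graph nodes \(2v\) for \(x_v\) and \(2v+1\) for \(\lnot x_v\)) is an assumption made for this sketch, and it skips the input parsing of the linked solver.

#include <iostream>
#include <vector>

// Quadratic 2-SAT via the implication graph: for each unassigned variable,
// pick whichever of its two literals has no path to its negation, then set
// everything reachable from that literal to true.
struct TwoSat {
    int n;                                       // number of variables
    std::vector<std::vector<int>> adj;           // implication graph on 2n literal nodes
    explicit TwoSat(int vars) : n(vars), adj(2 * vars) {}

    static int node(int var, bool negated) { return 2 * var + (negated ? 1 : 0); }

    void addClause(int va, bool na, int vb, bool nb) {   // clause (a v b)
        adj[node(va, !na)].push_back(node(vb, nb));      // !a -> b
        adj[node(vb, !nb)].push_back(node(va, na));      // !b -> a
    }

    // Mark every literal node reachable from `from`.
    void reachable(int from, std::vector<bool>& seen) const {
        seen[from] = true;
        for (int next : adj[from])
            if (!seen[next]) reachable(next, seen);
    }

    // Returns true and fills `assignment` if satisfiable; false otherwise.
    bool solve(std::vector<bool>& assignment) const {
        assignment.assign(n, false);
        std::vector<bool> assigned(n, false);
        for (int v = 0; v < n; ++v) {
            if (assigned[v]) continue;
            std::vector<bool> pos(2 * n, false), neg(2 * n, false);
            reachable(node(v, false), pos);
            reachable(node(v, true), neg);
            int start;
            if (!pos[node(v, true)])       start = node(v, false);
            else if (!neg[node(v, false)]) start = node(v, true);
            else return false;             // x ~> !x and !x ~> x: unsatisfiable
            const std::vector<bool>& seen = (start == node(v, false)) ? pos : neg;
            // Everything reachable from the chosen literal is set to true.
            for (int lit = 0; lit < 2 * n; ++lit) {
                if (!seen[lit]) continue;
                int var = lit / 2;
                if (!assigned[var]) {
                    assigned[var] = true;
                    assignment[var] = (lit % 2 == 0);   // positive literal reachable
                }
            }
        }
        return true;
    }
};

int main() {
    // (a v b) && (a v !c) && (b v c), with a=0, b=1, c=2.
    TwoSat solver(3);
    solver.addClause(0, false, 1, false);
    solver.addClause(0, false, 2, true);
    solver.addClause(1, false, 2, false);
    std::vector<bool> assignment;
    if (solver.solve(assignment))
        for (int v = 0; v < solver.n; ++v)
            std::cout << char('a' + v) << " = " << (assignment[v] ? "true" : "false") << "\n";
    else
        std::cout << "unsatisfiable\n";
}

Running it on the example formula from earlier prints a satisfying assignment with a, b and c all true.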

Conclusion

I hope you’ve found the proofs and algorithms above interesting. I recommend trying to reduce 3-SAT to some other problems in order to prove them NP-hard; some classic ones are CLIQUE and VERTEX-COVER. Another straightforward but entertaining exercise is to write a simple SAT solver: an exponential backtracking algorithm can be written in about 10 lines of C++. You can also look at things like GRASP for a much more developed SAT solver.
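If you want to attempt that last exercise, here is one rough sketch (somewhat more verbose than 10 lines) of such a backtracking solver. The clause representation as signed integers is an assumption made for the example.

#include <cstdlib>
#include <iostream>
#include <vector>

// Exponential backtracking SAT solver sketch. Clauses hold literals as signed
// ints (+i is x_i, -i is !x_i); the representation is an assumption of this example.
using Formula = std::vector<std::vector<int>>;

bool clauseTrue(const std::vector<int>& clause, const std::vector<int>& assign) {
    for (int lit : clause) {
        int val = assign[std::abs(lit)];
        if (val == (lit > 0 ? 1 : 0)) return true;
    }
    return false;
}

// Try both values for variable `var`, recursing on the remaining variables.
bool solve(const Formula& f, std::vector<int>& assign, int var) {
    if (var == (int)assign.size()) {
        for (const auto& c : f)
            if (!clauseTrue(c, assign)) return false;
        return true;
    }
    for (int val : {0, 1}) {
        assign[var] = val;
        if (solve(f, assign, var + 1)) return true;
    }
    return false;
}

int main() {
    // (x1 v !x2) && (x2 v x3) && (!x1 v !x3)
    Formula f = {{1, -2}, {2, 3}, {-1, -3}};
    std::vector<int> assign(4, 0);       // index 0 unused; variables are 1..3
    std::cout << (solve(f, assign, 1) ? "satisfiable" : "unsatisfiable") << "\n";
}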

As always, if you have any questions or comments, you can leave them below.

Cheers.