
Linear programming (LP), LP relaxations, and rounding

In this post, we’ll review linear systems and linear programming. We’ll then focus on how to use LP relaxations to provide approximate solutions to other (binary integer) problems that are NP-hard. Much of this post follows these randomized algorithms course notes [1].

Linear programming

One of the most basic ways of thinking about a problem is in terms of linear equations. It turns out that linear problems are easy to solve, whereas nonlinear problems are often hard.

We can express a system of linear equations in matrix form as follows. Let $A \in \mathbb{R}^{m \times n}$ be the matrix of coefficients, $x \in \mathbb{R}^n$ a vector of variables (that we are interested in), and $b \in \mathbb{R}^m$ a vector of real numbers. We can then express a system of $m$ equations in $n$ variables as the matrix equation $Ax = b$.
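
As a quick illustration (with made-up numbers), such a system can be written and solved directly in code when $A$ is square and invertible:

```python
# A tiny sketch of a linear system Ax = b in matrix form (the numbers are made up).
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])       # m x n coefficient matrix (here m = n = 2)
b = np.array([3.0, 5.0])         # right-hand side

x = np.linalg.solve(A, b)        # unique solution when A is square and invertible
print(x, np.allclose(A @ x, b))  # check that Ax = b
```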

We can also consider a system of linear inequalities by replacing some of the equalities with inequalities; i.e., $Ax \le b$, where $\le$ denotes component-wise inequality. (Throughout this post, we will overload the symbol $\le$ and other inequality symbols to denote both component-wise and scalar inequalities.) The set of solutions is called the feasible region and is a convex polytope. In the figure below, the edges of the convex polytope are given by the linear inequalities, and any point in this feasible region satisfies the set of inequalities.

(Figure: the feasible region defined by a set of linear inequalities is a convex polytope. Image source: EMT6680)

We are often interested in optimizing (e.g., minimizing or maximizing) some linear function subject to a set of linear inequalities (i.e., constraints). This is called a linear program (LP), and in its most general form it can be expressed as: minimize $c^\top x$ subject to $Ax \ge b$, $x \ge 0$. Here $c, b$ are known vectors of coefficients, $A$ is a known matrix of coefficients, and $x$ is the unknown vector of the variables of interest.
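
As a concrete sketch (the numbers in `c`, `A`, and `b` are arbitrary), an LP of this form can be solved with SciPy's `linprog`, which expects "less-than-or-equal" constraints:

```python
# A minimal sketch of solving a small LP: minimize c.x subject to Ax >= b, x >= 0.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])            # objective coefficients
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])          # constraint coefficients
b = np.array([2.0, 3.0])            # right-hand sides

# linprog expects "<=" constraints, so Ax >= b is passed as -Ax <= -b.
res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)
print(res.x, res.fun)               # optimal point and optimal objective value
```

In this example the optimum is attained at a vertex of the feasible region, matching the geometric picture above.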

(Fun history fact: linear programming was invented in 1939 by the Russian mathematician Leonid Kantorovich to optimize the organization of industrial production and allocation of a society’s resources.)

The optimal solution $x^*$ of the objective function (subject to the constraints) will be some vertex/corner/extreme point of the convex feasible region. The simplex algorithm is a method that enumerates the vertices one by one, decreasing the objective function at every step.

LP relaxations and approximation algorithms

In a number of problems, the unknown variables are required to be integers, which defines an integer program. Unlike linear programs, which can be solved efficiently, integer programs, while often better models of practical problems, are NP-hard in general. However, we can relax these integer programs into linear programs by turning the integer constraints into linear inequalities.

Here we will look at three examples of LP relaxations. The first is an example where, even though we solve the relaxed LP, we still end up with an integer solution. The other examples require some approximation, achieved by rounding.

The assignment problem

We are interested in assigning $n$ jobs to $n$ factories, where each job has some cost (raw materials, etc.). Let $c_{ij}$ denote the cost of assigning job $i$ to factory $j$, and let $x_{ij}$ denote the assignment of job $i$ to factory $j$. Thus $x_{ij}$ is a binary-valued variable, and the problem corresponds to a binary integer program.

To write the relaxed LP, we turn this binary constraint into an inequality: $x_{ij} \ge 0$ and $x_{ij} \le 1$ for all $i, j \in [n] = \{1, \dots, n\}$. But we also want each job assigned to one factory and each factory to receive one job. So, we add the constraint $\sum_{j=1}^n x_{ij} = 1$ for each job $i \in [n]$ to guarantee that each job $i$ is assigned to only one factory, and the constraint $\sum_{i=1}^n x_{ij} = 1$ for all $j \in [n]$ to ensure that each factory only gets a single job. This results in the relaxed LP

minimize $\sum_{i, j \in [n]} c_{ij} x_{ij}$
subject to $0 \le x_{ij} \le 1, \quad i, j \in [n]$
$\sum_{j=1}^n x_{ij} = 1, \quad i \in [n]$
$\sum_{i=1}^n x_{ij} = 1, \quad j \in [n]$

This LP turns out to have an optimal solution in which the variables are all binary (the constraint matrix is totally unimodular, so every vertex of the feasible polytope is integral), and so it actually solves the assignment problem.
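
To see this concretely, here is a small sketch (not from the notes) that builds this LP for a random cost matrix with SciPy's `linprog`; the optimal vertex it returns is a 0/1 permutation matrix:

```python
# A sketch of the assignment LP; the cost matrix is random, and the point is
# that the LP optimum is already integral (a permutation matrix).
import numpy as np
from scipy.optimize import linprog

n = 3
rng = np.random.default_rng(0)
c = rng.random((n, n))                 # c[i, j]: cost of assigning job i to factory j

# Equality constraints: each job to exactly one factory, each factory gets one job.
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0   # sum_j x[i, j] = 1
for j in range(n):
    A_eq[n + j, j::n] = 1.0            # sum_i x[i, j] = 1
b_eq = np.ones(2 * n)

res = linprog(c.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * (n * n))
x = res.x.reshape(n, n)
print(np.round(x, 3))                  # a 0/1 permutation matrix (up to tolerance)
```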

However, in general, solving the LP produces a fractional solution instead of an integer one. Thus, to obtain a feasible solution to the original integer program, we can round the fractional solution to an integer solution. We now discuss examples where the LP relaxation gives us a fractional solution and how we might round it (e.g., deterministically with a threshold, or with randomized rounding).

Approximate rounding solutions

Deterministic rounding

The simplest way to round a fractional solution to a binary one is by picking some threshold, e.g., $1/2$, and rounding up or down depending on whether the variable is greater than or less than the threshold. We now consider the weighted vertex cover problem, which is NP-hard, and show that this deterministic rounding procedure gives us a "good enough" approximate solution; i.e., a vertex cover whose total weight is within a factor of 2 of the optimum.

LP relaxation for the weighted vertex cover problem

In the weighted vertex cover problem, we are given a graph $G = (V, E)$ and a weight $w_i \ge 0$ for each vertex $i$. A vertex cover is a subset $S \subseteq V$ such that every edge $e \in E$ has at least one endpoint in $S$. The optimization problem is: find a vertex cover with minimum total weight. The relaxed LP is:

minimize $f(x) = w^\top x$
subject to $0 \le x \le 1$
$x_i + x_j \ge 1, \quad \{i, j\} \in E$

The first line is the objective, i.e., the total weight, where $x_i$ is a variable that represents whether vertex $i$ is in the cover $S$. The second line gives the relaxed inequality, and the last line guarantees that we have a vertex cover, i.e., that for every edge $\{i, j\} \in E$, we have at least one endpoint in the cover (either $x_i = 1$ and/or $x_j = 1$).
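
As a sketch of the full pipeline on a made-up example graph, the following solves this LP with SciPy's `linprog` and then applies the 1/2-threshold rounding described above:

```python
# Vertex cover LP relaxation plus deterministic rounding at 1/2 (example graph).
import numpy as np
from scipy.optimize import linprog

w = np.array([1.0, 2.0, 1.0, 3.0, 2.0])              # vertex weights
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
n = len(w)

# One constraint per edge: x_i + x_j >= 1, written as -(x_i + x_j) <= -1.
A_ub = np.zeros((len(edges), n))
for k, (i, j) in enumerate(edges):
    A_ub[k, i] = A_ub[k, j] = -1.0
b_ub = -np.ones(len(edges))

res = linprog(w, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)
x_frac = res.x

# Deterministic rounding: keep every vertex whose fractional value is >= 1/2.
S = [i for i in range(n) if x_frac[i] >= 0.5]
print("LP value:", res.fun)
print("cover:", S, "weight:", sum(w[i] for i in S))  # at most 2x the LP value
```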

How good is deterministic rounding?

Suppose $O^*$ is the optimal value of this LP, and let $V^*$ be the minimum weight of the vertex cover problem. We have that $O^* \le V^*$, since every binary integer solution is also a feasible fractional solution.

Now apply deterministic rounding to get a new set $S$, where every vertex $i$ with $x_i \ge \tfrac{1}{2}$ is in the set, and otherwise it is not in $S$. This produces a vertex cover because for every edge $\{i, j\} \in E$, we must satisfy the constraint $x_i + x_j \ge 1$, which implies that $x_i \ge \tfrac{1}{2}$ and/or $x_j \ge \tfrac{1}{2}$.

Since the optimal value at the solution is $O^* = \sum_{i=1}^n w_i x_i$, and the rounding only picks vertices with $x_i \ge \tfrac{1}{2}$ (each contributing weight $w_i \le 2 w_i x_i$), the resulting weight of $S$ is at most twice the optimal value of the LP, i.e., $2 O^* \le 2 V^*$. Thus, we have a vertex cover with cost within a factor of 2 of the optimum cost.

Randomized rounding

Instead of the deterministic thresholding procedure, we could consider a randomized rounding procedure. One simple randomized rounding procedure is to round the fractional variable $x_i$ up to 1 with probability $x_i$ (and down to 0 with probability $1 - x_i$). The rounded variable therefore has expectation $x_i$, and by linearity of expectation, any linear constraint (e.g., $c^\top x = d$) that the fractional $x$ satisfies is also satisfied in expectation by the rounded variables.
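
As a tiny sketch (with a made-up fractional vector), randomized rounding and its expectation property look like this:

```python
# Randomized rounding: each coordinate is rounded to 1 with probability equal
# to its fractional value (x_frac here is just an illustrative vector).
import numpy as np

rng = np.random.default_rng(0)
x_frac = np.array([0.2, 0.5, 0.9, 0.4])     # fractional LP solution (example)
x_round = (rng.random(x_frac.shape) < x_frac).astype(int)

# In expectation, the rounded vector equals x_frac; check empirically:
samples = (rng.random((10000, x_frac.size)) < x_frac).astype(int)
print(x_round, samples.mean(axis=0))        # the column means are close to x_frac
```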

We will consider using this randomized rounding procedure for the MAX-2SAT problem. In this problem, we have $n$ boolean variables $x_1, \dots, x_n$ and a set of clauses of (at most) two literals $y \vee z$, where each literal is either some $x_i$ or its negation $\neg x_i$. The goal is to find an assignment that maximizes the number of satisfied clauses $y \vee z$ (i.e., clauses that evaluate to true).

LP relaxation for MAX-2SAT

For this problem, we will form the following LP relaxation. Let $J$ be the set of clauses, and $y_j = (y_{j1}, y_{j2})$ the literals in clause $j$; so $y_{j1}$ is $x_i$ if the first literal in clause $j$ is $x_i$, and $1 - x_i$ if it is the negation $\neg x_i$.

We will define a variable $z_j$ for each clause $j$, where $z_j = \begin{cases} 1 & \text{if clause } j \text{ is satisfied} \\ 0 & \text{otherwise.} \end{cases}$

Taking the sum over all $z_j$ then gives the total number of satisfied clauses, which defines our objective function.

So, we can express this as the relaxed LP

maximize $\sum_{j \in J} z_j$
subject to $\mathbf{1} \ge x \ge \mathbf{0}$
$\mathbf{0} \le z \le \mathbf{1}$
$\mathbf{1}^\top y_j \ge z_j, \quad \forall j,$

where $\mathbf{1} := [1, 1]^\top$ in the last constraint, so that $\mathbf{1}^\top y_j = y_{j1} + y_{j2}$. Here $x$ and $z$ are both relaxed to be fractional, and the last constraint ensures that the logic of the clauses holds.
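
Putting the pieces together, here is a sketch (with a made-up set of clauses) that builds this LP with SciPy's `linprog` and then applies randomized rounding to the fractional $x$; representing literals as signed indices is just a convention for this example:

```python
# MAX-2SAT LP relaxation plus randomized rounding. Literal +i means x_i and
# literal -i means "not x_i" (variables indexed from 1 so the sign is meaningful).
import numpy as np
from scipy.optimize import linprog

n = 3
clauses = [(1, 2), (-1, 3), (2, -3), (-2, -3)]   # example clauses
m = len(clauses)

# LP variables: [x_1..x_n, z_1..z_m]; maximize sum z_j  <=>  minimize -sum z_j.
c = np.concatenate([np.zeros(n), -np.ones(m)])

# Clause j with literals a, b:  lit(a) + lit(b) >= z_j, where lit(+i) = x_i and
# lit(-i) = 1 - x_i.  Rewritten in "<=" form for linprog, constants on the right.
A_ub = np.zeros((m, n + m))
b_ub = np.zeros(m)
for j, (a, b) in enumerate(clauses):
    for lit in (a, b):
        i = abs(lit) - 1
        if lit > 0:
            A_ub[j, i] -= 1.0        # -x_i on the left
        else:
            A_ub[j, i] += 1.0        # +x_i on the left ...
            b_ub[j] += 1.0           # ... and the constant 1 moves to the right
    A_ub[j, n + j] = 1.0             # +z_j on the left

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * (n + m))
x_frac, z_frac = res.x[:n], res.x[n:]

# Randomized rounding of x, then count how many clauses the 0/1 assignment satisfies.
rng = np.random.default_rng(0)
x_round = rng.random(n) < x_frac
satisfied = sum(
    any((x_round[abs(l) - 1] if l > 0 else not x_round[abs(l) - 1]) for l in (a, b))
    for a, b in clauses
)
print("LP value:", z_frac.sum(), "satisfied after rounding:", satisfied)
```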

How good is randomized rounding?

Now let $O^*$ be the optimal value of this LP, and let $M^*$ denote the number of clauses satisfied by the best assignment in MAX-2SAT. We have that $M^* \le O^*$.

Now apply randomized rounding to the fractional solution to get a 0/1 assignment. It turns out that the expected number of clauses satisfied is at least $\tfrac{3}{4}$ of the optimal value of the LP, $O^*$.

Clauses $j$ can be either of size 1 or size 2. If clause $j$ is of size 1, say the single literal $x_r$, then the probability it is satisfied is $x_r$. But we must have $x_r \ge z_j$ from the constraints, so the probability this clause is satisfied is at least $z_j \ge \tfrac{3}{4} z_j$.

Now suppose we have a clause of size 2, e.g., $x_r \vee x_s$. Note that the probability of success after randomized rounding is 1 minus the probability that both variables are rounded to 0, i.e., $1 - (1 - x_r)(1 - x_s) = x_r + x_s - x_r x_s$. Since we know from the constraint that $z_j \le x_r + x_s$, we can write that the probability of success is at least
$x_r + x_s - x_r x_s \;\ge\; (x_r + x_s) - \tfrac{1}{4}(x_r + x_s)^2 \;\ge\; z_j - \tfrac{1}{4} z_j^2 \;\ge\; \tfrac{3}{4} z_j,$
using $x_r x_s \le \tfrac{1}{4}(x_r + x_s)^2$, the fact that $t - \tfrac{1}{4} t^2$ is increasing for $t \le 2$, and $z_j \le 1$.
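
As a quick numerical sanity check of this bound (not part of the original argument), we can verify on a grid that $1 - (1 - x_r)(1 - x_s) \ge \tfrac{3}{4} \min(1, x_r + x_s)$, noting that under the constraints $z_j$ can be at most $\min(1, x_r + x_s)$:

```python
# Grid check of the 3/4 bound: 1 - (1 - a)(1 - b) >= (3/4) * min(1, a + b)
# for all a, b in [0, 1]; z_j is at most min(1, a + b) under the LP constraints.
import numpy as np

a, b = np.meshgrid(np.linspace(0, 1, 201), np.linspace(0, 1, 201))
lhs = 1 - (1 - a) * (1 - b)
rhs = 0.75 * np.minimum(1.0, a + b)
print("bound holds everywhere:", bool(np.all(lhs >= rhs - 1e-12)))
```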

So, since each clause is individually satisfied with probability at least $\tfrac{3}{4} z_j$, by linearity of expectation, the expected number of clauses satisfied is at least $\tfrac{3}{4} \sum_{j \in J} z_j = \tfrac{3}{4} O^*$.

Summary

Here we have discussed approximate solutions to integer programs via LP relaxations and two simple rounding schemes, deterministic and randomized rounding, applied to problems where they give easy-to-analyze approximate solutions that are within a constant factor of the optimum. Other problems likely require more complicated rounding schemes; many more rounding schemes and algorithms can be found in Williamson and Shmoys (2010) [2].

Resources

  1. Arora, Kothari, Weinberg (2017). Advanced Algorithm Design Notes

  2. Williamson and Shmoys (2010). Design of Approximation Algorithms
