Queuing design

In this example, we consider the design of an \(M/M/N\) queuing system, and we study the sensitivity of the design with respect to various parameters, such as the maximum total delay and total service rate.

This example was described in the paper Differentiating through Log-Log Convex Programs.

We consider the optimization of a (Markovian) queuing system, with \(N\) queues. A queuing system is a collection of queues, in which queued items wait to be served; the queued items might be threads in an operating system, or packets in an input or output buffer of a networking system. A natural goal is to minimize the service load of the system, given constraints on various properties of the queuing system, such as limits on the maximum delay or latency. In this example, we formulate this design problem as a log-log convex program (LLCP), and compute the sensitivity of the design variables with respect to the parameters. The queuing system under consideration here is known as an \(M/M/N\) queue.

We assume that items arriving at the \(i\)th queue are generated by a Poisson process with rate \(\lambda_i\), and that the service times for the \(i\)th queue follow an exponential distribution with parameter \(\mu_i\), for \(i=1, \ldots, N\). The service load of the queuing system is a function \(\ell : \mathbf{R}^{N}_{++} \times \mathbf{R}^{N}_{++} \to \mathbf{R}^{N}_{++}\) of the arrival rate vector \(\lambda\) and the service rate vector \(\mu\), with components

\[\ell_i(\lambda, \mu) = \frac{\mu_i}{\lambda_i}, \quad i=1, \ldots, N.\]

(This is the reciprocal of the traffic load, which is usually denoted by \(\rho\).) Similarly, the queue occupancy, the average delay, and the total delay of the system are (respectively) functions \(q\), \(w\), and \(d\) of \(\lambda\) and \(\mu\), with components

\[q_i(\lambda, \mu) = \frac{\ell_i(\lambda, \mu)^{-2}}{1 - \ell_i(\lambda, \mu)^{-1}}, \quad w_i(\lambda, \mu) = \frac{q_i(\lambda, \mu)}{\lambda_i} + \frac{1}{\mu_i}, \quad d_i(\lambda, \mu) = \frac{1}{\mu_i - \lambda_i}, \quad i=1, \ldots, N.\]

These functions have domain \(\{(\lambda, \mu) \in \mathbf{R}^{N}_{++} \times \mathbf{R}^{N}_{++} \mid \lambda < \mu \}\), where the inequality is meant elementwise. The queuing system has limits on the queue occupancy, average queuing delay, and total delay, which must satisfy

\[q(\lambda, \mu) \leq q_{\max}, \quad w(\lambda, \mu) \leq w_{\max}, \quad d(\lambda, \mu) \leq d_{\max},\]

where \(q_{\max}\), \(w_{\max}\), and \(d_{\max} \in \mathbf{R}^{N}_{++}\) are parameters and the inequalities are meant elementwise. Additionally, the arrival rate vector \(\lambda\) must be at least \(\lambda_{\mathrm{min}} \in \mathbf{R}^{N}_{++}\), and the sum of the service rates must be no greater than \(\mu_{\max} \in \mathbf{R}_{++}\).
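
To make these definitions concrete, the following small sketch (our addition; the sample rates are arbitrary, chosen with \(\lambda < \mu\) elementwise) evaluates the four functions with NumPy.

import numpy as np

lam = np.array([0.8, 1.2])       # arrival rates (lambda < mu elementwise)
mu = np.array([1.3, 1.7])        # service rates

ell = mu / lam                   # service load (reciprocal of traffic load rho)
q = ell**(-2) / (1 - ell**(-1))  # queue occupancy
w = q / lam + 1 / mu             # average delay
d = 1 / (mu - lam)               # total delay

print(ell, q, w, d)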

Our design problem is to choose the arrival rates and service rates to minimize a weighted sum of the service loads, \(\gamma^T \ell(\lambda, \mu)\), where \(\gamma \in \mathbf{R}^{N}_{++}\) is the weight vector, while satisfying the constraints. The problem is

\[\begin{split}\begin{array}{ll} \mbox{minimize} & \gamma^T \ell(\lambda, \mu) \\ \mbox{subject to} & q(\lambda, \mu) \leq q_{\max} \\ & w(\lambda, \mu) \leq w_{\max} \\ & d(\lambda, \mu) \leq d_{\max} \\ & \lambda \geq \lambda_{\mathrm{min}}, \quad \sum_{i=1}^{N} \mu_i \leq \mu_{\max}. \end{array}\end{split}\]

Here, \(\lambda, \mu \in \mathbf{R}^{N}_{++}\) are the variables and \(\gamma, q_{\max}, w_{\max}, d_{\max}, \lambda_{\mathrm{min}} \in \mathbf{R}^{N}_{++}\) and \(\mu_{\max} \in \mathbf{R}_{++}\) are the parameters. This problem is an LLCP. The objective function is a posynomial, as is the constraint function \(w\). The functions \(d\) and \(q\) are not posynomials, but they are log-log convex; log-log convexity of \(d\) follows from the composition rule, since the function \((x, y) \mapsto y - x\) is log-log concave (for \(0 < x < y\)) and the function \(z \mapsto 1/z\) is log-log affine and decreasing, so the composition \((x, y) \mapsto 1/(y - x)\) is log-log convex. By a similar argument, \(q\) is also log-log convex.
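
These curvature claims can be checked programmatically with CVXPY's DGP analysis; the following sketch (our addition) verifies that \(y - x\) is log-log concave and that \(1/(y - x)\) is log-log convex.

import cvxpy as cp

x = cp.Variable(pos=True)
y = cp.Variable(pos=True)

print(cp.diff_pos(y, x).is_log_log_concave())       # True: y - x, on 0 < x < y
print((1 / cp.diff_pos(y, x)).is_log_log_convex())  # True, by the composition rule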

import cvxpy as cp
import numpy as np


# Variables: service rates, arrival rates, and service loads.
mu = cp.Variable(pos=True, shape=(2,), name='mu')
lam = cp.Variable(pos=True, shape=(2,), name='lambda')
ell = cp.Variable(pos=True, shape=(2,), name='ell')

# Parameters, with values giving a concrete problem instance.
w_max = cp.Parameter(pos=True, shape=(2,), value=np.array([2.5, 3.0]), name='w_max')
d_max = cp.Parameter(pos=True, shape=(2,), value=np.array([2., 2.]), name='d_max')
q_max = cp.Parameter(pos=True, shape=(2,), value=np.array([4., 5.0]), name='q_max')
lam_min = cp.Parameter(pos=True, shape=(2,), value=np.array([0.5, 0.8]), name='lambda_min')
mu_max = cp.Parameter(pos=True, value=3.0, name='mu_max')
gamma = cp.Parameter(pos=True, shape=(2,), value=np.array([1.0, 2.0]), name='gamma')

# Queue occupancy, average delay, and total delay, as DGP expressions.
q = ell**(-2) / cp.one_minus_pos(ell**(-1))
w = q/lam + 1/mu
d = 1/cp.diff_pos(mu, lam)

constraints = [
    w <= w_max,
    d <= d_max,
    q <= q_max,
    lam >= lam_min,
    cp.sum(mu) <= mu_max,
    ell == mu/lam,
]

objective_fn = gamma.T @ ell

problem = cp.Problem(cp.Minimize(objective_fn), constraints)
problem.solve(requires_grad=True, gp=True, eps=1e-14, max_iters=10000, mode='dense')
4.457106781186705

The solution is printed below.

print('mu ', mu.value)
print('lam ', lam.value)
print('ell ', ell.value)
mu  [1.32842713 1.67157287]
lam  [0.82842712 1.17157287]
ell  [1.60355339 1.4267767 ]
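
Note that both the service-rate budget and the total-delay constraint are tight at the solution: \(\sum_i \mu_i = \mu_{\max} = 3\), and \(\mu_i - \lambda_i = 0.5 = 1/d_{\max,i}\) for both queues. This foreshadows the sensitivity results below.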

Sensitivities

We compute the derivative of each variable with respect to the parameters. One takeaway is that the solution is highly sensitive to the values of \(d_{\max}\) and \(\mu_{\max}\), and that increasing these parameters would decrease the service loads, especially on the first queue.

problem.solve(requires_grad=True, gp=True, eps=1e-14, max_iters=10000, mode='dense')

for var in [lam, mu, ell]:
    print('Variable ', var.name())
    print('Gradient with respect to first component')
    # Seed the backward pass with e_1; backward() then writes the gradient of
    # this solution component into each parameter's .gradient attribute.
    var.gradient = np.array([1., 0.])
    problem.backward()
    for param in problem.parameters():
        if np.prod(param.shape) == 2:
            print('{0}: {1:.3g}, {2:.3g}'.format(param.name(), param.gradient[0], param.gradient[1]))
        else:
            print('{0}: {1:.3g}'.format(param.name(), param.gradient))

    print('Gradient with respect to second component')
    var.gradient = np.array([0., 1.])
    problem.backward()
    for param in problem.parameters():
        if np.prod(param.shape) == 2:
            print('{0}: {1:.3g}, {2:.3g}'.format(param.name(), param.gradient[0], param.gradient[1]))
        else:
            print('{0}: {1:.3g}'.format(param.name(), param.gradient))

    # Reset this variable's gradient before moving on to the next variable.
    var.gradient = np.zeros(2)
    print('')
Variable  lambda
Gradient with respect to first component
gamma: 0.213, -0.107
w_max: 5.43e-12, 5.64e-12
d_max: -0.411, -0.113
q_max: 5.99e-12, 4.77e-12
lambda_min: -1.56e-11, -7.35e-12
mu_max: 0.927
Gradient with respect to second component
gamma: -0.458, 0.229
w_max: 2.08e-11, 2.16e-11
d_max: -0.105, -0.466
q_max: 2.29e-11, 1.83e-11
lambda_min: -5.97e-11, -2.82e-11
mu_max: 1.01

Variable  mu
Gradient with respect to first component
gamma: 0.213, -0.107
w_max: 1.55e-11, 1.6e-11
d_max: -0.661, -0.113
q_max: 1.7e-11, 1.36e-11
lambda_min: -4.43e-11, -2.09e-11
mu_max: -0.0727
Gradient with respect to second component
gamma: -0.458, 0.229
w_max: 2.3e-11, 2.39e-11
d_max: -0.105, -0.716
q_max: 2.53e-11, 2.02e-11
lambda_min: -6.59e-11, -3.11e-11
mu_max: 0.00996

Variable  ell
Gradient with respect to first component
gamma: -0.245, 0.122
w_max: 2e-11, 2.08e-11
d_max: -0.282, -0.22
q_max: 2.21e-11, 1.76e-11
lambda_min: -5.74e-11, -2.71e-11
mu_max: -0.334
Gradient with respect to second component
gamma: 0.122, -0.0611
w_max: -1.24e-13, -1.29e-13
d_max: -0.101, -0.195
q_max: -1.37e-13, -1.09e-13
lambda_min: 3.58e-13, 1.66e-13
mu_max: -0.197
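
As a sanity check on these gradients, we can compare one entry against a central finite difference. The sketch below is our addition (the step size h is an arbitrary choice): it perturbs the first component of \(d_{\max}\) and differences the first component of \(\lambda\), which should roughly reproduce the value of about \(-0.411\) reported above.

h = 1e-4
base = d_max.value.copy()

d_max.value = base + np.array([h, 0.])
problem.solve(cp.SCS, gp=True, eps=1e-14, max_iters=10000)
lam_plus = lam.value[0]

d_max.value = base - np.array([h, 0.])
problem.solve(cp.SCS, gp=True, eps=1e-14, max_iters=10000)
lam_minus = lam.value[0]

d_max.value = base  # restore the original value for the next section

print((lam_plus - lam_minus) / (2 * h))  # should be close to -0.411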

Perturbation analysis

Next, we perturb each parameter by a small amount, and compare the solution predicted by a first-order approximation with the true solution of the perturbed problem.

problem.solve(requires_grad=True, gp=True, eps=1e-14, max_iters=10000, mode='dense')

mu_value = mu.value
lam_value = lam.value

delta = 0.01
for param in problem.parameters():
    param.delta = param.value * delta

# derivative() propagates param.delta forward to var.delta, a first-order
# prediction of the change in each variable's value.
problem.derivative()

lam_pred = (lam.delta / lam_value) * 100
mu_pred = (mu.delta / mu_value) * 100

print('lam predicted (percent change): ', lam_pred)
print('mu predicted (percent change): ', mu_pred)


for param in problem.parameters():
    param.value += param.delta
problem.solve(cp.SCS, gp=True, eps=1e-14, max_iters=10000)

lam_actual = ((lam.value - lam_value) / lam_value) * 100
mu_actual = ((mu.value - mu_value) / mu_value) * 100

print('lam actual (percent change): ', lam_actual)
print('mu actual (percent change): ', mu_actual)
lam predicted (percent change):  [2.32203282 1.77228841]
mu predicted (percent change):  [1.07166961 0.94304296]
lam actual (percent change):  [1.99504983 1.99504965]
mu actual (percent change):  [0.87148458 1.10213353]
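
The first-order predictions agree with the actual percent changes to within a few tenths of a percentage point, a reasonable level of agreement for a one percent perturbation of all the parameters.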