Title: Optimally linearizing the augmented Lagrangian method and ADMM
Speaker: Prof. 袁晓明 (Hong Kong Baptist University)
Host: 王祥丰
Time: 10:30-11:30, Tuesday, December 19, 2017
Venue: Room 201, Mathematics Building, Zhongbei Campus
Abstract:
The augmented Lagrangian method (ALM) is fundamental for solving canonical convex programming models with linear constraints, and the alternating direction method of multipliers (ADMM) is a splitting version of the ALM suitable for separable convex programming models. In various applications, especially a variety of sparsity-driven ones, the subproblems of the ALM and ADMM must be linearized at each iteration to obtain easier subproblems (in the sense of admitting closed-form solutions) and to ease the implementation. The linearization parameter should be large enough to guarantee convergence theoretically, yet not so large that it forces small step sizes. In other words, we would ideally like to find the smallest linearization parameters that still ensure the convergence of the linearized versions of the ALM and ADMM. We show how to achieve this goal. The resulting linearized versions of the ALM and ADMM turn out to regularize their subproblems with positive-indefinite proximal terms. The parameters are smaller than those in the existing literature; theoretically they suffice to ensure the convergence of the resulting algorithms, and numerically they accelerate the convergence immediately. We also show that these parameters are optimal, in the sense that no further improvement is possible.
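As a concrete illustration (not part of the talk itself), the linearization described above can be sketched for the basis pursuit model min ||x||_1 s.t. Ax = b. The model choice, the conservative parameter choice tau = beta * ||A^T A|| (the classical sufficient condition, whose tightness is exactly the subject of the talk), and all variable names below are illustrative assumptions, not the speaker's implementation:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_alm_l1(A, b, beta=1.0, n_iter=500):
    # Linearized ALM sketch for basis pursuit: min ||x||_1 s.t. Ax = b.
    m, n = A.shape
    x = np.zeros(n)
    lam = np.zeros(m)
    # Conservative classical choice tau >= beta * ||A^T A|| (spectral norm
    # squared of A); how much smaller tau can safely be taken is the
    # question studied in the talk.
    tau = beta * np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        # Linearize the augmented quadratic term at the current x, keeping
        # the l1 term exact, so the subproblem reduces to soft-thresholding.
        grad = A.T @ (lam + beta * (A @ x - b))
        x = soft_threshold(x - grad / tau, 1.0 / tau)
        # Standard multiplier update.
        lam = lam + beta * (A @ x - b)
    return x
```

Because the quadratic is linearized, each iteration needs only matrix-vector products and a componentwise shrinkage, which is the "easier subproblem with a closed-form solution" mentioned in the abstract.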