Part 6: Optimization Problems in Visual SLAM and Jacobian Derivations

  • November 10, 2019
  • Notes

Defining and Solving the Optimization Problem

General Definition

Solving a problem always starts with defining it clearly. Here we adopt the definition used by g2o.

\[ \begin{aligned} \mathbf{F}(\mathbf{x})&=\sum_{k\in \mathcal{C}} \underbrace{\mathbf{e}_k(\mathbf{x}_k,\mathbf{z}_k)^\top \Omega_k\mathbf{e}_k(\mathbf{x}_k,\mathbf{z}_k)}_{\mathbf{F}_k} \\ \mathbf{x}^* &= \underset{\mathbf{x}}{\operatorname{argmin}}\,\mathbf{F}(\mathbf{x}) \end{aligned} \tag{1} \]

  • \(\mathbf{x}=(\mathbf{x}_1^\top,\dots,\mathbf{x}_n^\top)^\top\), with \(\mathbf{x}_i\in \mathbf{x}\), is the full parameter vector;
  • \(\mathbf{x}_k=(\mathbf{x}_{k_1}^\top,\dots,\mathbf{x}_{k_q}^\top)^\top\subset \mathbf{x}\) is the subset of parameters involved in the \(k\)-th constraint;
  • \(\mathbf{z}_k\) is the measurement vector, and \(\Omega_k\) is the information matrix of that measurement (the inverse of its covariance), a symmetric matrix;
  • \(\mathbf{e}_k(\mathbf{x}_k,\mathbf{z}_k)\) is the error function.

\(\mathbf{F}(\mathbf{x})\) is simply the total weighted squared measurement error. For simplicity, assume \(\Omega_k=\begin{bmatrix}1/\sigma_1^2&0 \\ 0 & 1/\sigma_2^2\end{bmatrix}\), i.e. independent measurement noise with standard deviations \(\sigma_1,\sigma_2\). We can regard \(\mathbf{F}_k(\mathbf{x})\) as the weighted squared error of a single measurement. Writing \(\mathbf{e}_k(\mathbf{x}_k,\mathbf{z}_k)=(e_1,e_2)^\top\) and expanding:

\[ \begin{aligned} \mathbf{F}_k(\mathbf{x})&=\mathbf{e}_k(\mathbf{x}_k,\mathbf{z}_k)^\top \Omega_k\mathbf{e}_k(\mathbf{x}_k,\mathbf{z}_k) \\ &=\frac{e_1^2}{\sigma_1^2}+\frac{e_2^2}{\sigma_2^2} \end{aligned} \]
The problem, then, is to find the parameter values that minimize this total squared measurement error.
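To make the definition concrete, here is a minimal numpy sketch that evaluates the cost (1) over a list of constraints. Everything in it (`total_cost`, the toy error functions, measurements, and information matrices) is an illustrative placeholder rather than any library's API.

```python
import numpy as np

def total_cost(x, constraints):
    """Evaluate F(x) = sum_k e_k(x_k, z_k)^T * Omega_k * e_k(x_k, z_k), i.e. (1)."""
    F = 0.0
    for error_fn, z_k, Omega_k in constraints:
        e_k = error_fn(x, z_k)          # residual vector e_k
        F += e_k @ Omega_k @ e_k        # e^T * Omega * e, a scalar
    return F

# Toy example: x is a 2D parameter and every constraint observes x directly.
constraints = [
    (lambda x, z: x - z, np.array([1.0, 2.0]), np.diag([1.0, 4.0])),
    (lambda x, z: x - z, np.array([1.2, 1.8]), np.eye(2)),
]
print(total_cost(np.array([1.1, 1.9]), constraints))
```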

Solving the Optimization Problem

Simplify the notation for the error function: \(\mathbf{e}_k(\mathbf{x}_k,\mathbf{z}_k) \overset{\text{def.}}{=} \mathbf{e}_k(\mathbf{x}_k) \overset{\text{def.}}{=} \mathbf{e}_k(\mathbf{x})\). Expand the error function in a first-order Taylor series around the current estimate \(\breve{\mathbf{x}}\):

\[ \begin{aligned} \mathbf{e}_k(\breve{\mathbf{x}}_k+\Delta\mathbf{x}_k) &=\mathbf{e}_k(\breve{\mathbf{x}}+\Delta\mathbf{x}) \\ &\simeq \mathbf{e}_k(\breve{\mathbf{x}})+\mathbf{J}_k\Delta\mathbf{x} \end{aligned} \tag{2} \]
where \(\mathbf{J}_k\) is the Jacobian of \(\mathbf{e}_k(\mathbf{x})\) evaluated at \(\breve{\mathbf{x}}\). Substituting into (1) gives:

\[ \begin{aligned} \mathbf{F}_k(\breve{\mathbf{x}}+\Delta\mathbf{x}) &= \mathbf{e}_k(\breve{\mathbf{x}}+\Delta\mathbf{x})^\top\Omega_k\mathbf{e}_k(\breve{\mathbf{x}}+\Delta\mathbf{x}) \\ &\simeq (\mathbf{e}_k(\breve{\mathbf{x}})+\mathbf{J}_k\Delta\mathbf{x})^\top\Omega_k(\mathbf{e}_k(\breve{\mathbf{x}})+\mathbf{J}_k\Delta\mathbf{x}) \\ &=\underbrace{(\mathbf{e}_k(\breve{\mathbf{x}})^\top+(\mathbf{J}_k\Delta\mathbf{x})^\top)}_{A^\top+B^\top = (A+B)^\top}\Omega_k(\mathbf{e}_k(\breve{\mathbf{x}})+\mathbf{J}_k\Delta\mathbf{x}) \\ &= \mathbf{e}_k(\breve{\mathbf{x}})^\top\Omega_k\mathbf{e}_k(\breve{\mathbf{x}})+\underbrace{\mathbf{e}_k(\breve{\mathbf{x}})^\top\Omega_k\mathbf{J}_k\Delta\mathbf{x}+(\mathbf{J}_k\Delta\mathbf{x})^\top\Omega_k\mathbf{e}_k(\breve{\mathbf{x}})}_{A^\top B = B^\top A \text{ when } A^\top B \text{ is a scalar}}+\Delta\mathbf{x}^\top\mathbf{J}_k^\top\Omega_k\mathbf{J}_k\Delta\mathbf{x} \\ &=\underbrace{\mathbf{e}_k(\breve{\mathbf{x}})^\top\Omega_k\mathbf{e}_k(\breve{\mathbf{x}})}_{\text{scalar } c_k}+2\underbrace{\mathbf{e}_k(\breve{\mathbf{x}})^\top\Omega_k\mathbf{J}_k}_{\text{row vector } \mathbf{b}_k^\top}\Delta\mathbf{x}+\Delta\mathbf{x}^\top\underbrace{\mathbf{J}_k^\top\Omega_k\mathbf{J}_k}_{\text{matrix } \mathbf{H}_k}\Delta\mathbf{x} \\ &=c_k+2\mathbf{b}_k^\top\Delta\mathbf{x}+\Delta\mathbf{x}^\top\mathbf{H}_k\Delta\mathbf{x} \end{aligned} \tag{3} \]
Therefore,

\[ \begin{aligned} \mathbf{F}(\breve{\mathbf{x}}+\Delta\mathbf{x}) &=\sum_{k\in \mathcal{C}} \mathbf{F}_k(\breve{\mathbf{x}}+\Delta\mathbf{x}) \\ &\simeq \sum_{k\in \mathcal{C}} \left(c_k+2\mathbf{b}_k^\top\Delta\mathbf{x}+\Delta\mathbf{x}^\top\mathbf{H}_k\Delta\mathbf{x}\right) \\ &= c+2\mathbf{b}^\top\Delta\mathbf{x}+\Delta\mathbf{x}^\top\mathbf{H}\Delta\mathbf{x} \end{aligned} \tag{4} \]
The problem now reduces to minimizing (4). Take the differential of the scalar \(\mathbf{F}(\breve{\mathbf{x}}+\Delta\mathbf{x})\):

\[ \begin{aligned} d\mathbf{F}(\breve{\mathbf{x}}+\Delta\mathbf{x}) &= 2\mathbf{b}^\top d(\Delta\mathbf{x}) + \underbrace{d(\Delta\mathbf{x}^\top)\mathbf{H}\Delta\mathbf{x}}_{d(X^\top) = (dX)^\top}+\Delta\mathbf{x}^\top\mathbf{H}\,d(\Delta\mathbf{x}) \\ &= 2\mathbf{b}^\top d(\Delta\mathbf{x}) + \underbrace{(d(\Delta\mathbf{x}))^\top\mathbf{H}\Delta\mathbf{x}}_{A^\top B = B^\top A \text{ when } A^\top B \text{ is a scalar}} + \Delta\mathbf{x}^\top\mathbf{H}\,d(\Delta\mathbf{x}) \\ &= 2\mathbf{b}^\top d(\Delta\mathbf{x}) + \underbrace{\Delta\mathbf{x}^\top\mathbf{H}^\top d(\Delta\mathbf{x}) + \Delta\mathbf{x}^\top\mathbf{H}\,d(\Delta\mathbf{x})}_{\Omega_k \text{ symmetric} \;\Rightarrow\; \mathbf{H} \text{ symmetric}} \\ &= 2(\mathbf{b}^\top + \Delta\mathbf{x}^\top\mathbf{H}^\top)\,d(\Delta\mathbf{x}) \\ &= 2(\mathbf{b} + \mathbf{H}\Delta\mathbf{x})^\top d(\Delta\mathbf{x}) \end{aligned} \]
Comparing with \(d\mathbf{F}=\left(\frac{\partial \mathbf{F}}{\partial \Delta\mathbf{x}}\right)^\top d(\Delta\mathbf{x})\) gives \(\frac{\partial \mathbf{F}}{\partial \Delta\mathbf{x}}=2(\mathbf{b} + \mathbf{H}\Delta\mathbf{x})\).

Set \(\frac{\partial \mathbf{F}}{\partial \Delta\mathbf{x}}=0\). Note that because \(\mathbf{H}=\sum_{k}\mathbf{J}_k^\top\Omega_k\mathbf{J}_k\) is positive semi-definite (and \(\mathbf{F}\) is non-negative), this stationary point is indeed a minimum.

The problem thus becomes solving the linear system \(\mathbf{H}\Delta\mathbf{x} = -\mathbf{b}\). The solution \(\Delta\mathbf{x}^*\) is applied as an incremental update \(\mathbf{x}^*=\breve{\mathbf{x}}+\Delta\mathbf{x}^*\), and iterating this procedure solves the optimization problem.
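As a sketch of this iteration (not any particular library's implementation, and ignoring the sparsity of \(\mathbf{H}\) that real solvers exploit), a minimal Gauss-Newton loop might look like the following; `gauss_newton` and `residual_and_jacobian` are illustrative names:

```python
import numpy as np

def gauss_newton(x0, constraints, iters=10):
    """Repeatedly build H and b, solve H * dx = -b, and update x <- x + dx."""
    x = x0.copy()
    for _ in range(iters):
        H = np.zeros((x.size, x.size))
        b = np.zeros(x.size)
        for residual_and_jacobian, Omega in constraints:
            e, J = residual_and_jacobian(x)   # e_k and J_k at the current x
            H += J.T @ Omega @ J              # H_k = J_k^T * Omega_k * J_k
            b += J.T @ Omega @ e              # b_k = J_k^T * Omega_k * e_k
        dx = np.linalg.solve(H, -b)           # solve H * dx = -b
        # Plain vector-space update; for poses on SE(3) the update is applied
        # as a left-multiplied exp(dx^) instead (see the later sections).
        x = x + dx
    return x

# Toy example: estimate x in R^2 from two direct, equally weighted observations.
cons = [
    ((lambda x, z=np.array([1.0, 2.0]): (x - z, np.eye(2))), np.eye(2)),
    ((lambda x, z=np.array([1.2, 1.8]): (x - z, np.eye(2))), np.eye(2)),
]
print(gauss_newton(np.zeros(2), cons))  # converges to the mean, [1.1, 1.9]
```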

Optimization Libraries

In practice we use optimization libraries to solve these problems. When using such a library, we only need to define the error function \(\mathbf{e}_k\) that computes the residual, the Jacobian \(\mathbf{J}_k\) of the error function at a given value, and the information matrix \(\Omega_k\) of the measurement; the library then solves the optimization problem for us. There are many such libraries, e.g. Ceres, g2o, and GTSAM. Ceres even provides automatic differentiation, so we may not have to compute the Jacobians ourselves, but it is still well worth understanding the underlying optimization principles.
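Conceptually, what the user supplies to such a library is an "edge" (in g2o's terminology) or a cost functor: a measurement, an information matrix, and functions returning the residual and its Jacobian. The Python class below is only a hypothetical sketch of that interface, not the real API of g2o, Ceres, or GTSAM (which are C++ libraries):

```python
import numpy as np

class Edge:
    """Hypothetical sketch of the user-supplied part of a constraint:
    measurement z_k, information matrix Omega_k, residual e_k and Jacobian J_k."""

    def __init__(self, z, Omega):
        self.z = z            # measurement z_k
        self.Omega = Omega    # information matrix Omega_k

    def predict(self, x):
        raise NotImplementedError   # sensor model, e.g. a camera projection

    def error(self, x):
        return self.z - self.predict(x)       # e_k(x_k, z_k)

    def jacobian(self, x, eps=1e-6):
        # Forward-difference fallback; the analytic Jacobians derived below
        # are usually preferred for speed and accuracy.
        e0 = self.error(x)
        J = np.zeros((e0.size, x.size))
        for i in range(x.size):
            dx = np.zeros(x.size)
            dx[i] = eps
            J[:, i] = (self.error(x + dx) - e0) / eps
        return J
```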

Optimization Problems in Visual SLAM

Camera Projection Model

Given the camera intrinsic matrix \(\mathbf{K}=\begin{bmatrix}f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1\end{bmatrix}\), the function that projects a point \(\mathbf{p}_{c}=[x_c,y_c,z_c]^\top\in \mathbb{R}^3\) in the camera frame onto the image-plane point \(\mathbf{p}_{I}=[u,v]^\top\in \mathbb{R}^2\) is:

\[ \begin{aligned} \text{proj}(\mathbf{p}_{c})&=\left[\frac{1}{z_c}\mathbf{K}\mathbf{p}_{c}\right]_{1:2} \\ &= \left[\begin{bmatrix}f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}x_c/z_c \\ y_c/z_c \\ 1 \end{bmatrix}\right]_{1:2} \\ &= \begin{bmatrix}f_x x_c/z_c+c_x \\ f_y y_c/z_c+c_y \end{bmatrix} \end{aligned} \]

\[ \begin{aligned} \frac{\partial\, \text{proj}(\mathbf{p}_{c})}{\partial \mathbf{p}_{c}}&= \begin{bmatrix}\frac{\partial u}{\partial x_c} & \frac{\partial u}{\partial y_c} & \frac{\partial u}{\partial z_c} \\ \frac{\partial v}{\partial x_c} & \frac{\partial v}{\partial y_c} & \frac{\partial v}{\partial z_c} \end{bmatrix}\\ &= \begin{bmatrix}f_x/z_c & 0 & -f_x x_c/z_c^2 \\ 0 & f_y/z_c & -f_y y_c/z_c^2 \end{bmatrix} \end{aligned} \tag{5} \]
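As a quick numerical sanity check of (5) (the intrinsics and test point below are arbitrary, and the function names are chosen for illustration):

```python
import numpy as np

def proj(p_c, fx, fy, cx, cy):
    """Pinhole projection of a point given in camera coordinates."""
    x, y, z = p_c
    return np.array([fx * x / z + cx, fy * y / z + cy])

def proj_jacobian(p_c, fx, fy):
    """Analytic 2x3 Jacobian d proj / d p_c from (5)."""
    x, y, z = p_c
    return np.array([[fx / z, 0.0,    -fx * x / z**2],
                     [0.0,    fy / z, -fy * y / z**2]])

fx, fy, cx, cy = 450.0, 460.0, 320.0, 240.0
p = np.array([0.3, -0.2, 2.0])
J = proj_jacobian(p, fx, fy)
# Forward-difference approximation of the same Jacobian.
J_num = np.column_stack([
    (proj(p + 1e-6 * np.eye(3)[i], fx, fy, cx, cy) - proj(p, fx, fy, cx, cy)) / 1e-6
    for i in range(3)])
print(np.max(np.abs(J - J_num)))   # should be small (finite-difference error)
```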

Stereo Observation Function

Assume a stereo camera with baseline \(b\). A point \(\mathbf{p}_{c}=[x_c,y_c,z_c]^\top\in \mathbb{R}^3\) in the (left) camera frame projects to \([u_l,v_l]^\top\) and \([u_r,v_r]^\top\) in the left and right images. For a horizontal stereo rig, \(u_l-u_r=\frac{bf_x}{z_c}\), hence

\[ u_r=u_l-\frac{bf_x}{z_c}=f_x x_c/z_c+c_x - \frac{bf_x}{z_c} \]
\[ \begin{aligned} \frac{\partial u_r}{\partial \mathbf{p}_{c}} &= \begin{bmatrix}\frac{\partial u_r}{\partial x_c} & \frac{\partial u_r}{\partial y_c} & \frac{\partial u_r}{\partial z_c} \end{bmatrix} \\ &= \begin{bmatrix}f_x/z_c & 0 & -f_x(x_c-b)/z_c^2\end{bmatrix} \end{aligned} \]
Stacking these together:

\[ \begin{aligned} \mathbf{z}_{stereo}(\mathbf{p}_{c})&=\begin{bmatrix}\text{proj}(\mathbf{p}_{c}) \\ u_r\end{bmatrix} \\ &= \begin{bmatrix}f_x x_c/z_c+c_x \\ f_y y_c/z_c+c_y \\ f_x x_c/z_c+c_x - \frac{bf_x}{z_c} \end{bmatrix} \end{aligned} \]
\[ \frac{\partial \mathbf{z}_{stereo}}{\partial \mathbf{p}_{c}} = \begin{bmatrix}f_x/z_c & 0 & -f_x x_c/z_c^2 \\ 0 & f_y/z_c & -f_y y_c/z_c^2 \\ f_x/z_c & 0 & -f_x(x_c-b)/z_c^2\end{bmatrix} \tag{6} \]
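The same observation function and Jacobian (6) in code, again with arbitrary illustrative intrinsics and baseline:

```python
import numpy as np

def z_stereo(p_c, fx, fy, cx, cy, b):
    """Stereo observation [u_l, v_l, u_r]^T of a point in the left camera frame."""
    x, y, z = p_c
    u_l = fx * x / z + cx
    v_l = fy * y / z + cy
    u_r = u_l - b * fx / z                    # horizontal stereo: u_r = u_l - b*fx/z_c
    return np.array([u_l, v_l, u_r])

def z_stereo_jacobian(p_c, fx, fy, b):
    """Analytic 3x3 Jacobian d z_stereo / d p_c from (6)."""
    x, y, z = p_c
    return np.array([[fx / z, 0.0,    -fx * x / z**2],
                     [0.0,    fy / z, -fy * y / z**2],
                     [fx / z, 0.0,    -fx * (x - b) / z**2]])

p = np.array([0.3, -0.2, 2.0])
print(z_stereo(p, 450.0, 460.0, 320.0, 240.0, 0.12))
print(z_stereo_jacobian(p, 450.0, 460.0, 0.12))
```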

SO(3) and SE(3): Definitions and Exponential Maps

\[ SO(3) = \left\{ \mathbf{R}\in\mathbb{R}^{3\times 3}\,\middle|\,\mathbf{R}\mathbf{R}^\top=\mathbf{I},\ \text{det}(\mathbf{R})=1 \right\} \]

\[ \mathfrak{so}(3) = \left\{ \omega^\wedge=\begin{bmatrix}0 & -\omega_3 & \omega_2\\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0\end{bmatrix} \,\middle|\, \omega=[\omega_1,\omega_2,\omega_3]^\top\in\mathbb{R}^3 \right\} \]

\(\text{exp}(\omega^\wedge)\in SO(3)\); for a proof see Rodrigues' formula.
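A minimal sketch of the \(\mathfrak{so}(3)\rightarrow SO(3)\) exponential map via Rodrigues' formula (the small-angle branch is a simple first-order fallback; `hat` and `exp_so3` are illustrative names):

```python
import numpy as np

def hat(w):
    """Map w in R^3 to the skew-symmetric matrix w^ in so(3)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """exp(w^) in SO(3) via Rodrigues' formula."""
    theta = np.linalg.norm(w)
    W = hat(w)
    if theta < 1e-8:                  # small-angle fallback
        return np.eye(3) + W
    return (np.eye(3)
            + np.sin(theta) / theta * W
            + (1.0 - np.cos(theta)) / theta**2 * (W @ W))

R = exp_so3(np.array([0.1, -0.2, 0.3]))
print(np.max(np.abs(R @ R.T - np.eye(3))), np.linalg.det(R))   # ~0 and ~1
```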

\[ SE(3) = \left\{ \mathbf{T}=\begin{bmatrix}\mathbf{R} & \mathbf{t} \\ \mathbf{0}^\top & 1\end{bmatrix}\in\mathbb{R}^{4\times 4}\,\middle|\,\mathbf{R}\in SO(3),\ \mathbf{t}\in\mathbb{R}^3 \right\} \]

\[ \mathfrak{se}(3) = \left\{ \epsilon^\wedge=\begin{bmatrix}\omega^\wedge & v\\ 0^\top & 0\end{bmatrix} \,\middle|\, \omega\in\mathbb{R}^3,\ v\in\mathbb{R}^3,\ \epsilon=[v^\top,\omega^\top]^\top \right\} \]

\[ \begin{aligned} \text{exp}(\epsilon^\wedge) &= \underbrace{\text{exp}\begin{bmatrix}\omega^\wedge & v\\ 0^\top & 0\end{bmatrix}}_{\text{Taylor series expansion}} \\ &= \mathbf{I} + \begin{bmatrix}\omega^\wedge & v\\ 0^\top & 0\end{bmatrix} + \frac{1}{2!}\begin{bmatrix}\omega^{\wedge 2} & \omega^\wedge v\\ 0^\top & 0\end{bmatrix} + \frac{1}{3!}\begin{bmatrix}\omega^{\wedge 3} & \omega^{\wedge 2} v\\ 0^\top & 0\end{bmatrix} + \dots \\ &= \begin{bmatrix}\text{exp}(\omega^\wedge) & \mathbf{V}v\\ 0^\top & 1\end{bmatrix} \in SE(3), \quad \mathbf{V}=\mathbf{I}+\frac{1}{2!}\omega^{\wedge} + \frac{1}{3!}\omega^{\wedge 2} + \dots \end{aligned} \]
In closed form,

\[ \mathbf{V} = \begin{cases} \mathbf{I}+\frac{1}{2}\omega^{\wedge}+\frac{1}{6}\omega^{\wedge 2} \;(\rightarrow \mathbf{I}), & \theta \rightarrow 0 \\ \mathbf{I}+\frac{1-\cos\theta}{\theta^2}\omega^{\wedge}+\frac{\theta-\sin\theta}{\theta^3}\omega^{\wedge 2}, & \text{otherwise} \end{cases} \qquad \text{with } \theta=\left\|\omega\right\|_2 \]
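A sketch of the full \(SE(3)\) exponential map using the closed form of \(\mathbf{V}\) above, assuming the \(\epsilon=[v^\top,\omega^\top]^\top\) (translation-first) ordering used in this note:

```python
import numpy as np

def hat(w):
    """w in R^3 -> skew-symmetric w^."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_se3(eps):
    """exp(eps^) in SE(3) for eps = [v, w], translation first."""
    v, w = eps[:3], eps[3:]
    theta = np.linalg.norm(w)
    W = hat(w)
    if theta < 1e-8:                                  # small-angle branch
        R = np.eye(3) + W
        V = np.eye(3) + 0.5 * W + (W @ W) / 6.0
    else:
        R = (np.eye(3) + np.sin(theta) / theta * W
             + (1 - np.cos(theta)) / theta**2 * (W @ W))
        V = (np.eye(3) + (1 - np.cos(theta)) / theta**2 * W
             + (theta - np.sin(theta)) / theta**3 * (W @ W))
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ v
    return T

print(exp_se3(np.array([0.1, 0.2, 0.3, 0.0, 0.0, 0.5])))
```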

Let us start with the simplest case: pose-only optimization.

Pose Optimization

Given the image coordinates of a set of feature points \(\mathcal{P}_I=\left\{\mathbf{p}_{I_1}, \mathbf{p}_{I_2}, \ldots, \mathbf{p}_{I_n}\right\},\ \mathbf{p}_{I_i}\in \mathbb{R}^2\), and the corresponding 3D points \(\mathcal{P}_w=\left\{\mathbf{p}_{w_1}, \mathbf{p}_{w_2}, \ldots, \mathbf{p}_{w_n}\right\},\ \mathbf{p}_{w_i}\in \mathbb{R}^3\) in the world frame, find the optimal world-to-camera transform \(\mathbf{T}_{cw}^*=\begin{bmatrix} \mathbf{R}_{cw}^* & \mathbf{t}_{cw}^* \\ 0^\top & 1 \end{bmatrix}\).

Defining the Error Function

Let the initial value of the transform be \(\mathbf{T}_{cw}=\begin{bmatrix} \mathbf{R}_{cw} & \mathbf{t}_{cw} \\ 0^\top & 1 \end{bmatrix}=\text{exp}(\xi_0^\wedge),\ \xi^\wedge_0\in{\mathfrak{se}(3)}\), and apply a left perturbation \(\text{exp}(\epsilon^\wedge)\) to this initial value.

Monocular Error

\[ \mathbf{e}_k(\xi)=\mathbf{p}_{I_k} - \text{proj}(\text{exp}(\xi^\wedge)\cdot\mathbf{p}_{w_k}) \]
\[ \begin{aligned} \mathbf{J}_k=\frac{\partial \mathbf{e}_k}{\partial \epsilon} = -\frac{\partial\, \text{proj}(\mathbf{p}_{c})}{\partial \mathbf{p}_{c}}\cdot \left.\frac{\partial\, \text{exp}(\epsilon^\wedge)\text{exp}(\xi^\wedge)\cdot\mathbf{p}_{w_k}}{\partial \epsilon}\right|_{\epsilon=0} \end{aligned} \]

\[ \begin{aligned} \left.\frac{\partial\, \text{exp}(\epsilon^\wedge)\text{exp}(\xi^\wedge)\cdot\mathbf{p}_{w_k}}{\partial \epsilon}\right|_{\xi=\xi_0,\ \epsilon=0} &\approx \left.\frac{\partial\, \underbrace{(\mathbf{I}+\epsilon^\wedge)}_{\text{first-order Taylor approximation}}\text{exp}(\xi_0^\wedge)\cdot\mathbf{p}_{w_k}}{\partial \epsilon}\right|_{\epsilon=0} \\ &=\left.\frac{\partial\, \epsilon^\wedge\, \text{exp}(\xi_0^\wedge)\cdot\mathbf{p}_{w_k}}{\partial \epsilon}\right|_{\epsilon=0} \\ &=\left.\frac{\partial \begin{bmatrix}\omega^\wedge & v \\ 0^\top & 0 \end{bmatrix}\begin{bmatrix}\underbrace{\mathbf{R}_{cw}\mathbf{p}_{w_k}+\mathbf{t}_{cw}}_{\mathbf{p}_c} \\ 1\end{bmatrix}}{\partial \epsilon}\right|_{\epsilon=0} \\ &=\left.\frac{\partial \begin{bmatrix}\omega^\wedge\mathbf{p}_c+v \end{bmatrix}_{3\times 1}}{\partial \epsilon}\right|_{\epsilon=0} \\ &=\left.\frac{\partial \begin{bmatrix}-\mathbf{p}_c^\wedge\omega+v \end{bmatrix}_{3\times 1}}{\partial \epsilon}\right|_{\epsilon=0} \\ &=\left.\frac{\partial \begin{bmatrix}z_c\omega_2-y_c\omega_3+v_1 \\ -z_c\omega_1+x_c\omega_3+v_2 \\ y_c\omega_1-x_c\omega_2+v_3 \end{bmatrix}}{\partial \epsilon}\right|_{\epsilon=0} \\ &= \begin{bmatrix}\mathbf{I}_{3\times 3} & -\mathbf{p}_c^\wedge\end{bmatrix} \end{aligned} \]
Combining with (5):

\[ \begin{aligned} \mathbf{J}_k=-\begin{bmatrix}f_x/z_c & 0 & -f_x x_c/z_c^2 \\ 0 & f_y/z_c & -f_y y_c/z_c^2 \end{bmatrix} \begin{bmatrix}\mathbf{I}_{3\times 3} & -\mathbf{p}_c^\wedge\end{bmatrix} \end{aligned} \]
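Assembling the full \(2\times 6\) Jacobian in code (the intrinsics and the point are arbitrary test values; the column order follows \(\epsilon=[v^\top,\omega^\top]^\top\), translation first):

```python
import numpy as np

def hat(p):
    """p in R^3 -> skew-symmetric p^."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def mono_pose_jacobian(p_c, fx, fy):
    """2x6 Jacobian of e_k = p_I - proj(exp(xi^) p_w) w.r.t. the left
    perturbation eps = [v, w], i.e. -(d proj / d p_c) * [ I_3 | -p_c^ ]."""
    x, y, z = p_c
    dproj = np.array([[fx / z, 0.0,    -fx * x / z**2],
                      [0.0,    fy / z, -fy * y / z**2]])
    dpoint = np.hstack([np.eye(3), -hat(p_c)])   # d(exp(eps^) p_c) / d eps at eps = 0
    return -dproj @ dpoint

print(mono_pose_jacobian(np.array([0.3, -0.2, 2.0]), 450.0, 460.0))
```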

Stereo Error

Here the measurement \(\mathbf{p}_{I_k}=[u_l,v_l,u_r]^\top\) stacks the observed left-image coordinates and the right-image horizontal coordinate:

\[ \mathbf{e}_k(\xi)=\mathbf{p}_{I_k} - \mathbf{z}_{stereo}(\text{exp}(\xi^\wedge)\cdot\mathbf{p}_{w_k}) \]
\[ \begin{aligned} \mathbf{J}_k=\frac{\partial \mathbf{e}_k}{\partial \epsilon} &= -\frac{\partial \mathbf{z}_{stereo}(\mathbf{p}_{c})}{\partial \mathbf{p}_{c}}\cdot \left.\frac{\partial\, \text{exp}(\epsilon^\wedge)\text{exp}(\xi^\wedge)\cdot\mathbf{p}_{w_k}}{\partial \epsilon}\right|_{\epsilon=0} \\ &= -\begin{bmatrix}f_x/z_c & 0 & -f_x x_c/z_c^2 \\ 0 & f_y/z_c & -f_y y_c/z_c^2 \\ f_x/z_c & 0 & -f_x(x_c-b)/z_c^2\end{bmatrix} \begin{bmatrix}\mathbf{I}_{3\times 3} & -\mathbf{p}_c^\wedge\end{bmatrix} \end{aligned} \]
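And the corresponding \(3\times 6\) stereo Jacobian, again only as a sketch with arbitrary test values:

```python
import numpy as np

def hat(p):
    """p in R^3 -> skew-symmetric p^."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def stereo_pose_jacobian(p_c, fx, fy, b):
    """3x6 Jacobian of the stereo reprojection error w.r.t. eps = [v, w]:
    -(d z_stereo / d p_c) * [ I_3 | -p_c^ ]."""
    x, y, z = p_c
    dz = np.array([[fx / z, 0.0,    -fx * x / z**2],
                   [0.0,    fy / z, -fy * y / z**2],
                   [fx / z, 0.0,    -fx * (x - b) / z**2]])
    return -dz @ np.hstack([np.eye(3), -hat(p_c)])

print(stereo_pose_jacobian(np.array([0.3, -0.2, 2.0]), 450.0, 460.0, 0.12))
```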

References

  1. Giorgio Grisetti, Rainer Kümmerle, et al. g2o: A General Framework for (Hyper) Graph Optimization. 2017.
  2. Gao Xiang (高翔). 視覺SLAM十四講 (14 Lectures on Visual SLAM). 2017.
  3. Strasdat H. Local Accuracy and Global Consistency for Efficient Visual SLAM. PhD thesis, Department of Computing, Imperial College London, 2012.
  4. Ethan Eade. Lie Groups for 2D and 3D Transformations.