
Citation: SUN Xiaohui, WEN Chenglin, WEN Tao. Maximum Correntropy High-Order Extended Kalman Filter[J]. Chinese Journal of Electronics, 2022, 31(1): 190-198. DOI: 10.1049/cje.2020.00.334
The Kalman filter has been applied in various fields, including navigation, national defense, and neural network training[1-6]. For nominal linear systems, the Kalman filter (KF) provides a recursive optimal solution[7]. For nonlinear problems, the extended Kalman filter (EKF) is the most typical method; it approximates the nonlinear system by its first-order linearization[8]. Other nonlinear filters are based on sigma-point sampling, such as the unscented Kalman filter (UKF) and the cubature Kalman filter (CKF)[9-12], which offer better filtering performance than the EKF.
The KF and its extensions are generally established under the assumptions of the standard KF[13-15]. Nevertheless, their performance may degrade in non-Gaussian situations, because their quadratic objective functions are dominated by large outliers, which makes them unsuitable for non-Gaussian environments[16].
For non-Gaussian systems, Chen et al. proposed the maximum correntropy KF (MCKF)[15], which needs only a limited number of realizations of the modeling error to build a sample-mean estimator of correntropy. Later, the maximum correntropy extended KF (MCEKF), which can handle nonlinear systems with non-Gaussian noise, was developed on the basis of the EKF[16]. However, like the EKF, the MCEKF approximates the nonlinear functions only by first-order linearization.
In this paper, we develop a novel high-order extended KF, called the maximum correntropy high-order extended KF (H-MCEKF), by combining the maximum correntropy criterion (MCC) with a fixed-point iterative algorithm. Like the EKF, the H-MCEKF retains not only the propagation process of the state but also that of the covariance matrix. Therefore, the new algorithm has two significant advantages: it is real-time and recursive. Unlike the first-order Taylor expansion used by the MCEKF, the H-MCEKF uses polynomials to describe the nonlinear functions fully. It should be emphasized that converting a nonlinear function into a high-order polynomial is not difficult, for example by Taylor expansion or a multi-dimensional Taylor network[17].
This paper makes two contributions. 1) An idea for converting a nonlinear function into a linear form is given: a) all high-order polynomials in the system are defined as implicit variables and treated as parameter variables; b) the original state model is equivalently formulated in a pseudo-linear form; c) an augmented linear state model is established by combining it with the pseudo-linear state model; d) the original measurement model is rewritten in linear form. 2) The solution of the high-order polynomials is converted into a KF problem.
The remainder of the paper describes and derives the design of the new filter: Section II gives a brief introduction to correntropy; Section III models the nonlinear non-Gaussian systems; Section IV linearizes the systems; Section V derives the H-MCEKF; Section VI presents simulations; and Section VII concludes the paper and discusses future work.
For one-dimensional random variables X and Y, the correntropy is defined as
V(X,Y)=\mathrm{E}[\varUpsilon (X,Y)]=\displaystyle\int \varUpsilon (x,y)\,{\rm{d}}F_{XY}(x,y) | (1) |
where \mathrm{E}[\cdot] denotes expectation, F_{XY}(x,y) is the joint distribution function of (X,Y), and \varUpsilon(\cdot,\cdot) is a shift-invariant kernel. In this paper the Gaussian kernel is adopted:
\varUpsilon (X,Y)=G_{\sigma}(e)=\exp\left(-\dfrac{e^{2}}{2\sigma^{2}}\right) | (2) |
where e = x − y is the error between x and y, and σ > 0 is the kernel bandwidth.
Performing a Taylor series expansion on Eq.(2) gives
\varUpsilon (X,Y)=\displaystyle\sum_{p=0}^{\infty}\dfrac{(-1)^{p}}{2^{p}\sigma^{2p}p!}(x-y)^{2p} | (3) |
then the correntropy of Eq.(1) can be formulated as
V(X,Y)=\mathrm{E}[\varUpsilon (X,Y)]=\displaystyle\sum_{p=0}^{\infty}\dfrac{(-1)^{p}}{2^{p}\sigma^{2p}p!}\int (x-y)^{2p}\,{\rm{d}}F_{XY}(x,y)=\sum_{p=0}^{\infty}\dfrac{(-1)^{p}}{2^{p}\sigma^{2p}p!}\mathrm{E}\{(X-Y)^{2p}\} | (4) |
which shows that correntropy is a weighted sum of all even-order moments of the error X − Y.
In practice, the joint distribution F_{XY} is usually unknown, whereas sample pairs are relatively easy to obtain, so the moments can be estimated by the sample mean
\mathrm{E}\{(X-Y)^{2p}\}=\dfrac{1}{N}\displaystyle\sum_{j=1}^{N}(x(j)-y(j))^{2p} | (5) |
then the sample estimate of the correntropy is
\hat{V}(X,Y)=\displaystyle\sum_{p=0}^{\infty}\dfrac{(-1)^{p}}{2^{p}\sigma^{2p}p!}\dfrac{1}{N}\sum_{j=1}^{N}(x(j)-y(j))^{2p}=\dfrac{1}{N}\sum_{j=1}^{N}G_{\sigma}(e(j)) | (6) |
When X and Y are n-dimensional random vectors with N sample pairs, the estimate becomes
\hat{V}(X,Y)=\dfrac{1}{nN}\displaystyle\sum_{i=1}^{n}\sum_{j=1}^{N}G_{\sigma}(e^{(j)}_{i}) | (7) |
where e^{(j)}_{i} denotes the i-th component of the j-th sample error.
Remark 1 The kernel bandwidth σ controls the robustness of the correntropy criterion: the larger σ is, the closer the criterion behaves to the mean-square-error criterion, while a smaller σ suppresses the influence of large outliers more strongly.
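To make the estimator in Eq.(7) concrete, the following sketch computes the sample correntropy with a Gaussian kernel and shows its boundedness under outliers. The data and helper names are illustrative, not from the paper.

```python
# Sample-mean estimate of correntropy (Eq.(7)) with a Gaussian kernel.
import numpy as np

def gaussian_kernel(e, sigma):
    """G_sigma(e) = exp(-e^2 / (2 sigma^2))."""
    return np.exp(-np.square(e) / (2.0 * sigma ** 2))

def correntropy(x, y, sigma):
    """(1/(nN)) * sum_i sum_j G_sigma(e_i(j)) over components and samples."""
    e = np.atleast_2d(x) - np.atleast_2d(y)   # shape (N, n): N samples, n components
    return float(np.mean(gaussian_kernel(e, sigma)))

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 2))
y_clean = x + 0.1 * rng.normal(size=(500, 2))
y_outlier = y_clean.copy()
y_outlier[::50] += 20.0                        # inject a few large outliers

v_clean = correntropy(x, y_clean, sigma=2.0)
v_out = correntropy(x, y_outlier, sigma=2.0)
print(v_clean, v_out)
```

Because each kernel term is bounded in (0, 1], the outliers lower the correntropy only slightly instead of dominating it, unlike a mean-square error.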
Consider a class of state and measurement models with strong nonlinear characteristics:
x(k+1)=f(x(k))+w(k) | (8) |
y(k+1)=h(x(k+1))+v(k+1) | (9) |
where x(k) is the state vector, y(k+1) is the measurement vector, f(·) and h(·) are known nonlinear functions, and w(k) and v(k+1) are the process and measurement noises, which may be non-Gaussian.
For ease of understanding, we simplify the high-dimensional linearization process based on Eq.(8) and Eq.(9), and describe and establish the filter only for two-dimensional systems. Suppose each component of the state function has the polynomial form
f_{i}(x(u))=\displaystyle\sum_{\substack{l_{1}+l_{2}=l\\ l_{1},l_{2}\leqslant l}}a_{i,l_{1},l_{2}}\,x_{1}^{l_{1}}(u)\,x_{2}^{l_{2}}(u) | (10) |
where a_{i,l_1,l_2} are the polynomial coefficients, i = 1, 2, and l = 1, 2, …, r, with r the highest polynomial order.
Definition 1 For l = 1, 2, …, r, define the l-th order monomial vector x^{(l)}(u) := [x_{1}^{l}(u), x_{1}^{l-1}(u)x_{2}(u), …, x_{2}^{l}(u)]^{\rm T}, which has n_{l} = l + 1 components.
Definition 2 Define the corresponding coefficient row vector
a^{(l)}_{i}:=[a^{(l)}_{i;1},a^{(l)}_{i;2},\ldots,a^{(l)}_{i;n_{l}}]=[a_{i;l,0},a_{i;l-1,1},\ldots,a_{i;0,l}],\quad i=1,2
On the basis of Definitions 1 and 2 and Eq.(10), we have
\begin{bmatrix} x^{(1)}_{1}(u+1)\\ x^{(1)}_{2}(u+1)\end{bmatrix}=\begin{bmatrix} a^{(1)}_{1} & a^{(2)}_{1} & \cdots & a^{(l)}_{1} & \cdots & a^{(r)}_{1}\\ a^{(1)}_{2} & a^{(2)}_{2} & \cdots & a^{(l)}_{2} & \cdots & a^{(r)}_{2}\end{bmatrix}\begin{bmatrix} x^{(1)}(u)\\ x^{(2)}(u)\\ \vdots\\ x^{(l)}(u)\\ \vdots\\ x^{(r)}(u)\end{bmatrix}+\begin{bmatrix} w^{(1)}_{1}(u)\\ w^{(1)}_{2}(u)\end{bmatrix} | (11) |
Let
x(u):=x^{(1)}(u)=\begin{bmatrix} x^{(1)}_{1}(u)\\ x^{(1)}_{2}(u)\end{bmatrix},\quad A^{(l)}:=\begin{bmatrix} a^{(l)}_{1}\\ a^{(l)}_{2}\end{bmatrix},\quad w(u):=w^{(1)}(u)=\begin{bmatrix} w^{(1)}_{1}(u)\\ w^{(1)}_{2}(u)\end{bmatrix}
then,
x^{(1)}(u+1)=A^{(1)}x^{(1)}(u)+\displaystyle\sum_{l=2}^{r}A^{(l)}x^{(l)}(u)+w^{(1)}(u) | (12) |
Similarly, suppose the measurement function in Eq.(9) has the form
h_{i}(x^{(1)}(u+1))=\displaystyle\sum_{\substack{r_{1}+r_{2}=r\\ r_{1},r_{2}\leqslant r}}h_{i,r_{1},r_{2}}\,x_{1}^{r_{1}}(u+1)\,x_{2}^{r_{2}}(u+1) | (13) |
Similar to Definitions 1 and 2, Eq.(10), and Eq.(11), Eq.(13) has the following matrix form.
y^{(1)}(u+1)=H^{(1)}x^{(1)}(u+1)+\displaystyle\sum_{l=2}^{r}H^{(l)}x^{(l)}(u+1)+v^{(1)}(u+1) | (14) |
In order to linearize the nonlinear functions, we build the dynamic model as follows
x^{(l)}(u+1)=\displaystyle\sum_{q=1}^{r}A^{(q)}_{l}(u)x^{(q)}(u) | (15) |
where,
A^{(q)}_{l}(u)=\begin{cases} I, & l=q\\ 0, & l\neq q\end{cases} | (16) |
Combining Definitions 1 and 2 with Eq.(12) and Eq.(15), the state model Eq.(8) has the further linear form
\begin{bmatrix} x^{(1)}(u+1)\\ x^{(2)}(u+1)\\ \vdots\\ x^{(l)}(u+1)\\ \vdots\\ x^{(r)}(u+1)\end{bmatrix}=\begin{bmatrix} A^{(1)}_{1}(u) & A^{(2)}_{1}(u) & \cdots & A^{(r)}_{1}(u)\\ A^{(1)}_{2}(u) & A^{(2)}_{2}(u) & \cdots & A^{(r)}_{2}(u)\\ \vdots & \vdots & \ddots & \vdots\\ A^{(1)}_{r}(u) & A^{(2)}_{r}(u) & \cdots & A^{(r)}_{r}(u)\end{bmatrix}\begin{bmatrix} x^{(1)}(u)\\ x^{(2)}(u)\\ \vdots\\ x^{(l)}(u)\\ \vdots\\ x^{(r)}(u)\end{bmatrix}+\begin{bmatrix} w^{(1)}(u)\\ w^{(2)}(u)\\ \vdots\\ w^{(l)}(u)\\ \vdots\\ w^{(r)}(u)\end{bmatrix} | (17) |
Let
\underline X(u)=[(x^{(1)}(u))^{\rm T}\;(x^{(2)}(u))^{\rm T}\;\ldots\;(x^{(r)}(u))^{\rm T}]^{\rm T},\quad \underline A(u+1,u)=\begin{bmatrix} A^{(1)}_{1}(u) & A^{(2)}_{1}(u) & \cdots & A^{(r)}_{1}(u)\\ A^{(1)}_{2}(u) & A^{(2)}_{2}(u) & \cdots & A^{(r)}_{2}(u)\\ \vdots & \vdots & \ddots & \vdots\\ A^{(1)}_{r}(u) & A^{(2)}_{r}(u) & \cdots & A^{(r)}_{r}(u)\end{bmatrix},\quad \underline W(u)=[(w^{(1)}(u))^{\rm T}\;(w^{(2)}(u))^{\rm T}\;\ldots\;(w^{(r)}(u))^{\rm T}]^{\rm T}
Eq.(17) is equivalently rewritten as follows
\underline X(u+1)=\underline A(u+1,u)\underline X(u)+\underline W(u) | (18) |
where \underline X(u), \underline A(u+1,u), and \underline W(u) are the augmented state, transition matrix, and process noise defined above.
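The monomial ordering of Definition 1 and the augmented state of Eq.(18) can be sketched as follows for a two-dimensional state; the helper names are ours, not the paper's.

```python
# Augmented-state construction of Eqs.(15)-(18) for a 2-D state.
# The monomial ordering [x1^l, x1^{l-1} x2, ..., x2^l] follows Definition 1.
import numpy as np

def monomial_vector(x, l):
    """x^{(l)} = [x1^l, x1^{l-1} x2, ..., x2^l]^T with l+1 components."""
    x1, x2 = x
    return np.array([x1 ** (l - k) * x2 ** k for k in range(l + 1)])

def augmented_state(x, r):
    """X = [x^{(1)T}, x^{(2)T}, ..., x^{(r)T}]^T as in Eq.(18)."""
    return np.concatenate([monomial_vector(x, l) for l in range(1, r + 1)])

x = np.array([2.0, -1.0])
X = augmented_state(x, r=3)
print(X.shape, X)   # dimension 2 + 3 + 4 = 9 for r = 3
```

The augmented dimension grows as sum of (l+1) for l = 1, …, r, which is why Remark 4 later discusses pruning sparse high-order terms.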
Similarly, the linear matrix form of the measurement model (9) is
\begin{bmatrix} y_{1}(u+1)\\ y_{2}(u+1)\end{bmatrix}=\begin{bmatrix} h^{(1)}_{1} & h^{(2)}_{1} & \cdots & h^{(r)}_{1}\\ h^{(1)}_{2} & h^{(2)}_{2} & \cdots & h^{(r)}_{2}\end{bmatrix}\begin{bmatrix} x^{(1)}(u+1)\\ x^{(2)}(u+1)\\ \vdots\\ x^{(r)}(u+1)\end{bmatrix}+\begin{bmatrix} v_{1}(u+1)\\ v_{2}(u+1)\end{bmatrix} | (19) |
On the basis of Eq.(19), we obtain the linear form of Eq.(9):
\underline Y(u+1)=\underline H(u+1)\underline X(u+1)+\underline V(u+1) | (20) |
where \underline Y(u+1), \underline H(u+1), and \underline V(u+1) are the augmented measurement vector, measurement matrix, and measurement noise, respectively.
For linear models Eq.(18) and Eq.(20), we have
\begin{bmatrix} \underline{\hat X}(u+1|u)\\ \underline Y(u+1)\end{bmatrix}=\begin{bmatrix} I\\ \underline H(u+1)\end{bmatrix}\underline X(u+1)+u(u+1) | (21) |
where,
u(u+1)=\begin{bmatrix} -\underline{\tilde X}(u+1|u)\\ \underline V(u+1)\end{bmatrix} | (22) |
where \underline{\tilde X}(u+1|u)=\underline X(u+1)-\underline{\hat X}(u+1|u) is the one-step prediction error, and
\mathrm{E}[u(u+1)u^{\rm T}(u+1)]=\begin{bmatrix} \underline P(u+1|u) & 0\\ 0 & R_{V}(u+1)\end{bmatrix} | (23) |
where \underline P(u+1|u) is the one-step prediction error covariance, computed as
\underline P(u+1|u)=\underline A(u+1,u)\underline P(u|u)\underline A^{\rm T}(u+1,u)+Q(u) | (24) |
where
Q(u)={\rm diag}\{Q^{(1)}(u),\ldots,Q^{(r)}(u)\},\quad Q^{(l)}(u)=\dfrac{1}{N}\displaystyle\sum_{j=1}^{N}[w^{(l,j)}(u)-\bar w^{(l)}(u)][w^{(l,j)}(u)-\bar w^{(l)}(u)]^{\rm T},\quad \bar w^{(l)}(u)=\dfrac{1}{N}\sum_{j=1}^{N}w^{(l,j)}(u)
Similarly,
R_{V}(u+1)=\dfrac{1}{N}\displaystyle\sum_{j=1}^{N}[v^{(j)}(u+1)-\bar v(u+1)][v^{(j)}(u+1)-\bar v(u+1)]^{\rm T} | (25) |
where v^{(j)}(u+1) are N realizations of the measurement noise and \bar v(u+1) is their sample mean. Factorizing Eq.(23) by Cholesky decomposition gives
\mathrm{E}\{u(u+1)u^{\rm T}(u+1)\}=\begin{bmatrix} \underline B_{X}(u+1|u)\underline B^{\rm T}_{X}(u+1|u) & 0\\ 0 & \underline B_{Y}(u+1)\underline B^{\rm T}_{Y}(u+1)\end{bmatrix}=\underline B(u+1)\underline B^{\rm T}(u+1) | (26) |
where \underline B(u+1)={\rm diag}\{\underline B_{X}(u+1|u),\underline B_{Y}(u+1)\}.
Combining Eq.(26) with Eq.(21) and left-multiplying Eq.(21) by \underline B^{-1}(u+1), we obtain
D_(u+1)=S_(u+1)X_(u+1)+e(u+1) | (27) |
where
\underline D(u+1)=\underline B^{-1}(u+1)\begin{bmatrix} \underline{\hat X}(u+1|u)\\ \underline Y(u+1)\end{bmatrix},\quad \underline S(u+1)=\underline B^{-1}(u+1)\begin{bmatrix} I\\ \underline H(u+1)\end{bmatrix},\quad e(u+1)=\underline B^{-1}(u+1)u(u+1)
with
\begin{split}\mathrm{E}\{e(u+1)e^{\rm T}(u+1)\}&=\mathrm{E}\{[\underline B^{-1}(u+1)u(u+1)][\underline B^{-1}(u+1)u(u+1)]^{\rm T}\}\\ &=\underline B^{-1}(u+1)\mathrm{E}\{u(u+1)u^{\rm T}(u+1)\}(\underline B^{-1}(u+1))^{\rm T}\\ &=\underline B^{-1}(u+1)\underline B(u+1)\underline B^{\rm T}(u+1)(\underline B^{-1}(u+1))^{\rm T}=I\end{split} | (28) |
Therefore, the components of the transformed error e(u+1) are mutually uncorrelated with unit variance.
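The whitening argument of Eqs.(26)-(28) can be checked numerically: with B obtained by Cholesky factorization of the block covariance, B^{-1} Σ B^{-T} is the identity. A sketch with arbitrary positive-definite test covariances:

```python
# Numerical check of the whitening step in Eqs.(26)-(28).
import numpy as np

rng = np.random.default_rng(1)
P = rng.normal(size=(4, 4)); P = P @ P.T + 4 * np.eye(4)   # prediction covariance
R = rng.normal(size=(2, 2)); R = R @ R.T + np.eye(2)       # measurement covariance

Sigma = np.block([[P, np.zeros((4, 2))],
                  [np.zeros((2, 4)), R]])                  # E{u u^T}, Eq.(23)
B = np.linalg.cholesky(Sigma)                              # B B^T = Sigma, Eq.(26)
Binv = np.linalg.inv(B)

whitened_cov = Binv @ Sigma @ Binv.T                       # left side of Eq.(28)
print(np.allclose(whitened_cov, np.eye(6)))
```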
We propose the following correntropy-based objective function for solving \underline X(u+1):
J_{L}(\underline X(u+1))=\dfrac{1}{L}\displaystyle\sum_{i=1}^{L}\left(\dfrac{1}{N}\sum_{j=1}^{N}G_{\sigma}\big(d^{(j)}_{i}(u+1)-s_{i}(u+1)\underline X(u+1)\big)\right)=\dfrac{1}{LN}\sum_{i=1}^{L}\sum_{j=1}^{N}G_{\sigma}(e^{(j)}_{i}) | (29) |
where d^{(j)}_{i}(u+1) is the i-th element of the j-th realization of \underline D(u+1), s_{i}(u+1) is the i-th row of \underline S(u+1), e^{(j)}_{i}=d^{(j)}_{i}(u+1)-s_{i}(u+1)\underline X(u+1), and L is the dimension of \underline D(u+1).
The optimal solution is obtained by setting the gradient of Eq.(29) to zero:
\dfrac{\partial J_{L}(\underline X(u+1))}{\partial \underline X(u+1)}=0 | (30) |
Further
\underline X(u+1)=\left(\displaystyle\sum_{j=1}^{N}\sum_{i=1}^{L}G_{\sigma}(e^{(j)}_{i}(u+1))s^{\rm T}_{i}(u+1)s_{i}(u+1)\right)^{-1}\left(\sum_{j=1}^{N}\sum_{i=1}^{L}G_{\sigma}(e^{(j)}_{i}(u+1))s^{\rm T}_{i}(u+1)d^{(j)}_{i}(u+1)\right) | (31) |
Since the weights G_{\sigma}(e^{(j)}_{i}(u+1)) themselves depend on \underline X(u+1), Eq.(31) is a fixed-point equation:
X_(u+1)=f(X_(u+1)) | (32) |
Let
\underline S(u+1)=[s^{\rm T}_{1}(u+1),\ldots,s^{\rm T}_{L}(u+1)]^{\rm T},\quad \underline D(u+1)=[d_{1}(u+1),\ldots,d_{L}(u+1)]^{\rm T},\quad \underline C(u+1)={\rm diag}\{G_{\sigma}(e^{(j)}_{1}(u+1)),\ldots,G_{\sigma}(e^{(j)}_{L}(u+1))\}
then
f(\underline X(u+1))=\big(\underline S^{\rm T}(u+1)\underline C(u+1)\underline S(u+1)\big)^{-1}\underline S^{\rm T}(u+1)\underline C(u+1)\underline D(u+1) | (33) |
where
\underline S^{\rm T}(u+1)\underline C(u+1)\underline S(u+1)=\displaystyle\sum_{i=1}^{L}\sum_{j=1}^{N}s^{\rm T}_{i}(u+1)G_{\sigma}(e^{(j)}_{i}(u+1))s_{i}(u+1) | (34) |
\underline S^{\rm T}(u+1)\underline C(u+1)\underline D(u+1)=\displaystyle\sum_{i=1}^{L}\sum_{j=1}^{N}s^{\rm T}_{i}(u+1)G_{\sigma}(e^{(j)}_{i}(u+1))d^{(j)}_{i}(u+1) | (35) |
From Ref [14], Eq.(34) and Eq.(35), we arrive at
[\underline S^{\rm T}(u+1)\underline C(u+1)\underline S(u+1)]^{-1}=\bar P(u+1|u)-\bar P(u+1|u)\underline H^{\rm T}(u+1)[\underline H(u+1)\bar P(u+1|u)\underline H^{\rm T}(u+1)+\bar R(u+1)]^{-1}\underline H(u+1)\bar P(u+1|u) | (36) |
\underline S^{\rm T}(u+1)\underline C(u+1)\underline D(u+1)=\bar P^{-1}(u+1|u)\underline{\hat X}(u+1|u)+\underline H^{\rm T}(u+1)\bar R^{-1}(u+1)\underline Y(u+1) | (37) |
where
\bar P(u+1|u)=B_{X}(u+1|u)C^{-1}_{X}(u+1)B^{\rm T}_{X}(u+1|u),\quad \bar R(u+1)=B_{Y}(u+1)C^{-1}_{Y}(u+1)B^{\rm T}_{Y}(u+1)
Further, we get
\underline X(u+1)=\underline{\hat X}(u+1|u)+\bar K(u+1)[\underline Y(u+1)-\underline H(u+1)\underline{\hat X}(u+1|u)] | (38) |
where
\bar K(u+1)=\bar P(u+1|u)\underline H^{\rm T}(u+1)[\underline H(u+1)\bar P(u+1|u)\underline H^{\rm T}(u+1)+\bar R(u+1)]^{-1} | (39) |
Thus the fixed-point equation is equivalently converted into a Kalman filter form.
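The fixed-point iteration of Eqs.(31)-(33) can be illustrated on a generic whitened linear model D = S X + e; the example below compares it with ordinary least squares under outlier contamination. The data, bandwidth, and stopping rule here are illustrative assumptions, not the paper's simulation.

```python
# Fixed-point MCC solution of a linear model (Eqs.(31)-(33)) vs. least squares.
import numpy as np

def mcc_fixed_point(S, d, sigma=1.0, eps=1e-6, max_iter=100):
    x = np.linalg.lstsq(S, d, rcond=None)[0]            # least-squares init
    for _ in range(max_iter):
        e = d - S @ x
        w = np.exp(-e ** 2 / (2 * sigma ** 2))          # G_sigma(e_i) weights
        W = np.diag(w)
        x_new = np.linalg.solve(S.T @ W @ S, S.T @ W @ d)  # Eq.(33)
        if np.linalg.norm(x_new - x) <= eps * np.linalg.norm(x):
            return x_new
        x = x_new
    return x

rng = np.random.default_rng(2)
S = rng.normal(size=(200, 3))
x_true = np.array([1.0, -2.0, 0.5])
d = S @ x_true + 0.05 * rng.normal(size=200)
d[::20] += 15.0                                          # heavy-tailed outliers

x_ls = np.linalg.lstsq(S, d, rcond=None)[0]
x_mcc = mcc_fixed_point(S, d, sigma=1.0)
print(np.linalg.norm(x_ls - x_true), np.linalg.norm(x_mcc - x_true))
```

The Gaussian weights drive the influence of the outlying residuals toward zero, which is exactly the mechanism the H-MCEKF exploits at each time step.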
With the above derivations, we summarize the proposed H-MCEKF algorithm. Select a proper kernel bandwidth σ and a small threshold ε > 0, initialize \underline{\hat X}(u|u)_{0}=\underline{\hat X}(u|u-1), and at the t-th fixed-point iteration compute
ˆX_(u|u)t+1=ˆX_(u|u)t+ˉK(u)t[Y(u)−H(u)ˆX_(u|u)t] | (40) |
where
ˉK(u)t=ˉP(u|u−1)tH_T(u)×(H_(u)ˉP(u|u−1)tH_T(u)+ˉR(u)t)−1 | (41) |
ˉP(u|u−1)t=B_X(u|u−1)C_X(u|u−1)tB_TX(u|u−1) | (42) |
ˉR(u)t=B_Y(u)C_Y(u)tB_TY(u) | (43) |
C_X(u)t=diag{G−1σ(e1(u)t),…,G−1σ(eL1(u)t)} | (44) |
C_Y(u)t=diag{G−1σ(eL1+1(u)t),…,G−1σ(eL1+m(u)t)} | (45) |
ˉei(u)t=d(j)i(u)−si(u)ˆX_(u|u)t | (46) |
If the convergence criterion Eq.(47) holds, stop the iteration and take \underline{\hat X}(u|u)=\underline{\hat X}(u|u)_{t+1}; the posterior error covariance is then updated by Eq.(48).
\dfrac{\|\underline{\hat X}(u|u)_{t+1}-\underline{\hat X}(u|u)_{t}\|}{\|\underline{\hat X}(u|u)_{t}\|}\leqslant \varepsilon | (47) |
\begin{split} \underline P (u|u) =& E\left\{ {\underline {\tilde X} {{(u|u)}_t}{{\underline {\tilde X} }^{\rm{T}}}{{(u|u)}_t}} \right\} \\ = &E\left\{ {[\underline X (u) - \underline {\hat X} {{(u|u)}_t}]{{[ \underline X (u) - \underline {\hat X} {{(u|u)}_t}]}^{\rm{T}}}} \right\} \\ =& \left[ {I - \bar K{{(u)}_t}H(u)} \right]\underline P {(u|u - 1)}{\left[ {I - \bar K{{(u)}_t}H(u)} \right]^{\rm{T}}} \\ &+ \bar K{(u)_t}\bar R{(u)_t}{\bar K^{\rm{T}}}{(u)_t} \end{split} | (48) |
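As a sanity check on the Joseph-form covariance update of Eq.(48), the following sketch verifies numerically that it coincides with the standard form (I − K H)P when K is the optimal gain computed from the same P and R; the matrices are arbitrary test data, not the paper's model.

```python
# Joseph form (I-KH)P(I-KH)^T + K R K^T vs. standard form (I-KH)P, as in Eq.(48).
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 2
P = rng.normal(size=(n, n)); P = P @ P.T + np.eye(n)     # prior covariance
R = rng.normal(size=(m, m)); R = R @ R.T + np.eye(m)     # measurement covariance
H = rng.normal(size=(m, n))

K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)             # optimal gain, cf. Eq.(39)
I = np.eye(n)
P_joseph = (I - K @ H) @ P @ (I - K @ H).T + K @ R @ K.T # Eq.(48)
P_simple = (I - K @ H) @ P

print(np.allclose(P_joseph, P_simple))
```

The Joseph form is preferred in the H-MCEKF because it stays symmetric and positive semi-definite even when the iterated gain \bar K(u)_t is not exactly the optimal gain for the nominal covariances.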
Remark 2 Without confusion, let
Remark 3 The weighting matrices in Eq.(44) and Eq.(45) adjust for the noise uncertainty. As the iteration proceeds, their diagonal entries gradually converge to 1.
Remark 4 In the polynomial expansion used in this paper, as the dimension increases, the higher-order terms become sparse and their proportion decreases. A pruning method can be used to remodel the system so that the dimension is controlled while the dynamic statistical properties of the high-order terms are retained.
This section verifies the performance of the proposed filter through several simulations.
Consider the system shown in Eq.(8) and Eq.(9), where
\begin{split} {f_1}(x(u)) =& {x_1}(u) + {x_2}(u) - \frac{1}{6}x_1^3(u) - \frac{1}{6}x_2^3(u)\\ & + \frac{1}{{120}}x_1^5(u) + \frac{1}{{120}}x_2^5(u) \\ {f_2}(x(u)) = &{x_1}(u) - \frac{1}{2}x_1^2(u) - \frac{1}{2}x_2^2(u)\\ &+ \frac{1}{{24}}x_1^4(u) + \frac{1}{{24}}x_2^4(u)\\ z(u + 1) = &x(u + 1) + v(u + 1) \end{split} |
The simulation is run repeatedly over time steps 1–500.
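A minimal sketch of the Case 1 state functions and a mixed-Gaussian noise sampler is given below; the mixture weights and variances are illustrative assumptions, since the paper's exact noise parameters are not reproduced here.

```python
# State functions f1, f2 of the simulation model and a mixed-Gaussian sampler.
import numpy as np

def f(x):
    """State functions f1, f2 of the Case 1 simulation model."""
    x1, x2 = x
    return np.array([
        x1 + x2 - x1**3/6 - x2**3/6 + x1**5/120 + x2**5/120,
        x1 - x1**2/2 - x2**2/2 + x1**4/24 + x2**4/24,
    ])

def mixed_gaussian(rng, size, w=0.9, s1=0.1, s2=0.5):
    """Draw from the two-component mixture w*N(0,s1^2) + (1-w)*N(0,s2^2).
    The parameters w, s1, s2 are illustrative, not the paper's values."""
    pick = rng.random(size) < w
    return np.where(pick, rng.normal(0.0, s1, size), rng.normal(0.0, s2, size))

rng = np.random.default_rng(4)
x_next = f(np.array([0.1, 0.1])) + mixed_gaussian(rng, 2)  # one step of Eq.(8)
print(x_next)
```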
Case 1 considers only non-Gaussian process noise. The table below compares the MSEs of the MCEKF and the H-MCEKF under different kernel bandwidths σ and thresholds ε.
MSE of {x_{\rm{1}}} | MSE of {x_{\rm{2}}} | MSE | ||||||||
\sigma | \varepsilon | MCEKF | H-MCEKF | Improved | MCEKF | H-MCEKF | Improved | MCEKF | H-MCEKF | Improved |
\sigma {\rm{ = }}2 | \varepsilon {\rm{ = }}{10^{ - 4}} | 0.0469 | 0.0467 | 0.42% | 0.0226 | 0.0204 | 9.73% | 0.0348 | 0.0335 | 3.74% |
\varepsilon {\rm{ = }}{10^{ - 6}} | 0.0434 | 0.0421 | 3.00% | 0.0344 | 0.0256 | 25.58% | 0.0389 | 0.0338 | 13.11% | |
\sigma {\rm{ = }}5 | \varepsilon {\rm{ = }}{10^{ - 1}} | 0.0205 | 0.0189 | 7.80% | 0.0091 | 0.0089 | 2.20% | 0.0148 | 0.0139 | 6.08% |
\varepsilon {\rm{ = }}{10^{ - 2}} | 0.0107 | 0.0099 | 7.47% | 0.0112 | 0.0083 | 25.89% | 0.0110 | 0.0091 | 17.27% | |
\varepsilon {\rm{ = }}{10^{ - 4}} | 0.0201 | 0.0178 | 11.44% | 0.0235 | 0.0195 | 17.02% | 0.0218 | 0.0187 | 14.22% | |
\varepsilon {\rm{ = }}{10^{ - 6}} | 0.0203 | 0.0111 | 45.32% | 0.0165 | 0.0148 | 10.30% | 0.0184 | 0.0130 | 29.34% | |
\sigma {\rm{ = }}10 | \varepsilon {\rm{ = }}{10^{ - 1}} | 0.0128 | 0.0105 | 17.96% | 0.0107 | 0.0083 | 22.42% | 0.0117 | 0.0094 | 19.65% |
\varepsilon {\rm{ = }}{10^{ - 2}} | 0.0150 | 0.0140 | 6.67% | 0.0147 | 0.0131 | 18.88% | 0.0148 | 0.0136 | 8.11% | |
\varepsilon {\rm{ = }}{10^{ - 4}} | 0.0133 | 0.0092 | 30.82% | 0.0122 | 0.0108 | 11.48% | 0.0127 | 0.0100 | 21.26% | |
\varepsilon {\rm{ = }}{10^{ - 6}} | 0.0190 | 0.0169 | 11.05% | 0.0155 | 0.0116 | 25.16% | 0.0173 | 0.0142 | 17.92% | |
\sigma {\rm{ = }}15 | \varepsilon {\rm{ = }}{10^{ - 1}} | 0.0081 | 0.0071 | 12.35% | 0.0083 | 0.0061 | 26.50% | 0.0082 | 0.0066 | 19.51% |
\varepsilon {\rm{ = }}{10^{ - 2}} | 0.0092 | 0.0074 | 19.56% | 0.0077 | 0.0055 | 28.57% | 0.0084 | 0.0064 | 23.80% | |
\varepsilon {\rm{ = }}{10^{ - 4}} | 0.0080 | 0.0069 | 13.75% | 0.0080 | 0.0057 | 28.75% | 0.0080 | 0.0063 | 21.25% | |
\varepsilon {\rm{ = }}{10^{ - 6}} | 0.0086 | 0.0073 | 15.11% | 0.0084 | 0.0053 | 36.90% | 0.0085 | 0.0063 | 25.88% |
Consider the nonlinear non-Gaussian system shown in Eq.(8) and Eq.(9), where
\begin{split} {f_1}(x(u)) =& {x_1}(u) + {x_2}(u) - \frac{1}{6}x_1^3(u) - \frac{1}{6}x_2^3(u)\\ &+ \frac{1}{{120}}x_1^5(u) + \frac{1}{{120}}x_2^5(u) \\ {f_2}(x(u)) =& {x_1}(u) - \frac{1}{2}x_1^2(u) - \frac{1}{2}x_2^2(u) + \frac{1}{{24}}x_1^4(u) \\ &+ \frac{1}{{24}}x_2^4(u)\\ {h_1}(x(u & + 1)) = {x_1}(u + 1) - x_1^3(u + 1)\\ {h_2}(x(u & + 1)) = {x_2}(u + 1) - x_2^3(u + 1) \end{split} |
The process noises are uncorrelated Gaussian white noises, while the measurement noises are non-Gaussian with a mixed-Gaussian distribution. The simulation is run repeatedly over time steps 1–500.
Case 2 considers only non-Gaussian measurement noise. The table below compares the MSEs of the MCEKF and the H-MCEKF under different σ and ε.
MSE of {x_{\rm{1}}} | MSE of {x_{\rm{2}}} | MSE | ||||||||
\sigma | \varepsilon | MCEKF | H-MCEKF | Improved | MCEKF | H-MCEKF | Improved | MCEKF | H-MCEKF | Improved |
\sigma {\rm{ = }}2 | \varepsilon {\rm{ = }}{10^{ - 1}} | 0.5800 | 0.1351 | 76.70% | 0.0547 | 0.0363 | 33.64% | 0.3173 | 0.0857 | 72.99% |
\varepsilon {\rm{ = }}{10^{ - 2}} | 0.5246 | 0.2246 | 57.18% | 0.0508 | 0.0412 | 18.89% | 0.2877 | 0.1329 | 53.80% | |
\varepsilon {\rm{ = }}{10^{ - 4}} | 0.3086 | 0.2535 | 17.85% | 0.1115 | 0.0453 | 57.56% | 0.2101 | 0.1494 | 28.89% | |
\varepsilon {\rm{ = }}{10^{ - 6}} | 0.4088 | 0.2095 | 48.75% | 0.0798 | 0.0617 | 22.68% | 0.2443 | 0.1356 | 44.49% | |
\sigma {\rm{ = }}5 | \varepsilon {\rm{ = }}{10^{ - 1}} | 0.1981 | 0.1950 | 1.56% | 0.0299 | 0.0287 | 4.01% | 0.1140 | 0.1119 | 1.82% |
\varepsilon {\rm{ = }}{10^{ - 2}} | 0.2336 | 0.1843 | 21.10% | 0.0341 | 0.0271 | 0.53% | 0.1338 | 0.1057 | 21.00% | |
\varepsilon {\rm{ = }}{10^{ - 4}} | 0.1997 | 0.1804 | 9.66% | 0.0461 | 0.0343 | 25.59% | 0.1170 | 0.1133 | 3.16% | |
\varepsilon {\rm{ = }}{10^{ - 6}} | 0.2268 | 0.1910 | 15.78% | 0.0461 | 0.0245 | 46.85% | 0.1256 | 0.1185 | 5.65% | |
\sigma {\rm{ = }}10 | \varepsilon {\rm{ = }}{10^{ - 1}} | 0.5157 | 0.1795 | 65.19% | 0.0355 | 0.0272 | 23.38% | 0.2756 | 0.1034 | 62.48% |
\varepsilon {\rm{ = }}{10^{ - 2}} | 0.5888 | 0.1336 | 77.31% | 0.0274 | 0.0273 | 0.36% | 0.3081 | 0.0805 | 73.87% | |
\varepsilon {\rm{ = }}{10^{ - 4}} | 0.5517 | 0.1667 | 69.78% | 0.0311 | 0.0260 | 16.39% | 0.2914 | 0.0964 | 66.92% | |
\varepsilon {\rm{ = }}{10^{ - 6}} | 0.5862 | 0.2497 | 57.40% | 0.0428 | 0.0409 | 4.44% | 0.3145 | 0.1453 | 53.80% | |
\sigma {\rm{ = }}15 | \varepsilon {\rm{ = }}{10^{ - 1}} | 0.5326 | 0.2681 | 49.66% | 0.0496 | 0.0461 | 7.06% | 0.2911 | 0.1571 | 46.03% |
\varepsilon {\rm{ = }}{10^{ - 2}} | 0.5294 | 0.2383 | 54.98% | 0.0368 | 0.0331 | 10.05% | 0.2831 | 0.1357 | 52.06% | |
\varepsilon {\rm{ = }}{10^{ - 4}} | 0.5461 | 0.1566 | 71.32% | 0.0373 | 0.0346 | 7.23% | 0.2917 | 0.0956 | 67.22% | |
\varepsilon {\rm{ = }}{10^{ - 6}} | 0.5109 | 0.2092 | 59.05% | 0.0326 | 0.0212 | 34.96% | 0.2718 | 0.1152 | 57.61% |
In both Case 1 and Case 2, the H-MCEKF attains a lower MSE than the MCEKF for every combination of σ and ε considered.
A novel maximum correntropy high-order extended Kalman filter (H-MCEKF) has been designed for nonlinear non-Gaussian systems in this paper. 1) The nonlinear polynomials are defined as implicit function variables, which transforms the state model into a pseudo-linear form. 2) A linear model is established among all implicit function variables. 3) The state model is then equivalently rewritten as a linear model by combining the original states with the implicit variables, and the measurement model is treated similarly.
Nonlinear functions in additive polynomial form can directly employ the new filter for state estimation. For general nonlinear functions, the multi-dimensional Taylor network can be employed to expand them into additive polynomials. However, this expansion is not exact, and model uncertainty remains. Therefore, much work remains in solving nonlinear non-Gaussian systems.
[1] C. B. Wen, Z. D. Wang, Q. Y. Liu, et al., “Recursive distributed filtering for a class of state-saturated systems with fading measurements and quantization effects,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol.48, no.6, pp.930–941, 2018. DOI: 10.1109/TSMC.2016.2629464
[2] T. Wen, Q. B. Ge, X. Lyu, et al., “A cost-effective wireless network migration planning method supporting high-security enabled railway data communication systems,” Journal of the Franklin Institute, vol.35, no.6, pp.114–121, 2019.
[3] S. Y. Ji and C. L. Wen, “Data preprocessing method and fault diagnosis based on evaluation function of information contribution degree,” Journal of Control Science and Engineering, vol.2018, no.1, pp.1–10, 2018.
[4] X. Guo, L. L. Sun, T. Wen, et al., “Adaptive transition probability matrix-based parallel IMM algorithm,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol.19, no.14, pp.1–10, 2019.
[5] S. P. Talebi, S. Kanna, and D. P. Mandic, “A distributed quaternion Kalman filter with applications to smart grid and target tracking,” IEEE Transactions on Signal and Information Processing over Networks, vol.2, no.4, pp.477–488, 2016.
[6] T. Wen, C. B. Wen, C. Roberts, et al., “Distributed filtering for a class of discrete-time systems over wireless sensor networks,” Journal of the Franklin Institute, vol.357, no.5, pp.3038–3055, 2020. DOI: 10.1016/j.jfranklin.2020.02.005
[7] G. Q. Wang, Y. G. Zhang, and X. D. Wang, “Iterated maximum correntropy unscented Kalman filters for non-Gaussian systems,” Signal Processing, vol.163, pp.87–94, 2019. DOI: 10.1016/j.sigpro.2019.05.015
[8] B. D. O. Anderson and J. B. Moore, Optimal Filtering, New York: Prentice-Hall, 1979.
[9] R. V. D. Merwe and E. A. Wan, “Sigma-point Kalman filters for integrated navigation,” in Proc. of the 60th Annual Meeting of the Institute of Navigation, Dayton, pp.641–654, 2004.
[10] S. J. Julier and J. K. Uhlmann, “Unscented filtering and nonlinear estimation,” Proceedings of the IEEE, vol.92, no.3, pp.401–422, 2004. DOI: 10.1109/JPROC.2003.823141
[11] I. Arasaratnam and S. Haykin, “Cubature Kalman filters,” IEEE Transactions on Automatic Control, vol.54, no.6, pp.1254–1269, 2009.
[12] K. Ito and K. Xiong, “Gaussian filters for nonlinear filtering problems,” IEEE Transactions on Automatic Control, vol.45, no.5, pp.910–927, 2000. DOI: 10.1109/9.855552
[13] J. M. Morris, “The Kalman filter: A robust estimator for some classes of linear quadratic problems,” IEEE Transactions on Information Theory, vol.22, no.5, pp.526–534, 1976. DOI: 10.1109/TIT.1976.1055611
[14] Z. Z. Wu, J. H. Shi, X. Zhang, et al., “Kernel recursive maximum correntropy,” Signal Processing, vol.117, pp.11–16, 2015. DOI: 10.1016/j.sigpro.2015.04.024
[15] B. D. Chen, X. Liu, H. Q. Zhao, et al., “Maximum correntropy Kalman filter,” Automatica, vol.76, pp.70–77, 2017. DOI: 10.1016/j.automatica.2016.10.004
[16] X. Liu, H. Qu, J. H. Zhao, et al., “Extended Kalman filter under maximum correntropy criterion,” in Proc. of the Int. Joint Conf. on Neural Networks, Vancouver, BC, pp.1733–1737, 2016.
[17] C. Zhang and H. S. Yan, “Identification of nonlinear time-varying system with noise based on multi-dimensional Taylor network with optimal structure,” Journal of Southeast University, vol.47, no.6, pp.1086–1093, 2017. (in Chinese)