The LMS (Least Mean Square) algorithm is one of the most common algorithms in adaptive signal processing and in adaptive systems generally. Using the linear-algebra conveniences of the Eigen library, the program below runs an adaptive FIR filter and computes the mean square error (MSE). The program is as follows:
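In symbols, the per-sample recursion at the heart of LMS is the standard textbook form:

    e(n) = d(n) - w(n)^T x(n)           (a-priori error)
    w(n+1) = w(n) + mu * e(n) * x(n)    (weight update)

where x(n) is the length-L tap-delay input vector, d(n) is the desired signal, and mu is the step size. The MSE curve is obtained by averaging e(n)^2 over independent runs.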
/*The example of LMS algorithm for Adaptive Filtering
*Least mean squares (LMS) algorithms are a class of adaptive filter
*used to mimic a desired filter by finding the filter coefficients
*that relate to producing the least mean square of the error signal
*(difference between the desired and the actual signal). It is a
*stochastic gradient descent method in that the filter is only
*adapted based on the error at the current time.
*The basic idea behind LMS filter is to approach the optimum filter
*weights R^{-1}P, by updating the filter
*weights in a manner to converge to the optimum filter weight. This
*is based on the gradient descent algorithm. The algorithm starts by
*assuming small weights (zero in most cases) and, at each step, by
*finding the gradient of the mean square error, the weights are updated.
*That is, if the MSE-gradient is positive, the error would keep
*increasing if the same weight is used for further iterations,
*which means we need to reduce the weights. In the same way,
*if the gradient is negative, we need to increase the weights.
*/
#include <iostream>
#include <Eigen/Dense>
#include <algorithm>
#include <cmath>
using namespace std;
using namespace Eigen;
const int L=8; //adaptive filter length
const double mu=0.14; //step size mu
/*Pass the input x through the length-L FIR system f to get the desired signal d*/
VectorXd filter(VectorXd f,int avg,VectorXd x,int sampleN){
    VectorXd d=VectorXd::Zero(sampleN);
    for(int i=0;i<sampleN;++i){
        //convolution sum: only taps with a valid input sample contribute
        for(int j=0;j<=i && j<L;++j){
            d[i]+=f[j]*x[i-j];
        }
        d[i]/=avg; //optional gain normalization (avg=1 here)
    }
    return d;
}
/*Build the length-L tap-delay input vector x = [u[j], u[j-1], ..., u[j-L+1]]*/
VectorXd getFromInput(VectorXd u,int j){
    VectorXd x=VectorXd::Zero(L);
    for(int i=0;i<L;++i){
        x[i]=u[j-i];
    }
    return x;
}
int main()
{
    int i=0,k=0; //loop counters
    int K = 10; //number of independent runs
    int N = 1000; //number of samples per run
    double sum=0.0; //MSE averaged over all samples
    VectorXd MSE=VectorXd::Zero(N); //mean square error, averaged over runs
    VectorXd unw=VectorXd::Random(L); //unknown system to be identified
    for(k=0;k<K;++k){
        VectorXd w=VectorXd::Zero(L); //adaptive weights start at zero
        VectorXd u=VectorXd::Random(N);
        VectorXd d=filter(unw,1,u,N);
        for(i=L;i<N;++i){
            VectorXd x=getFromInput(u,i);
            double e=d[i]-x.transpose()*w; //a-priori error
            w=w+mu*e*x; //LMS weight update
            MSE[i]=MSE[i]+e*e;
        }
    }
    for(k=0;k<N;++k){
        MSE[k]=MSE[k]/K;
    }
    for(i=0;i<N;++i){
        sum=sum+MSE[i]/N;
    }
    cout<<"average MSE = "<<sum<<endl;
    return 0;
}