
Detailed Steps for Ridge Regression in SPSS




Method 1:

1. Run a linear regression with the multiple independent variables and, in the Statistics dialog, check Collinearity diagnostics (L).

2. If the variance inflation factor (VIF) in the output is greater than 5, ridge regression analysis is warranted.

3. Open a new Syntax Editor window and enter the following command:
INCLUDE 'installation directory\Ridge regression.sps'. RIDGEREG DEP=dependent variable name /ENTER=independent variable names (separated by spaces) /START=0 /STOP=1 [or another value] /INC=0.05 [or another search step] /K=999 .

4. Choose Run > All to obtain the ridge trace plot of the independent variables and the plot of the coefficient of determination R² against K. Draw reference lines on the plots and pick the K value at the balance point where the ridge traces have leveled off while R² is still relatively large.

5. Change the K value in the syntax (the /K= argument) to the chosen value and run all again to obtain the detailed parameters of the final model (see the worked example below).
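
As a worked sketch of these five steps, assume a dependent variable y and independent variables x1, x2, x3 (hypothetical names) and the installation path used later in this article; adjust the path and the variable names to your own setup:

* Steps 1-2: ordinary regression with collinearity diagnostics (check the VIF column).
REGRESSION
  /STATISTICS COEFF R ANOVA COLLIN TOL
  /DEPENDENT y
  /METHOD=ENTER x1 x2 x3.

* Steps 3-4: ridge trace over K = 0 to 1 in steps of 0.05.
INCLUDE 'd:\spss20.0\Ridge regression.sps'.
RIDGEREG DEP=y /ENTER=x1 x2 x3 /START=0 /STOP=1 /INC=0.05 /K=999 .

* Step 5: refit at the K chosen from the ridge trace (0.2 here is purely illustrative).
RIDGEREG DEP=y /ENTER=x1 x2 x3 /K=0.2 .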

[A supplementary note on the significance of the coefficients:]
The copy of SPSS I installed does not produce significance tests for ridge regression, and I do not know why. Opening Ridge regression.sps reveals a dense program implementing ridge regression; fortunately, every major step is annotated. Searching through it, I found the part that estimates the ridge regression coefficients, and sure enough there are no statements for a significance test.
Fortunately, the value of the test statistic is not hard to obtain from the existing statements. But reporting only that value is inconvenient for hypothesis testing; as in ordinary regression, it is better to report the significance (P) value as well. The relationship follows directly from the meaning of a P value; the question is how to write the corresponding program statements. Using the ANOVA part above the coefficient estimation as a model (the ANOVA part does print the P value for the F test), I experimented with the code and finally got it to work. After saving the file, ridge regression now reports P values for the significance tests.
I have not used other SPSS versions, and some versions apparently do not have this problem. I am sharing this approach in the hope that it helps anyone with the same issue.
Appendix: the statements added to Ridge regression.sps. (In the original post the additions were marked in red italics; that formatting is lost here. The additions are the line computing ppp, the two-sided P value from the t distribution, plus the changes that add ppp and the 'sig' column to the printed coefficient table and relabel the B/SE(B) column as T. A few of the original statements were also modified slightly to produce the corresponding output.)
*---------------------------------------------------------------------------.
* Calculate raw coefficients from standardized ones, compute standard errors
* of coefficients, and an intercept term with standard error. Then print
* out similar to REGRESSION output.
*--------------------------------------------------------------------------- (the coefficient estimation output starts here).
. compute beta={b;0}.
. compute b= ( b &/ std ) * sy.
. compute intercpt=ybar-t(b)*t(xmean).
. compute b={b;intercpt}.
. compute xpx=(sse/(sst*(n-nv-1)))*inv(xpx+(k &* ident(nv,nv)))*xpx*
                                 inv(xpx+(k &* ident(nv,nv))).
. compute xpx=(sy*sy)*(mdiag(1 &/ std)*xpx*mdiag(1 &/ std)).
. compute seb=sqrt(diag(xpx)).
. compute seb0=sqrt( (sse)/(n*(n-nv-1)) + xmean*xpx*t(xmean)).
. compute seb={seb;seb0}.
. compute rnms={varname,'Constant'}.
. compute ratio=b &/ seb.
. compute ppp=2*(1-tcdf(abs(ratio),n-nv-1)).
. compute bvec={b,seb,beta,ratio,ppp}.
. print bvec/title='--------------Variables in the Equation----------------'
  /rnames=rnms /clabels='B' 'SE(B)' 'Beta' 'T' 'sig'.
                        
. print /space=newpage.
end if.

Method 2:

Ridge regression can also be run directly in SPSS; SPSS 18 already has fairly complete support for this.

The steps are: Regression → Optimal Scaling (the CATREG procedure) → Regularization (ridge regression is one of the options there).
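
The following is only a rough syntax sketch of that dialog route, using hypothetical variables y, x1, x2 and x3. The exact CATREG subcommand keywords can differ across versions, so the safest approach is to set the options in the dialog and click Paste to obtain the syntax that is valid for your installation:

* Rough sketch only - verify against the CATREG syntax pasted from the dialog.
CATREG VARIABLES=y x1 x2 x3
  /ANALYSIS=y(LEVEL=NUME) WITH x1(LEVEL=NUME) x2(LEVEL=NUME) x3(LEVEL=NUME)
  /REGULARIZATION=RIDGE(0 1 0.05).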


Doing ridge regression in SPSS

Ridge regression can be done either by downloading a dedicated module or by writing the syntax yourself. Most people choose the latter, mainly because the code is very short and easy to write. The code is as follows:
INCLUDE 'd:\spss20.0\Ridge Regression.sps'.
Ridgereg enter=X1 X2 X3
/dep=y .
See, just these three lines. In the first line, the single quotes enclose your SPSS installation directory; mine is installed under d:\spss20.0, so I enter d:\spss20.0, and if yours is under C:, enter the corresponding path on C:. The Ridge Regression.sps after the directory is the macro file that implements ridge regression and is being called here. In the second line, put the names of your independent variables in place of X1 X2 X3, however many there are, separated by spaces. In the third line, put your dependent variable in place of y.

To run it, open File → New → Syntax to bring up the Syntax Editor, type the code above, and click Run → All. The output includes a coefficient table: the first column is the K value, the second is the coefficient of determination, and the columns after that are your independent variables. K increases from 0, while the coefficient of determination gradually decreases and eventually levels off. (Ridge regression sacrifices some information in order to mitigate multicollinearity.) From this table, choose a suitable K value that keeps the coefficient of determination as large as possible while remaining stable; once K is chosen, write out the equation from the corresponding coefficients. Note that this ridge-trace output contains no constant term, which is one difference from ordinary regression output.
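
Once a K value has been chosen from that table, a follow-up run that fixes /K at the chosen value (0.2 below is purely illustrative) makes the macro print the ANOVA table and a "Variables in the Equation" table, which does include the constant term:

INCLUDE 'd:\spss20.0\Ridge Regression.sps'.
Ridgereg enter=X1 X2 X3
/dep=y
/k=0.2 .

For reference, the full source of the Ridge regression.sps macro is listed below.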


preserve.
set printback=off.
define ridgereg (enter=!charend('/')
/dep = !charend('/')
/start=!default(0) !charend('/')
/stop=!default(1) !charend('/')
/inc=!default(.05) !charend('/')
/k=!default(999) !charend('/')
/debug=!DEFAULT ('N')!charend('/')  ).

preserve.
!IF ( !DEBUG !EQ 'N') !THEN
set printback=off mprint off.
!ELSE
set printback on mprint on.
!IFEND .
SET mxloops=200.
*---------------------------------------------------------------------------.
* Save original active file to give back after macro is done.
*---------------------------------------------------------------------------.
!IF (!DEBUG !EQ 'N') !THEN
SET RESULTS ON.
DO IF $CASENUM=1.
PRINT / "NOTE: ALL OUTPUT INCLUDING ERROR MESSAGES HAVE BEEN TEMPORARILY"
/ "SUPPRESSED. IF YOU EXPERIENCE UNUSUAL BEHAVIOR, RERUN THIS"
/ "MACRO WITH AN ADDITIONAL ARGUMENT /DEBUG='Y'."
/ "BEFORE DOING THIS YOU SHOULD RESTORE YOUR DATA FILE."
/ "THIS WILL FACILITATE FURTHER DIAGNOSIS OF ANY PROBLEMS.".
END IF.
!IFEND .
save outfile='rr__tmp1.sav'.
*---------------------------------------------------------------------------.
* Use CORRELATIONS to create the correlation matrix.
*---------------------------------------------------------------------------.
* DEFAULT:  SET RESULTS AND ERRORS OFF TO SUPPRESS CORRELATION PIVOT TABLE *.
!IF (!DEBUG='N') !THEN
set results off errors off.
!IFEND

correlations variables=!dep !enter /missing=listwise/matrix out(*).
set errors on results listing .
*---------------------------------------------------------------------------.
* Enter MATRIX.
*---------------------------------------------------------------------------.
matrix.
*---------------------------------------------------------------------------.
* Initialize k, increment, and  number of iterations. If k was not
* specified, it is 999 and looping will occur. Otherwise, just the one
* value of k will be used for estimation.
*---------------------------------------------------------------------------.
do if (!k=999).
. compute k=!start.
. compute inc=!inc.
. compute iter=trunc((!stop - !start ) / !inc ) + 1.
. do if (iter <= 0).
.   compute iter = 1.
. end if.
else.
. compute k=!k.
. compute inc=0.
. compute iter=1.
end if.
*---------------------------------------------------------------------------.
* Get data from working matrix file.
*---------------------------------------------------------------------------.
get x/file=*/names=varname/variable=!dep !enter.
*---------------------------------------------------------------------------.
* Third row of matrix input is the vector of Ns. Use this to compute number
* of variables.
*---------------------------------------------------------------------------.
compute n=x(3,1).
compute nv=ncol(x)-1.
*---------------------------------------------------------------------------.
* Get variable names.
*---------------------------------------------------------------------------.
compute varname=varname(2:(nv+1)).
*---------------------------------------------------------------------------.
* Get X'X matrix (or R, matrix of predictor correlations) from input data
* Also get X'Y, or correlations of predictors with dependent variable.
*---------------------------------------------------------------------------.
compute xpx=x(5:(nv+4),2:(nv+1)).
compute xy=t(x(4,2:(nv+1))).
*---------------------------------------------------------------------------.
* Initialize the keep matrix for saving results, and the names vector.
*---------------------------------------------------------------------------.

compute keep=make(iter,nv+2,-999).
compute varnam2={'K','RSQ',varname}.
*---------------------------------------------------------------------------.
* Compute means and standard deviations. Means are in the first row of x and
* standard deviations are in the second row. Now that all of x has been
* appropriately stored, release x to maximize available memory.
*---------------------------------------------------------------------------.
compute xmean=x(1,2:(nv+1)).
compute ybar=x(1,1).
compute std=t(x(2,2:(nv+1))).
compute sy=x(2,1).
release x.
*---------------------------------------------------------------------------.
* Start loop over values of k, computing standardized regression
* coefficients and squared multiple correlations. Store results
*---------------------------------------------------------------------------.
loop l=1 to iter.
. compute b = inv(xpx+(k &* ident(nv,nv)))*xy.
. compute rsq= 2* t(b)*xy - t(b)*xpx*b.
. compute keep(l,1)=k.
. compute keep(l,2)=rsq.
. compute keep(l,3:(nv+2))=t(b).
. compute k=k+inc.
end loop.
*---------------------------------------------------------------------------.
* If we are to print out estimation results, compute needed pieces and
* print out header and ANOVA table.
*---------------------------------------------------------------------------.
do if (!k <> 999).
.!let !rrtitle=!concat('****** Ridge Regression with k = ',!k).
.!let !rrtitle=!quote(!concat(!rrtitle,' ****** ')).
. compute sst=(n-1) * sy **2.
. compute sse=sst * ( 1 - 2* t(b)*xy + t(b)*xpx*b).
. compute ssr = sst - sse.
. compute s=sqrt( sse / (n-nv-1) ).
. print /title=!rrtitle /space=newpage.
. print {sqrt(rsq);rsq;rsq-nv*(1-rsq)/(n-nv-1);s}
/rlabel='Mult R' 'RSquare' 'Adj RSquare' 'SE'
/title=' '.
. compute anova={nv,ssr,ssr/(nv);n-nv-1,sse,sse/(n-nv-1)}.
. compute f=ssr/sse * (n-nv-1)/(nv).
. print anova
/clabels='df' 'SS','MS'
/rlabel='Regress' 'Residual'
/title='         ANOVA table'
/format=f9.3.
.  compute test=ssr/sse * (n-nv-1)/nv.
.  compute sigf=1 - fcdf(test,nv,n-nv-1).
.  print {test,sigf} /clabels='F value' 'Sig F'/title=' '.
*---------------------------------------------------------------------------.
* Calculate raw coefficients from standardized ones, compute standard errors
* of coefficients, and an intercept term with standard error. Then print
* out similar to REGRESSION output.
*---------------------------------------------------------------------------.
. compute beta={b;0}.
. compute b= ( b &/ std ) * sy.
. compute intercpt=ybar-t(b)*t(xmean).
. compute b={b;intercpt}.
. compute xpx=(sse/(sst*(n-nv-1)))*inv(xpx+(k &* ident(nv,nv)))*xpx*
inv(xpx+(k &* ident(nv,nv))).
. compute xpx=(sy*sy)*(mdiag(1 &/ std)*xpx*mdiag(1 &/ std)).
. compute seb=sqrt(diag(xpx)).
. compute seb0=sqrt( (sse)/(n*(n-nv-1)) + xmean*xpx*t(xmean)).
. compute seb={seb;seb0}.
. compute rnms={varname,'Constant'}.
. compute ratio=b &/ seb.
. compute bvec={b,seb,beta,ratio}.
. print bvec/title='--------------Variables in the Equation----------------'
/rnames=rnms /clabels='B' 'SE(B)' 'Beta' 'B/SE(B)'.
. print /space=newpage.
end if.
*---------------------------------------------------------------------------.
* Save kept results into file. The number of cases in the file will be
* equal to the number of values of k for which results were produced. This
* will be simply 1 if k was specified.
*---------------------------------------------------------------------------.
save keep /outfile='rr__tmp2.sav' /names=varnam2.
*---------------------------------------------------------------------------.
* Finished with MATRIX part of job.
*---------------------------------------------------------------------------.
end matrix.
*---------------------------------------------------------------------------.
* If doing ridge trace, get saved file and produce table and plots.
*---------------------------------------------------------------------------.

!if (!k = 999) !then
get file='rr__tmp2.sav'.
print formats k rsq (f6.5) !enter (f8.6).
report format=list automatic
/vars=k rsq !enter
/title=center 'R-SQUARE AND BETA COEFFICIENTS FOR ESTIMATED VALUES OF K'.

plot
/format=overlay /title='RIDGE TRACE'
/horizontal 'K'
/vertical 'RR Coefficients'
/plot !enter with k
/title='R-SQUARE VS. K'
/horizontal 'K'
/vertical 'R-Square'
/plot rsq with k.
!ifend.

*---------------------------------------------------------------------------.
* Get back original data set and restore original settings.
*---------------------------------------------------------------------------.
get file='rr__tmp1.sav'.
restore.
!enddefine.
restore.

