The main purpose of working through this program is to understand, in depth, the basic principles and architecture of a deep autoencoder: how the network is built, how it is trained, how it extracts features from its input, and finally how those features are used for classification.
The main program is listed below:
mnistdeepauto.m
% Version 1.000
%
% Code provided by Ruslan Salakhutdinov and Geoff Hinton
%
% Permission is granted for anyone to copy, use, modify, or distribute this
% program and accompanying programs and documents for any purpose, provided
% this copyright notice is retained and prominently displayed, along with
% a note saying that the original programs are available from our
% web page.
% The programs and documents are distributed without any warranty, express or
% implied. As the programs were written for research purposes only, they have
% not been tested to the degree that would be advisable in any important
% application. All use of these programs is entirely at the user's own risk.


% This program pretrains a deep autoencoder for the MNIST dataset.
% You can set the maximum number of epochs for pretraining each layer
% and you can set the architecture of the multilayer net.

clear all   % clear all variables from the workspace
close all   % close all figure windows

maxepoch=10; %In the Science paper we use maxepoch=50, but it works just fine.
numhid=1000; numpen=500; numpen2=250; numopen=30;
% Sizes of the four hidden layers. Together with the 784 input pixels this
% gives the encoder 784-1000-500-250-30; tracing these four variables
% through the script makes the architecture easy to follow.

fprintf(1,'Converting Raw files into Matlab format \n');
converter;
% The raw binary data files provided by the authors must first be converted
% into Matlab format. Note: when this script reached the first fread call it
% raised an error saying the file identifier was invalid and suggesting the
% fopen function, i.e. the data files must be opened with fopen first.
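% --- Illustrative sketch (not part of the original script): the MNIST files
% are big-endian idx binaries, so converter.m must obtain a valid file
% identifier from fopen before fread can parse them. The file name follows
% the standard MNIST distribution; the magic number 2051 marks image files.
f = fopen('train-images-idx3-ubyte','r','ieee-be');  % big-endian read
magic   = fread(f,1,'int32');   % 2051 for image files
numimgs = fread(f,1,'int32');   % 60000 training images
numrows = fread(f,1,'int32');   % 28
numcols = fread(f,1,'int32');   % 28
rawpixels = fread(f,numrows*numcols*numimgs,'uchar');
fclose(f);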

fprintf(1,'Pretraining a deep autoencoder. \n');
fprintf(1,'The Science paper used 50 epochs. This uses %3i \n', maxepoch);

makebatches;
[numcases numdims numbatches]=size(batchdata);
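% --- Comment sketch: makebatches reshapes the training set into a 3-D array
% batchdata of size numcases x numdims x numbatches. With the 60,000 MNIST
% training images and a batch size of 100 this is 100 x 784 x 600; each page
% batchdata(:,:,b) is one mini-batch of 100 flattened 28x28 images with
% pixel values scaled to [0,1].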

fprintf(1,'Pretraining Layer 1 with RBM: %d-%d \n',numdims,numhid);
restart=1;
rbm;
hidrecbiases=hidbiases;
save mnistvh vishid hidrecbiases visbiases;
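% --- Illustrative sketch (not part of the original script): the core of one
% contrastive-divergence (CD-1) update as performed inside rbm.m for a single
% mini-batch "data" (numcases x numdims); momentum and weight decay are
% omitted here for brevity, and epsilonw is the weight learning rate.
poshidprobs  = 1./(1 + exp(-data*vishid - repmat(hidbiases,numcases,1)));
posprods     = data' * poshidprobs;                  % positive statistics
poshidstates = poshidprobs > rand(numcases,numhid);  % sample binary hiddens
negdata      = 1./(1 + exp(-poshidstates*vishid' - repmat(visbiases,numcases,1)));
neghidprobs  = 1./(1 + exp(-negdata*vishid - repmat(hidbiases,numcases,1)));
negprods     = negdata' * neghidprobs;               % negative statistics
vishid       = vishid + epsilonw*(posprods - negprods)/numcases;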

fprintf(1,'\nPretraining Layer 2 with RBM: %d-%d \n',numhid,numpen);
batchdata=batchposhidprobs;
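% --- Comment sketch: rbm.m stores the hidden-unit probabilities of every
% training case in batchposhidprobs (numcases x numhid x numbatches), so the
% assignment above turns the first RBM's hidden activities into the training
% data for the second RBM. The same greedy layer-by-layer scheme is repeated
% below for layers 3 and 4.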
numhid=numpen;
restart=1;
rbm;
hidpen=vishid; penrecbiases=hidbiases; hidgenbiases=visbiases;
save mnisthp hidpen penrecbiases hidgenbiases;

fprintf(1,'\nPretraining Layer 3 with RBM: %d-%d \n',numpen,numpen2);
batchdata=batchposhidprobs;
numhid=numpen2;
restart=1;
rbm;
hidpen2=vishid; penrecbiases2=hidbiases; hidgenbiases2=visbiases;
save mnisthp2 hidpen2 penrecbiases2 hidgenbiases2;

fprintf(1,'\nPretraining Layer 4 with RBM: %d-%d \n',numpen2,numopen);
batchdata=batchposhidprobs;
numhid=numopen;
restart=1;
rbmhidlinear;
hidtop=vishid; toprecbiases=hidbiases; topgenbiases=visbiases;
save mnistpo hidtop toprecbiases topgenbiases;
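% --- Illustrative sketch (not part of the original script): rbmhidlinear is
% the same CD-1 training as rbm.m except that the 30 top-level units are
% linear with unit-variance Gaussian noise rather than logistic, so the
% learned code is real-valued:
poshidprobs  = data*vishid + repmat(hidbiases,numcases,1);   % no sigmoid
poshidstates = poshidprobs + randn(numcases,numhid);         % add N(0,1) noise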

backprop;
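% --- Comment sketch: backprop.m reloads the four weight files saved above,
% unrolls them into a deep autoencoder 784-1000-500-250-30-250-500-1000-784
% (the decoder is initialized with the transposed encoder weights), and
% fine-tunes all weights with conjugate gradients to minimize the
% cross-entropy error between each input image and its reconstruction. The
% 30-dimensional code layer is the feature vector used for visualization or
% classification.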
Original article: http://blog.csdn.net/liyuanhao_1114/article/details/18033223