<code data-enlighter-language="python" class="EnlighterJSRAW">from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential()
# Input: 3-channel 100x100 pixel images -> (100, 100, 3) tensors.
# Use 32 convolution filters, each of size 3x3.
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(100, 100, 3)))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))</code>
Deep Learning: The Structure of the LeNet Model
<code data-enlighter-language="python" class="EnlighterJSRAW">import tensorflow as tf
from tensorflow.contrib.layers import flatten

def LeNet(x):
    mu = 0
    sigma = 0.1
    weights = {
        'wc1': tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean=mu, stddev=sigma)),
        'wc2': tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean=mu, stddev=sigma)),
        'wf1': tf.Variable(tf.truncated_normal(shape=(400, 120), mean=mu, stddev=sigma)),
        'wf2': tf.Variable(tf.truncated_normal(shape=(120, 84), mean=mu, stddev=sigma)),
        'out': tf.Variable(tf.truncated_normal(shape=(84, 10), mean=mu, stddev=sigma))}
    biases = {
        'bc1': tf.Variable(tf.zeros([6])),
        'bc2': tf.Variable(tf.zeros([16])),
        'bf1': tf.Variable(tf.zeros([120])),
        'bf2': tf.Variable(tf.zeros([84])),
        'out': tf.Variable(tf.zeros([10]))}

    # Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
    out1 = tf.nn.conv2d(x, weights['wc1'], strides=[1, 1, 1, 1], padding='VALID')
    out1 = tf.nn.bias_add(out1, biases['bc1'])
    # Activation.
    out1 = tf.nn.relu(out1)
    # Pooling. Input = 28x28x6. Output = 14x14x6.
    out1 = tf.nn.max_pool(out1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # Layer 2: Convolutional. Output = 10x10x16.
    out2 = tf.nn.conv2d(out1, weights['wc2'], strides=[1, 1, 1, 1], padding='VALID')
    out2 = tf.nn.bias_add(out2, biases['bc2'])
    # Activation.
    out2 = tf.nn.relu(out2)
    # Pooling. Input = 10x10x16. Output = 5x5x16.
    out2 = tf.nn.max_pool(out2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # Flatten. Input = 5x5x16. Output = 400.
    flat = flatten(out2)

    # Layer 3: Fully Connected. Input = 400. Output = 120.
    out3 = tf.matmul(flat, weights['wf1'])
    out3 = tf.nn.bias_add(out3, biases['bf1'])
    # Activation.
    out3 = tf.nn.relu(out3)

    # Layer 4: Fully Connected. Input = 120. Output = 84.
    out4 = tf.matmul(out3, weights['wf2'])
    out4 = tf.nn.bias_add(out4, biases['bf2'])
    # Activation.
    out4 = tf.nn.relu(out4)

    # Layer 5: Fully Connected. Input = 84. Output = 10.
    logits = tf.matmul(out4, weights['out'])
    logits = tf.nn.bias_add(logits, biases['out'])
    return logits</code>
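The spatial sizes annotated in the comments follow from the VALID-padding convolution formula (output = input - kernel + 1) and 2x2 pooling with stride 2 (output = input / 2). A quick plain-Python sketch to verify that a 32x32 input really ends up as a 400-element flat vector:

```python
def conv_valid(size, kernel):
    # VALID padding, stride 1: output = input - kernel + 1
    return size - kernel + 1

def pool2x2(size):
    # 2x2 max pooling with stride 2 halves each spatial dimension
    return size // 2

s = 32                  # LeNet input: 32x32x1
s = conv_valid(s, 5)    # conv1 -> 28x28x6
s = pool2x2(s)          # pool1 -> 14x14x6
s = conv_valid(s, 5)    # conv2 -> 10x10x16
s = pool2x2(s)          # pool2 -> 5x5x16
flat_size = s * s * 16  # flatten -> 400, matching weights['wf1'] shape (400, 120)
print(flat_size)        # 400
```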
Usage of the Pickle Module
import pickle

with open("save.p", "wb") as f:
    pickle.dump(data, f)
with open("save.p", "rb") as f:
    data = pickle.load(f)
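A minimal round-trip sketch (the dictionary contents and the file name save.p are just illustrative examples):

```python
import os
import pickle
import tempfile

data = {"weights": [0.1, 0.2, 0.3], "epoch": 5}

# Serialize the object to a file, then load it back.
path = os.path.join(tempfile.mkdtemp(), "save.p")
with open(path, "wb") as f:
    pickle.dump(data, f)
with open(path, "rb") as f:
    restored = pickle.load(f)

print(restored == data)  # True: the round trip preserves the object
```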
Matrix Operations in Python
Multiplying a NumPy matrix by a scalar multiplies every element by that scalar.
Adding a scalar to a NumPy matrix adds that scalar to every element.
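Both rules are instances of NumPy broadcasting, and are easy to check directly:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])

# Scalar multiplication broadcasts over every element.
print(A * 2)   # [[2 4] [6 8]]

# Scalar addition does the same.
print(A + 10)  # [[11 12] [13 14]]
```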
General Usage of NumPy
Usage of numpy.matmul():
https://numpy.org/doc/stable/reference/generated/numpy.matmul.html
It computes the matrix product of two arrays, so both inputs must be array-like; multiplication by a scalar is not allowed.
This differs from numpy.dot(), which does accept a scalar operand.
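The difference is easy to see side by side: np.dot happily scales a matrix by a scalar, while np.matmul rejects scalar operands outright.

```python
import numpy as np

A = np.array([[1, 0], [0, 1]])
B = np.array([[4, 1], [2, 2]])

# matmul performs standard matrix multiplication.
C = np.matmul(A, B)
print(C)             # [[4 1] [2 2]]

# dot accepts a scalar operand and scales every element.
print(np.dot(B, 3))  # [[12 3] [6 6]]

# matmul raises an error when given a scalar.
try:
    np.matmul(B, 3)
except (TypeError, ValueError):
    print("matmul rejects scalar operands")
```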
Converting RGB to a grayscale image (input: (32, 32, 3), output: (32, 32)):
from skimage import color
X_train_gray=color.rgb2gray(X_train[0])
Adding the required dimensions with NumPy (input: (32, 32), output: (1, 32, 32, 1)):
X_train_gray = X_train_gray.reshape(1, 32, 32, 1)
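The same shape manipulations can be checked with plain NumPy, using a manual luma-weighted sum in place of skimage's rgb2gray (which applies similar ITU-R coefficients) and a random array standing in for X_train[0]:

```python
import numpy as np

# A fake 32x32 RGB image standing in for X_train[0].
rgb = np.random.rand(32, 32, 3)

# Approximate rgb2gray with ITU-R 601 luma weights.
gray = rgb @ np.array([0.299, 0.587, 0.114])
print(gray.shape)       # (32, 32)

# Add batch and channel axes to get a (1, 32, 32, 1) tensor.
gray4d = gray.reshape(1, 32, 32, 1)
print(gray4d.shape)     # (1, 32, 32, 1)

# Equivalent using np.newaxis instead of reshape:
gray4d_alt = gray[np.newaxis, :, :, np.newaxis]
print(gray4d_alt.shape) # (1, 32, 32, 1)
```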
Setting Up an HTTPS Site with Let's Encrypt
Using Ubuntu 18.04 LTS + Apache2 as an example:
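On that setup, the standard route is Certbot's Apache plugin. A sketch of the steps (the domain example.com is a placeholder; package names follow Certbot's Ubuntu 18.04 instructions and should be verified against the current docs):

```shell
# Install Certbot and its Apache plugin.
sudo apt update
sudo apt install certbot python3-certbot-apache

# Obtain a certificate and let Certbot rewrite the Apache config for HTTPS.
sudo certbot --apache -d example.com -d www.example.com

# Certificates last 90 days; the package installs an automatic renewal timer.
# Verify that renewal will work:
sudo certbot renew --dry-run
```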
A Few Small Python Tips
An alternative way to write an if statement (a conditional expression):
is_correct_string = 'Yes' if output == correct_output else 'No'
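The one-liner above is equivalent to a four-line if/else; a self-contained sketch with example values filled in for output and correct_output:

```python
output = "hello"
correct_output = "hello"

# Conditional expression: picks a value inline.
is_correct_string = 'Yes' if output == correct_output else 'No'
print(is_correct_string)  # Yes

# Equivalent long form:
if output == correct_output:
    long_form = 'Yes'
else:
    long_form = 'No'
print(long_form == is_correct_string)  # True
```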
The 小桔灯 (Little Orange Lamp) Blog Has Begun!
Just like the little orange lamp in Bing Xin's essay, I hope this place becomes a small lamp that lights the road ahead through the dark!