TensorFlow License Plate Recognition, Complete Version (with License Plate Dataset)

2023-05-16

In an earlier post, "MNIST数据集实现车牌识别--初步演示版" (license plate recognition with the MNIST dataset, a preliminary demo), we showed how to do license plate recognition with TensorFlow. That demo, however, used the MNIST handwritten-digit dataset, so it could only classify the ten digits 0-9; it could not classify province abbreviations or letters, which made it too limited to be of practical use.

After plate localization and character segmentation, I collected an image dataset covering the relevant province abbreviations and the 26 letters. Combined with the Python + TensorFlow code from the earlier post, this gives a complete license plate recognition pipeline. In the spirit of sharing, the full code and the plate dataset are provided below.

License plate dataset download (about 4,000 images): https://pan.baidu.com/s/1RyoMbHtLUlsMDsvLBCLZ2w
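
For reference, here is the directory layout the three scripts below expect, reconstructed from the path strings in their code (the folder names inside the download may differ slightly, so adjust the dir strings if necessary); unpack the dataset next to the scripts so the relative paths resolve:

train_images/
    training-set/
        chinese-characters/0/ ... 5/    (province characters, class folders 0-5)
        letters/10/ ... 35/             (city-code letters, class folders 10-35)
        0/ ... 33/                      (plate-number digits and letters, class folders 0-33)
    validation-set/
        chinese-characters/0/ ... 5/
        0/ ... 35/
test_images/
    1.bmp ... 7.bmp                     (1 = province character, 2 = city letter, 3-7 = plate number)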

Province-abbreviation training + recognition code (save the file as train-license-province.py). When copying the code, pay close attention to Python indentation: a single indentation mistake will produce wrong results or an exception:

#!/usr/bin/python3.5
# -*- coding: utf-8 -*-  
 
import sys
import os
import time
import random
 
import numpy as np
import tensorflow as tf
 
from PIL import Image
 
 
SIZE = 1280
WIDTH = 32
HEIGHT = 40
NUM_CLASSES = 6
iterations = 300
 
SAVER_DIR = "train-saver/province/"
 
PROVINCES = ("京","闽","粤","苏","沪","浙")
nProvinceIndex = 0
 
time_begin = time.time()
 
 
# Define the input nodes: the flattened image pixel vectors and the one-hot image labels
x = tf.placeholder(tf.float32, shape=[None, SIZE])
y_ = tf.placeholder(tf.float32, shape=[None, NUM_CLASSES])
 
x_image = tf.reshape(x, [-1, WIDTH, HEIGHT, 1])
 
 
# Convolution layer helper: conv + ReLU + max pooling
def conv_layer(inputs, W, b, conv_strides, kernel_size, pool_strides, padding):
    L1_conv = tf.nn.conv2d(inputs, W, strides=conv_strides, padding=padding)
    L1_relu = tf.nn.relu(L1_conv + b)
    return tf.nn.max_pool(L1_relu, ksize=kernel_size, strides=pool_strides, padding='SAME')
 
# Fully connected layer helper
def full_connect(inputs, W, b):
    return tf.nn.relu(tf.matmul(inputs, W) + b)
 
 
if __name__ == '__main__' and sys.argv[1] == 'train':
    # First pass over the image directories: count the training images
    input_count = 0
    for i in range(0, NUM_CLASSES):
        dir = './train_images/training-set/chinese-characters/%s/' % i           # change this to your own image directory; i is the class label
        for rt, dirs, files in os.walk(dir):
            for filename in files:
                input_count += 1
 
    # Allocate arrays sized to the number of training images
    input_images = np.array([[0]*SIZE for i in range(input_count)])
    input_labels = np.array([[0]*NUM_CLASSES for i in range(input_count)])

    # Second pass over the image directories: fill in the image data and labels
    index = 0
    for i in range(0, NUM_CLASSES):
        dir = './train_images/training-set/chinese-characters/%s/' % i          # change this to your own image directory; i is the class label
        for rt, dirs, files in os.walk(dir):
            for filename in files:
                filename = dir + filename
                img = Image.open(filename)
                width = img.size[0]
                height = img.size[1]
                for h in range(0, height):
                    for w in range(0, width):
                        # Binarize: bright pixels (> 230) become background 0 and the rest become stroke 1, which thins the strokes and helps recognition accuracy
                        if img.getpixel((w, h)) > 230:
                            input_images[index][w+h*width] = 0
                        else:
                            input_images[index][w+h*width] = 1
                input_labels[index][i] = 1
                index += 1
 
    # First pass over the validation directories: count the validation images
    val_count = 0
    for i in range(0, NUM_CLASSES):
        dir = './train_images/validation-set/chinese-characters/%s/' % i           # change this to your own image directory; i is the class label
        for rt, dirs, files in os.walk(dir):
            for filename in files:
                val_count += 1
 
    # Allocate arrays sized to the number of validation images
    val_images = np.array([[0]*SIZE for i in range(val_count)])
    val_labels = np.array([[0]*NUM_CLASSES for i in range(val_count)])

    # Second pass over the validation directories: fill in the image data and labels
    index = 0
    for i in range(0, NUM_CLASSES):
        dir = './train_images/validation-set/chinese-characters/%s/' % i          # change this to your own image directory; i is the class label
        for rt, dirs, files in os.walk(dir):
            for filename in files:
                filename = dir + filename
                img = Image.open(filename)
                width = img.size[0]
                height = img.size[1]
                for h in range(0, height):
                    for w in range(0, width):
                        # Binarize: bright pixels (> 230) become background 0 and the rest become stroke 1, which thins the strokes and helps recognition accuracy
                        if img.getpixel((w, h)) > 230:
                            val_images[index][w+h*width] = 0
                        else:
                            val_images[index][w+h*width] = 1
                val_labels[index][i] = 1
                index += 1
    
    with tf.Session() as sess:
        # First convolutional layer
        W_conv1 = tf.Variable(tf.truncated_normal([8, 8, 1, 16], stddev=0.1), name="W_conv1")
        b_conv1 = tf.Variable(tf.constant(0.1, shape=[16]), name="b_conv1")
        conv_strides = [1, 1, 1, 1]
        kernel_size = [1, 2, 2, 1]
        pool_strides = [1, 2, 2, 1]
        L1_pool = conv_layer(x_image, W_conv1, b_conv1, conv_strides, kernel_size, pool_strides, padding='SAME')
 
        # Second convolutional layer
        W_conv2 = tf.Variable(tf.truncated_normal([5, 5, 16, 32], stddev=0.1), name="W_conv2")
        b_conv2 = tf.Variable(tf.constant(0.1, shape=[32]), name="b_conv2")
        conv_strides = [1, 1, 1, 1]
        kernel_size = [1, 1, 1, 1]
        pool_strides = [1, 1, 1, 1]
        L2_pool = conv_layer(L1_pool, W_conv2, b_conv2, conv_strides, kernel_size, pool_strides, padding='SAME')
 
 
        # Fully connected layer (the 2x2 max pool in layer 1 halves the 32x40 input to 16x20, layer 2 pools with stride 1 and outputs 32 feature maps, hence 16*20*32 inputs)
        W_fc1 = tf.Variable(tf.truncated_normal([16 * 20 * 32, 512], stddev=0.1), name="W_fc1")
        b_fc1 = tf.Variable(tf.constant(0.1, shape=[512]), name="b_fc1")
        h_pool2_flat = tf.reshape(L2_pool, [-1, 16 * 20*32])
        h_fc1 = full_connect(h_pool2_flat, W_fc1, b_fc1)
 
 
        # dropout
        keep_prob = tf.placeholder(tf.float32)
 
        h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
 
 
        # Readout layer
        W_fc2 = tf.Variable(tf.truncated_normal([512, NUM_CLASSES], stddev=0.1), name="W_fc2")
        b_fc2 = tf.Variable(tf.constant(0.1, shape=[NUM_CLASSES]), name="b_fc2")

        # Loss, optimizer and training op
        y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
        cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
        train_step = tf.train.AdamOptimizer((1e-4)).minimize(cross_entropy)
 
        correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
 
        # Create the saver
        saver = tf.train.Saver()
 
        sess.run(tf.global_variables_initializer())
 
        time_elapsed = time.time() - time_begin
        print("Time spent reading image files: %d s" % time_elapsed)
        time_begin = time.time()

        print("Read %s training images and %s labels in total" % (input_count, input_count))

        # Set the batch size for each training step. To support an arbitrary number of images, a remainder is computed:
        # e.g. with a batch size of 60 and 150 images, the first two batches get 60 images each and the last batch gets the remaining 30.
        batch_size = 60
        batches_count = int(input_count / batch_size)
        remainder = input_count % batch_size
        print("The training set is split into %s batches: %s images per full batch and %s in the last batch" % (batches_count+1, batch_size, remainder))
 
        # Run the training iterations
        for it in range(iterations):
            # Feed the training data batch by batch as np.array slices
            for n in range(batches_count):
                train_step.run(feed_dict={x: input_images[n*batch_size:(n+1)*batch_size], y_: input_labels[n*batch_size:(n+1)*batch_size], keep_prob: 0.5})
            if remainder > 0:
                start_index = batches_count * batch_size
                train_step.run(feed_dict={x: input_images[start_index:input_count], y_: input_labels[start_index:input_count], keep_prob: 0.5})

            # Every 5 iterations, check whether validation accuracy has reached 100% and stop early if it has
            iterate_accuracy = 0
            if it % 5 == 0:
                iterate_accuracy = accuracy.eval(feed_dict={x: val_images, y_: val_labels, keep_prob: 1.0})
                print('Training iteration %d: accuracy %0.5f%%' % (it, iterate_accuracy*100))
                if iterate_accuracy >= 0.9999 and it >= 150:
                    break
 
        print('Training finished!')
        time_elapsed = time.time() - time_begin
        print("Time spent training: %d s" % time_elapsed)
        time_begin = time.time()

        # Save the trained model
        if not os.path.exists(SAVER_DIR):
            print('The model save directory does not exist; creating it now')
            os.makedirs(SAVER_DIR)
        saver_path = saver.save(sess, "%smodel.ckpt" % (SAVER_DIR))
 
 
 
if __name__ =='__main__' and sys.argv[1]=='predict':
    saver = tf.train.import_meta_graph("%smodel.ckpt.meta"%(SAVER_DIR))
    with tf.Session() as sess:
        model_file=tf.train.latest_checkpoint(SAVER_DIR)
        saver.restore(sess, model_file)
 
        # First convolutional layer
        W_conv1 = sess.graph.get_tensor_by_name("W_conv1:0")
        b_conv1 = sess.graph.get_tensor_by_name("b_conv1:0")
        conv_strides = [1, 1, 1, 1]
        kernel_size = [1, 2, 2, 1]
        pool_strides = [1, 2, 2, 1]
        L1_pool = conv_layer(x_image, W_conv1, b_conv1, conv_strides, kernel_size, pool_strides, padding='SAME')
 
        # Second convolutional layer
        W_conv2 = sess.graph.get_tensor_by_name("W_conv2:0")
        b_conv2 = sess.graph.get_tensor_by_name("b_conv2:0")
        conv_strides = [1, 1, 1, 1]
        kernel_size = [1, 1, 1, 1]
        pool_strides = [1, 1, 1, 1]
        L2_pool = conv_layer(L1_pool, W_conv2, b_conv2, conv_strides, kernel_size, pool_strides, padding='SAME')
 
 
        # Fully connected layer
        W_fc1 = sess.graph.get_tensor_by_name("W_fc1:0")
        b_fc1 = sess.graph.get_tensor_by_name("b_fc1:0")
        h_pool2_flat = tf.reshape(L2_pool, [-1, 16 * 20*32])
        h_fc1 = full_connect(h_pool2_flat, W_fc1, b_fc1)
 
 
        # dropout
        keep_prob = tf.placeholder(tf.float32)
 
        h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
 
 
        # Readout layer
        W_fc2 = sess.graph.get_tensor_by_name("W_fc2:0")
        b_fc2 = sess.graph.get_tensor_by_name("b_fc2:0")

        # Softmax output used for prediction
        conv = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
 
        for n in range(1,2):
            path = "test_images/%s.bmp" % (n)
            img = Image.open(path)
            width = img.size[0]
            height = img.size[1]
 
            img_data = [[0]*SIZE for i in range(1)]
            for h in range(0, height):
                for w in range(0, width):
                    if img.getpixel((w, h)) < 190:
                        img_data[0][w+h*width] = 1
                    else:
                        img_data[0][w+h*width] = 0
            
            result = sess.run(conv, feed_dict = {x: np.array(img_data), keep_prob: 1.0})
            max1 = 0
            max2 = 0
            max3 = 0
            max1_index = 0
            max2_index = 0
            max3_index = 0
            for j in range(NUM_CLASSES):
                if result[0][j] > max1:
                    max1 = result[0][j]
                    max1_index = j
                    continue
                if (result[0][j]>max2) and (result[0][j]<=max1):
                    max2 = result[0][j]
                    max2_index = j
                    continue
                if (result[0][j]>max3) and (result[0][j]<=max2):
                    max3 = result[0][j]
                    max3_index = j
                    continue
            
            nProvinceIndex = max1_index
            print ("概率:  [%s %0.2f%%]    [%s %0.2f%%]    [%s %0.2f%%]" % (PROVINCES[max1_index],max1*100, PROVINCES[max2_index],max2*100, PROVINCES[max3_index],max3*100))
            
        print ("省份简称是: %s" % PROVINCES[nProvinceIndex])

City-code training + recognition code (save the file as train-license-letters.py):


#!/usr/bin/python3.5
# -*- coding: utf-8 -*-  
 
import sys
import os
import time
import random
 
import numpy as np
import tensorflow as tf
 
from PIL import Image
 
 
SIZE = 1280
WIDTH = 32
HEIGHT = 40
NUM_CLASSES = 26
iterations = 500
 
SAVER_DIR = "train-saver/letters/"
 
LETTERS_DIGITS = ("A","B","C","D","E","F","G","H","J","K","L","M","N","P","Q","R","S","T","U","V","W","X","Y","Z","I","O")
license_num = ""
 
time_begin = time.time()
 
 
# Define the input nodes: the flattened image pixel vectors and the one-hot image labels
x = tf.placeholder(tf.float32, shape=[None, SIZE])
y_ = tf.placeholder(tf.float32, shape=[None, NUM_CLASSES])
 
x_image = tf.reshape(x, [-1, WIDTH, HEIGHT, 1])
 
 
# Convolution layer helper: conv + ReLU + max pooling
def conv_layer(inputs, W, b, conv_strides, kernel_size, pool_strides, padding):
    L1_conv = tf.nn.conv2d(inputs, W, strides=conv_strides, padding=padding)
    L1_relu = tf.nn.relu(L1_conv + b)
    return tf.nn.max_pool(L1_relu, ksize=kernel_size, strides=pool_strides, padding='SAME')
 
# Fully connected layer helper
def full_connect(inputs, W, b):
    return tf.nn.relu(tf.matmul(inputs, W) + b)
 
 
if __name__ == '__main__' and sys.argv[1] == 'train':
    # First pass over the image directories: count the training images
    input_count = 0
    for i in range(0+10, NUM_CLASSES+10):
        dir = './train_images/training-set/letters/%s/' % i           # change this to your own image directory; i is the class label
        for rt, dirs, files in os.walk(dir):
            for filename in files:
                input_count += 1
 
    # Allocate arrays sized to the number of training images
    input_images = np.array([[0]*SIZE for i in range(input_count)])
    input_labels = np.array([[0]*NUM_CLASSES for i in range(input_count)])

    # Second pass over the image directories: fill in the image data and labels
    index = 0
    for i in range(0+10, NUM_CLASSES+10):
        dir = './train_images/training-set/letters/%s/' % i          # change this to your own image directory; i is the class label
        for rt, dirs, files in os.walk(dir):
            for filename in files:
                filename = dir + filename
                img = Image.open(filename)
                width = img.size[0]
                height = img.size[1]
                for h in range(0, height):
                    for w in range(0, width):
                        # Binarize: bright pixels (> 230) become background 0 and the rest become stroke 1, which thins the strokes and helps recognition accuracy
                        if img.getpixel((w, h)) > 230:
                            input_images[index][w+h*width] = 0
                        else:
                            input_images[index][w+h*width] = 1
                #print ("i=%d, index=%d" % (i, index))
                input_labels[index][i-10] = 1
                index += 1
 
    # First pass over the validation directories: count the validation images
    val_count = 0
    for i in range(0+10, NUM_CLASSES+10):
        dir = './train_images/validation-set/%s/' % i           # change this to your own image directory; i is the class label
        for rt, dirs, files in os.walk(dir):
            for filename in files:
                val_count += 1
 
    # Allocate arrays sized to the number of validation images
    val_images = np.array([[0]*SIZE for i in range(val_count)])
    val_labels = np.array([[0]*NUM_CLASSES for i in range(val_count)])

    # Second pass over the validation directories: fill in the image data and labels
    index = 0
    for i in range(0+10, NUM_CLASSES+10):
        dir = './train_images/validation-set/%s/' % i          # change this to your own image directory; i is the class label
        for rt, dirs, files in os.walk(dir):
            for filename in files:
                filename = dir + filename
                img = Image.open(filename)
                width = img.size[0]
                height = img.size[1]
                for h in range(0, height):
                    for w in range(0, width):
                        # Binarize: bright pixels (> 230) become background 0 and the rest become stroke 1, which thins the strokes and helps recognition accuracy
                        if img.getpixel((w, h)) > 230:
                            val_images[index][w+h*width] = 0
                        else:
                            val_images[index][w+h*width] = 1
                val_labels[index][i-10] = 1
                index += 1
    
    with tf.Session() as sess:
        # First convolutional layer
        W_conv1 = tf.Variable(tf.truncated_normal([8, 8, 1, 16], stddev=0.1), name="W_conv1")
        b_conv1 = tf.Variable(tf.constant(0.1, shape=[16]), name="b_conv1")
        conv_strides = [1, 1, 1, 1]
        kernel_size = [1, 2, 2, 1]
        pool_strides = [1, 2, 2, 1]
        L1_pool = conv_layer(x_image, W_conv1, b_conv1, conv_strides, kernel_size, pool_strides, padding='SAME')
 
        # Second convolutional layer
        W_conv2 = tf.Variable(tf.truncated_normal([5, 5, 16, 32], stddev=0.1), name="W_conv2")
        b_conv2 = tf.Variable(tf.constant(0.1, shape=[32]), name="b_conv2")
        conv_strides = [1, 1, 1, 1]
        kernel_size = [1, 1, 1, 1]
        pool_strides = [1, 1, 1, 1]
        L2_pool = conv_layer(L1_pool, W_conv2, b_conv2, conv_strides, kernel_size, pool_strides, padding='SAME')
 
 
        # Fully connected layer (the 2x2 max pool in layer 1 halves the 32x40 input to 16x20, layer 2 pools with stride 1 and outputs 32 feature maps, hence 16*20*32 inputs)
        W_fc1 = tf.Variable(tf.truncated_normal([16 * 20 * 32, 512], stddev=0.1), name="W_fc1")
        b_fc1 = tf.Variable(tf.constant(0.1, shape=[512]), name="b_fc1")
        h_pool2_flat = tf.reshape(L2_pool, [-1, 16 * 20*32])
        h_fc1 = full_connect(h_pool2_flat, W_fc1, b_fc1)
 
 
        # dropout
        keep_prob = tf.placeholder(tf.float32)
 
        h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
 
 
        # Readout layer
        W_fc2 = tf.Variable(tf.truncated_normal([512, NUM_CLASSES], stddev=0.1), name="W_fc2")
        b_fc2 = tf.Variable(tf.constant(0.1, shape=[NUM_CLASSES]), name="b_fc2")

        # Loss, optimizer and training op
        y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
        cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
        train_step = tf.train.AdamOptimizer((1e-4)).minimize(cross_entropy)
 
        correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
 
        sess.run(tf.global_variables_initializer())
 
        time_elapsed = time.time() - time_begin
        print("Time spent reading image files: %d s" % time_elapsed)
        time_begin = time.time()

        print("Read %s training images and %s labels in total" % (input_count, input_count))

        # Set the batch size for each training step. To support an arbitrary number of images, a remainder is computed:
        # e.g. with a batch size of 60 and 150 images, the first two batches get 60 images each and the last batch gets the remaining 30.
        batch_size = 60
        batches_count = int(input_count / batch_size)
        remainder = input_count % batch_size
        print("The training set is split into %s batches: %s images per full batch and %s in the last batch" % (batches_count+1, batch_size, remainder))
 
        # Run the training iterations
        for it in range(iterations):
            # Feed the training data batch by batch as np.array slices
            for n in range(batches_count):
                train_step.run(feed_dict={x: input_images[n*batch_size:(n+1)*batch_size], y_: input_labels[n*batch_size:(n+1)*batch_size], keep_prob: 0.5})
            if remainder > 0:
                start_index = batches_count * batch_size
                train_step.run(feed_dict={x: input_images[start_index:input_count], y_: input_labels[start_index:input_count], keep_prob: 0.5})

            # Every 5 iterations, check whether validation accuracy has reached 100% and stop early if it has
            iterate_accuracy = 0
            if it % 5 == 0:
                iterate_accuracy = accuracy.eval(feed_dict={x: val_images, y_: val_labels, keep_prob: 1.0})
                print('Training iteration %d: accuracy %0.5f%%' % (it, iterate_accuracy*100))
                if iterate_accuracy >= 0.9999:
                    break
 
        print('Training finished!')
        time_elapsed = time.time() - time_begin
        print("Time spent training: %d s" % time_elapsed)
        time_begin = time.time()

        # Save the trained model
        if not os.path.exists(SAVER_DIR):
            print('The model save directory does not exist; creating it now')
            os.makedirs(SAVER_DIR)
        # Create the saver
        saver = tf.train.Saver()
        saver_path = saver.save(sess, "%smodel.ckpt" % (SAVER_DIR))
 
 
 
if __name__ =='__main__' and sys.argv[1]=='predict':
    saver = tf.train.import_meta_graph("%smodel.ckpt.meta"%(SAVER_DIR))
    with tf.Session() as sess:
        model_file=tf.train.latest_checkpoint(SAVER_DIR)
        saver.restore(sess, model_file)
 
        # First convolutional layer
        W_conv1 = sess.graph.get_tensor_by_name("W_conv1:0")
        b_conv1 = sess.graph.get_tensor_by_name("b_conv1:0")
        conv_strides = [1, 1, 1, 1]
        kernel_size = [1, 2, 2, 1]
        pool_strides = [1, 2, 2, 1]
        L1_pool = conv_layer(x_image, W_conv1, b_conv1, conv_strides, kernel_size, pool_strides, padding='SAME')
 
        # Second convolutional layer
        W_conv2 = sess.graph.get_tensor_by_name("W_conv2:0")
        b_conv2 = sess.graph.get_tensor_by_name("b_conv2:0")
        conv_strides = [1, 1, 1, 1]
        kernel_size = [1, 1, 1, 1]
        pool_strides = [1, 1, 1, 1]
        L2_pool = conv_layer(L1_pool, W_conv2, b_conv2, conv_strides, kernel_size, pool_strides, padding='SAME')
 
 
        # Fully connected layer
        W_fc1 = sess.graph.get_tensor_by_name("W_fc1:0")
        b_fc1 = sess.graph.get_tensor_by_name("b_fc1:0")
        h_pool2_flat = tf.reshape(L2_pool, [-1, 16 * 20*32])
        h_fc1 = full_connect(h_pool2_flat, W_fc1, b_fc1)
 
 
        # dropout
        keep_prob = tf.placeholder(tf.float32)
 
        h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
 
 
        # Readout layer
        W_fc2 = sess.graph.get_tensor_by_name("W_fc2:0")
        b_fc2 = sess.graph.get_tensor_by_name("b_fc2:0")

        # Softmax output used for prediction
        conv = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
 
        for n in range(2,3):
            path = "test_images/%s.bmp" % (n)
            img = Image.open(path)
            width = img.size[0]
            height = img.size[1]
 
            img_data = [[0]*SIZE for i in range(1)]
            for h in range(0, height):
                for w in range(0, width):
                    if img.getpixel((w, h)) < 190:
                        img_data[0][w+h*width] = 1
                    else:
                        img_data[0][w+h*width] = 0
            
            result = sess.run(conv, feed_dict = {x: np.array(img_data), keep_prob: 1.0})
            
            max1 = 0
            max2 = 0
            max3 = 0
            max1_index = 0
            max2_index = 0
            max3_index = 0
            for j in range(NUM_CLASSES):
                if result[0][j] > max1:
                    max1 = result[0][j]
                    max1_index = j
                    continue
                if (result[0][j]>max2) and (result[0][j]<=max1):
                    max2 = result[0][j]
                    max2_index = j
                    continue
                if (result[0][j]>max3) and (result[0][j]<=max2):
                    max3 = result[0][j]
                    max3_index = j
                    continue
            
            if n == 3:
                license_num += "-"
            license_num = license_num + LETTERS_DIGITS[max1_index]
            print ("概率:  [%s %0.2f%%]    [%s %0.2f%%]    [%s %0.2f%%]" % (LETTERS_DIGITS[max1_index],max1*100, LETTERS_DIGITS[max2_index],max2*100, LETTERS_DIGITS[max3_index],max3*100))
            
        print ("城市代号是: 【%s】" % license_num)

Plate-number training + recognition code (save the file as train-license-digits.py):

#!/usr/bin/python3.5
# -*- coding: utf-8 -*-  
 
import sys
import os
import time
import random
 
import numpy as np
import tensorflow as tf
 
from PIL import Image
 
 
SIZE = 1280
WIDTH = 32
HEIGHT = 40
NUM_CLASSES = 34
iterations = 1000
 
SAVER_DIR = "train-saver/digits/"
 
LETTERS_DIGITS = ("0","1","2","3","4","5","6","7","8","9","A","B","C","D","E","F","G","H","J","K","L","M","N","P","Q","R","S","T","U","V","W","X","Y","Z")
license_num = ""
 
time_begin = time.time()
 
 
# Define the input nodes: the flattened image pixel vectors and the one-hot image labels
x = tf.placeholder(tf.float32, shape=[None, SIZE])
y_ = tf.placeholder(tf.float32, shape=[None, NUM_CLASSES])
 
x_image = tf.reshape(x, [-1, WIDTH, HEIGHT, 1])
 
 
# Convolution layer helper: conv + ReLU + max pooling
def conv_layer(inputs, W, b, conv_strides, kernel_size, pool_strides, padding):
    L1_conv = tf.nn.conv2d(inputs, W, strides=conv_strides, padding=padding)
    L1_relu = tf.nn.relu(L1_conv + b)
    return tf.nn.max_pool(L1_relu, ksize=kernel_size, strides=pool_strides, padding='SAME')
 
# Fully connected layer helper
def full_connect(inputs, W, b):
    return tf.nn.relu(tf.matmul(inputs, W) + b)
 
 
if __name__ == '__main__' and sys.argv[1] == 'train':
    # First pass over the image directories: count the training images
    input_count = 0
    for i in range(0, NUM_CLASSES):
        dir = './train_images/training-set/%s/' % i           # change this to your own image directory; i is the class label
        for rt, dirs, files in os.walk(dir):
            for filename in files:
                input_count += 1
 
    # Allocate arrays sized to the number of training images
    input_images = np.array([[0]*SIZE for i in range(input_count)])
    input_labels = np.array([[0]*NUM_CLASSES for i in range(input_count)])

    # Second pass over the image directories: fill in the image data and labels
    index = 0
    for i in range(0, NUM_CLASSES):
        dir = './train_images/training-set/%s/' % i          # change this to your own image directory; i is the class label
        for rt, dirs, files in os.walk(dir):
            for filename in files:
                filename = dir + filename
                img = Image.open(filename)
                width = img.size[0]
                height = img.size[1]
                for h in range(0, height):
                    for w in range(0, width):
                        # Binarize: bright pixels (> 230) become background 0 and the rest become stroke 1, which thins the strokes and helps recognition accuracy
                        if img.getpixel((w, h)) > 230:
                            input_images[index][w+h*width] = 0
                        else:
                            input_images[index][w+h*width] = 1
                input_labels[index][i] = 1
                index += 1
 
    # First pass over the validation directories: count the validation images
    val_count = 0
    for i in range(0, NUM_CLASSES):
        dir = './train_images/validation-set/%s/' % i           # change this to your own image directory; i is the class label
        for rt, dirs, files in os.walk(dir):
            for filename in files:
                val_count += 1
 
    # Allocate arrays sized to the number of validation images
    val_images = np.array([[0]*SIZE for i in range(val_count)])
    val_labels = np.array([[0]*NUM_CLASSES for i in range(val_count)])

    # Second pass over the validation directories: fill in the image data and labels
    index = 0
    for i in range(0, NUM_CLASSES):
        dir = './train_images/validation-set/%s/' % i          # change this to your own image directory; i is the class label
        for rt, dirs, files in os.walk(dir):
            for filename in files:
                filename = dir + filename
                img = Image.open(filename)
                width = img.size[0]
                height = img.size[1]
                for h in range(0, height):
                    for w in range(0, width):
                        # Binarize: bright pixels (> 230) become background 0 and the rest become stroke 1, which thins the strokes and helps recognition accuracy
                        if img.getpixel((w, h)) > 230:
                            val_images[index][w+h*width] = 0
                        else:
                            val_images[index][w+h*width] = 1
                val_labels[index][i] = 1
                index += 1
    
    with tf.Session() as sess:
        # First convolutional layer
        W_conv1 = tf.Variable(tf.truncated_normal([8, 8, 1, 16], stddev=0.1), name="W_conv1")
        b_conv1 = tf.Variable(tf.constant(0.1, shape=[16]), name="b_conv1")
        conv_strides = [1, 1, 1, 1]
        kernel_size = [1, 2, 2, 1]
        pool_strides = [1, 2, 2, 1]
        L1_pool = conv_layer(x_image, W_conv1, b_conv1, conv_strides, kernel_size, pool_strides, padding='SAME')
 
        # Second convolutional layer
        W_conv2 = tf.Variable(tf.truncated_normal([5, 5, 16, 32], stddev=0.1), name="W_conv2")
        b_conv2 = tf.Variable(tf.constant(0.1, shape=[32]), name="b_conv2")
        conv_strides = [1, 1, 1, 1]
        kernel_size = [1, 1, 1, 1]
        pool_strides = [1, 1, 1, 1]
        L2_pool = conv_layer(L1_pool, W_conv2, b_conv2, conv_strides, kernel_size, pool_strides, padding='SAME')
 
 
        # Fully connected layer (the 2x2 max pool in layer 1 halves the 32x40 input to 16x20, layer 2 pools with stride 1 and outputs 32 feature maps, hence 16*20*32 inputs)
        W_fc1 = tf.Variable(tf.truncated_normal([16 * 20 * 32, 512], stddev=0.1), name="W_fc1")
        b_fc1 = tf.Variable(tf.constant(0.1, shape=[512]), name="b_fc1")
        h_pool2_flat = tf.reshape(L2_pool, [-1, 16 * 20*32])
        h_fc1 = full_connect(h_pool2_flat, W_fc1, b_fc1)
 
 
        # dropout
        keep_prob = tf.placeholder(tf.float32)
 
        h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
 
 
        # Readout layer
        W_fc2 = tf.Variable(tf.truncated_normal([512, NUM_CLASSES], stddev=0.1), name="W_fc2")
        b_fc2 = tf.Variable(tf.constant(0.1, shape=[NUM_CLASSES]), name="b_fc2")

        # Loss, optimizer and training op
        y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
        cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
        train_step = tf.train.AdamOptimizer((1e-4)).minimize(cross_entropy)
 
        correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
 
        sess.run(tf.global_variables_initializer())
 
        time_elapsed = time.time() - time_begin
        print("Time spent reading image files: %d s" % time_elapsed)
        time_begin = time.time()

        print("Read %s training images and %s labels in total" % (input_count, input_count))

        # Set the batch size for each training step. To support an arbitrary number of images, a remainder is computed:
        # e.g. with a batch size of 60 and 150 images, the first two batches get 60 images each and the last batch gets the remaining 30.
        batch_size = 60
        batches_count = int(input_count / batch_size)
        remainder = input_count % batch_size
        print("The training set is split into %s batches: %s images per full batch and %s in the last batch" % (batches_count+1, batch_size, remainder))
 
        # Run the training iterations
        for it in range(iterations):
            # Feed the training data batch by batch as np.array slices
            for n in range(batches_count):
                train_step.run(feed_dict={x: input_images[n*batch_size:(n+1)*batch_size], y_: input_labels[n*batch_size:(n+1)*batch_size], keep_prob: 0.5})
            if remainder > 0:
                start_index = batches_count * batch_size
                train_step.run(feed_dict={x: input_images[start_index:input_count], y_: input_labels[start_index:input_count], keep_prob: 0.5})

            # Every 5 iterations, check whether validation accuracy has reached 100% and stop early if it has
            iterate_accuracy = 0
            if it % 5 == 0:
                iterate_accuracy = accuracy.eval(feed_dict={x: val_images, y_: val_labels, keep_prob: 1.0})
                print('Training iteration %d: accuracy %0.5f%%' % (it, iterate_accuracy*100))
                if iterate_accuracy >= 0.9999:
                    break
 
        print('Training finished!')
        time_elapsed = time.time() - time_begin
        print("Time spent training: %d s" % time_elapsed)
        time_begin = time.time()

        # Save the trained model
        if not os.path.exists(SAVER_DIR):
            print('The model save directory does not exist; creating it now')
            os.makedirs(SAVER_DIR)
        # Create the saver
        saver = tf.train.Saver()
        saver_path = saver.save(sess, "%smodel.ckpt" % (SAVER_DIR))
 
 
 
if __name__ =='__main__' and sys.argv[1]=='predict':
    saver = tf.train.import_meta_graph("%smodel.ckpt.meta"%(SAVER_DIR))
    with tf.Session() as sess:
        model_file=tf.train.latest_checkpoint(SAVER_DIR)
        saver.restore(sess, model_file)
 
        # First convolutional layer
        W_conv1 = sess.graph.get_tensor_by_name("W_conv1:0")
        b_conv1 = sess.graph.get_tensor_by_name("b_conv1:0")
        conv_strides = [1, 1, 1, 1]
        kernel_size = [1, 2, 2, 1]
        pool_strides = [1, 2, 2, 1]
        L1_pool = conv_layer(x_image, W_conv1, b_conv1, conv_strides, kernel_size, pool_strides, padding='SAME')
 
        # Second convolutional layer
        W_conv2 = sess.graph.get_tensor_by_name("W_conv2:0")
        b_conv2 = sess.graph.get_tensor_by_name("b_conv2:0")
        conv_strides = [1, 1, 1, 1]
        kernel_size = [1, 1, 1, 1]
        pool_strides = [1, 1, 1, 1]
        L2_pool = conv_layer(L1_pool, W_conv2, b_conv2, conv_strides, kernel_size, pool_strides, padding='SAME')
 
 
        # Fully connected layer
        W_fc1 = sess.graph.get_tensor_by_name("W_fc1:0")
        b_fc1 = sess.graph.get_tensor_by_name("b_fc1:0")
        h_pool2_flat = tf.reshape(L2_pool, [-1, 16 * 20*32])
        h_fc1 = full_connect(h_pool2_flat, W_fc1, b_fc1)
 
 
        # dropout
        keep_prob = tf.placeholder(tf.float32)
 
        h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
 
 
        # Readout layer
        W_fc2 = sess.graph.get_tensor_by_name("W_fc2:0")
        b_fc2 = sess.graph.get_tensor_by_name("b_fc2:0")

        # Softmax output used for prediction
        conv = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
 
        for n in range(3,8):
            path = "test_images/%s.bmp" % (n)
            img = Image.open(path)
            width = img.size[0]
            height = img.size[1]
 
            img_data = [[0]*SIZE for i in range(1)]
            for h in range(0, height):
                for w in range(0, width):
                    if img.getpixel((w, h)) < 190:
                        img_data[0][w+h*width] = 1
                    else:
                        img_data[0][w+h*width] = 0
            
            result = sess.run(conv, feed_dict = {x: np.array(img_data), keep_prob: 1.0})
            
            max1 = 0
            max2 = 0
            max3 = 0
            max1_index = 0
            max2_index = 0
            max3_index = 0
            for j in range(NUM_CLASSES):
                if result[0][j] > max1:
                    max1 = result[0][j]
                    max1_index = j
                    continue
                if (result[0][j]>max2) and (result[0][j]<=max1):
                    max2 = result[0][j]
                    max2_index = j
                    continue
                if (result[0][j]>max3) and (result[0][j]<=max2):
                    max3 = result[0][j]
                    max3_index = j
                    continue
            
            license_num = license_num + LETTERS_DIGITS[max1_index]
            print ("概率:  [%s %0.2f%%]    [%s %0.2f%%]    [%s %0.2f%%]" % (LETTERS_DIGITS[max1_index],max1*100, LETTERS_DIGITS[max2_index],max2*100, LETTERS_DIGITS[max3_index],max3*100))
            
        print ("车牌编号是: 【%s】" % license_num)

After saving the three Python scripts above, we first train the province-abbreviation model. Before running the code, unpack the dataset into the directory that contains the training scripts. Then open a command line in that directory and run:

python train-license-province.py train

The training output (screenshot omitted):

Then run province-abbreviation recognition from the command line:

python train-license-province.py predict

Run the city-code training (this amounts to training on the 26 letters):

python train-license-letters.py train


Recognize the city code:

python train-license-letters.py predict

Run the plate-number training (this trains on 24 letters plus 10 digits; Chinese traffic regulations do not allow the letters I and O in plate numbers):

python train-license-digits.py train


Recognize the plate number:

python train-license-digits.py predict
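
If you would rather get the whole plate from a single command once all three models are trained, a minimal driver sketch (my own addition, not part of the original post; it simply runs the three predict steps in order and relies on each script printing its part of the result):

# run_plate.py - hedged sketch: run the three predict steps in sequence
import subprocess

for script in ("train-license-province.py",
               "train-license-letters.py",
               "train-license-digits.py"):
    subprocess.run(["python", script, "predict"], check=True)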

 

As you can see, recognition accuracy on the test images is very high; the recognized result is 闽O-1672Q.

The original plate photo from which the test images were cut (screenshot omitted):

For license-plate extraction and segmentation with OpenCV, see my other blog post:

https://blog.csdn.net/u011808673/article/details/78510692
