VGGNet in PyTorch




Corresponding Baidu PaddlePaddle page

The VGG network was proposed in 2014 by the well-known Visual Geometry Group (VGG) at the University of Oxford.

Download the flower classification dataset

import requests
import os
import time
import sys

class DownloadZip():
    def __init__(self, url):
        self.url = url

    def download(self):
        # record the download start time
        start = time.time()
        # get the working directory of the current script
        cwd = os.getcwd()
        # where the file will be stored
        data_root = os.path.join(cwd, 'work')
        print(data_root)
        if not os.path.exists(data_root):
            os.mkdir(data_root)
        data_root = os.path.join(data_root, 'flower_data')
        if not os.path.exists(data_root):
            os.mkdir(data_root)
        # extract the file name from the URL
        file_name = self.url.split('/')[-1]

        temp_size = 0

        res = requests.get(self.url, stream=True)
        # size of each downloaded chunk
        chunk_size = 1024
        total_size = int(res.headers.get("Content-Length"))

        if res.status_code == 200:
            # convert the size to MB and print it
            print('[File size]: %0.2f MB' % (total_size / chunk_size / 1024))
            # save the downloaded file
            with open(os.path.join(data_root, file_name), 'wb') as f:
                for chunk in res.iter_content(chunk_size=chunk_size):
                    if chunk:
                        temp_size += len(chunk)
                        f.write(chunk)
                        f.flush()
                        # fancy progress-bar section
                        done = int(50 * temp_size / total_size)
                        # write to stdout and flush; the \r carriage return
                        # redraws the same line on every update
                        sys.stdout.write(
                            "\r[%s%s] %d%%" % ('█' * done, ' ' * (50 - done), 100 * temp_size / total_size))
                        sys.stdout.flush()
            # the \r above keeps everything on one line, so print a newline once done
            print()
            # end time
            end = time.time()
            print('Download finished! Took %.2f s' % (end - start))
        else:
            print(res.status_code)
if not os.path.exists(os.path.join(os.getcwd(), 'work', 'flower_data', 'flower_photos.tgz')):
    zip_url = 'http://download.tensorflow.org/example_images/flower_photos.tgz'
    segmentfault = DownloadZip(zip_url)
    segmentfault.download()
import tarfile

def un_tgz(filename):
    tar = tarfile.open(filename)
    print(os.path.splitext(filename)[0])
    tar.extractall(os.path.join(os.path.splitext(filename)[0], '..'))
    tar.close()


if not os.path.exists(os.path.join(os.getcwd(), 'work', 'flower_data', 'flower_photos')):
    os.mkdir(os.path.join(os.getcwd(), 'work', 'flower_data', 'flower_photos'))
    un_tgz(os.path.join(os.getcwd(), 'work', 'flower_data', 'flower_photos.tgz'))

Split the dataset into a training set (train) and a validation set (val)

from shutil import copy, rmtree
import random
from tqdm import tqdm
def mk_file(file_path: str):
    if os.path.exists(file_path):
        # if the folder already exists, remove it first and then recreate it
        rmtree(file_path)
    os.makedirs(file_path)  # create the folder


def train_val():
    # fix the random seed so the experiment is reproducible
    random.seed(0)
    # fraction of the dataset used for validation
    split_rate = 0.1

    cwd = os.getcwd()  # working directory of the current script
    data_root = os.path.join(cwd, 'work', 'flower_data')
    origin_flower_path = os.path.join(data_root, 'flower_photos')
    assert os.path.exists(origin_flower_path), "path '{}' does not exist.".format(origin_flower_path)
    # list comprehension (the if filters out anything that is not a directory)
    flowers_class = [cla for cla in os.listdir(origin_flower_path)
                     if os.path.isdir(os.path.join(origin_flower_path, cla))]
    print(flowers_class)
    # create the training-set folder
    train_root = os.path.join(data_root, 'train')
    mk_file(train_root)

    # create the validation-set folder
    val_root = os.path.join(data_root, 'val')
    mk_file(val_root)

    for cla in flowers_class:
        mk_file(os.path.join(train_root, cla))
        mk_file(os.path.join(val_root, cla))

    for cla in flowers_class:
        cla_path = os.path.join(origin_flower_path, cla)
        images = os.listdir(cla_path)
        num = len(images)
        eval_index = random.sample(images, k=int(num*split_rate))
        for index, image in tqdm(enumerate(images)):
            if image in eval_index:
                image_path = os.path.join(cla_path, image)
                new_path = os.path.join(val_root, cla)
            else:
                image_path = os.path.join(cla_path, image)
                new_path = os.path.join(train_root, cla)
            copy(image_path, new_path)

    print('Processing of data_split done!')
if not os.path.exists(os.path.join(os.getcwd(), 'work', 'flower_data', 'train')):
    train_val()
print('Data ready!')


The highlight of this network: stacking multiple 3×3 convolution kernels to replace a large convolution kernel, which reduces the number of parameters while keeping the same receptive field.

The paper points out that a stack of two 3×3 convolution layers can replace one 5×5 convolution layer, and a stack of three 3×3 layers can replace one 7×7 layer. Here is an example comparing the number of parameters of a single 7×7 convolution with that of three stacked 3×3 convolutions (assuming both the input and output feature maps have depth C).

Parameters required by a single convolution layer with a 7×7 kernel (the first C is the number of input channels, the second C is the number of kernels, i.e. the depth of the output feature map):

7 × 7 × C × C = 49C²

Parameters required by three convolution layers with 3×3 kernels:

3 × 3 × C × C + 3 × 3 × C × C + 3 × 3 × C × C = 27C²

The comparison clearly shows that three layers of 3×3 convolutions use fewer parameters than a single 7×7 convolution layer.
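
The arithmetic above (bias terms are ignored) can be verified quickly in PyTorch by counting the convolution weights directly. This small check is not part of the original code and uses C = 64 purely as an example:

import torch.nn as nn

C = 64  # any channel count works; 64 is only an example

# one 7x7 convolution, C -> C channels, no bias
conv7 = nn.Conv2d(C, C, kernel_size=7, bias=False)
# three stacked 3x3 convolutions, C -> C channels, no bias
stack3 = nn.Sequential(*[nn.Conv2d(C, C, kernel_size=3, padding=1, bias=False) for _ in range(3)])

params_7x7 = sum(p.numel() for p in conv7.parameters())
params_3x3 = sum(p.numel() for p in stack3.parameters())

print(params_7x7, 49 * C * C)   # both print 200704
print(params_3x3, 27 * C * C)   # both print 110592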


PyTorch implementation code

model.py

import torch
import torch.nn as nn


# # official pretrain weights
# model_urls = {
#     'vgg11': 'https://download.pytorch.org/models/vgg11-bbd30ac9.pth',
#     'vgg13': 'https://download.pytorch.org/models/vgg13-c768596a.pth',
#     'vgg16': 'https://download.pytorch.org/models/vgg16-397923af.pth',
#     'vgg19': 'https://download.pytorch.org/models/vgg19-dcbb9e9d.pth'
# }
# The commented-out URLs above are not used here: those weights were trained on the
# 1000 ImageNet classes, while the flower dataset used below has only 5 classes.
# predict.py therefore needs weights you have trained yourself first.

class VGG(nn.Module):
    def __init__(self, features, num_classes=1000, init_weights=False):
        super(VGG, self).__init__()
        self.features = features
        self.classifier = nn.Sequential(
            nn.Linear(512*7*7, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            nn.Linear(4096, num_classes),
        )
        if init_weights:
            self._initialize_weights()

    def _initialize_weights(self):
        for model in self.modules():
            if isinstance(model, nn.Conv2d):
                nn.init.xavier_uniform_(model.weight)
                if model.bias is not None:
                    nn.init.constant_(model.bias, 0)
            elif isinstance(model, nn.Linear):
                nn.init.xavier_uniform_(model.weight)
                nn.init.constant_(model.bias, 0)

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, start_dim=1)
        x = self.classifier(x)
        return x


def make_features(cfg: list):
    layer = []
    in_channels = 3
    for v in cfg:
        if v == 'M':
            layer += [nn.MaxPool2d(kernel_size=2, stride=2)]
        else:
            conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
            layer += [conv2d, nn.ReLU(True)]
            in_channels = v
    return nn.Sequential(*layer)  # unpack the layer list as positional arguments


cfgs = {
    'vgg11': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'vgg13': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'vgg16': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
    'vgg19': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}


def vgg(model_name="vgg16", **kwargs):
    assert model_name in cfgs, "Warning: model number {} not in cfgs dict!".format(model_name)
    cfg = cfgs[model_name]
    model = VGG(make_features(cfg), **kwargs)
    return model
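
If you do want to start from the official pretrained weights listed in the commented-out model_urls, one possible approach (a sketch that is not part of the original scripts; it assumes the checkpoint's parameter names line up with this VGG definition, which follows the same layer ordering as torchvision's vgg16) is to download the 1000-class checkpoint, drop the final classifier layer whose shape no longer matches, and load the rest with strict=False:

import torch

# sketch only: reuse the vgg16 URL from the commented model_urls above
url = 'https://download.pytorch.org/models/vgg16-397923af.pth'
net = vgg(model_name='vgg16', num_classes=5)

state_dict = torch.hub.load_state_dict_from_url(url, progress=True)
# the last Linear layer (classifier.6) was trained for 1000 classes, so skip it
state_dict = {k: v for k, v in state_dict.items() if not k.startswith('classifier.6')}
missing, unexpected = net.load_state_dict(state_dict, strict=False)
print(missing)  # only the freshly initialized classifier.6 parameters should be listed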

train.py

import os
import json

import torch
import torch.nn as nn
from torchvision import transforms, datasets
import torch.optim as optim
from tqdm import tqdm

from model import vgg


def main():
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    print("using {} device.".format(device))

    data_transform = {
        'train': transforms.Compose(
            [
                transforms.RandomResizedCrop(224),
                transforms.RandomHorizontalFlip(),
                transforms.ToTensor(),
                transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
            ]
        ),
        'val': transforms.Compose(
            [
                transforms.Resize((224, 224)),
                transforms.ToTensor(),
                transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
            ]
        )
    }
    data_root = os.path.abspath(os.path.join(os.getcwd(), '..'))
    image_path = os.path.join(data_root, 'AlexNet', 'flower_data')
    assert os.path.exists(image_path), "{} path does not exist.".format(image_path)
    train_dataset = datasets.ImageFolder(root=os.path.join(image_path, "train"),
                                         transform=data_transform["train"])

    train_num = len(train_dataset)

    flower_list = train_dataset.class_to_idx
    cla_dict = dict((val, key) for key, val in flower_list.items())
    json_str = json.dumps(cla_dict, indent=4)
    with open('class_indices.json', 'w') as json_file:
        json_file.write(json_str)
    batch_size = 24
    nw = min([os.cpu_count(), batch_size if batch_size > 1 else 0, 8])
    print('Using {} dataloader workers every process'.format(nw))
    train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True,
                                               num_workers=nw)

    validate_dataset = datasets.ImageFolder(root=os.path.join(image_path, "val"),
                                            transform=data_transform["val"])
    val_num = len(validate_dataset)
    validate_loader = torch.utils.data.DataLoader(validate_dataset, batch_size=batch_size, shuffle=False,
                                                  num_workers=nw)
    print("using {} images for training, {} images for validation.".format(train_num, val_num))
    model_name = "vgg16"
    net = vgg(model_name=model_name, num_classes=5, init_weights=True)
    net.to(device)
    loss_function = nn.CrossEntropyLoss()
    optimizer = optim.Adam(net.parameters(), lr=0.0001)
    print(train_dataset[0][0].shape)
    print(train_dataset[0])
    epochs = 1
    best_acc = 0.0
    save_path = 'save_pth/{}Net.pth'.format(model_name)
    train_steps = len(train_loader)
    for epoch in range(epochs):
        net.train()
        running_loss = 0.0
        train_bar = tqdm(train_loader)
        for step, data in enumerate(train_bar):
            images, labels = data
            optimizer.zero_grad()
            outputs = net(images.to(device))
            loss = loss_function(outputs, labels.to(device))
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
            train_bar.desc = "train epoch[{}/{}] loss:{:.3f}".format(epoch + 1, epochs, loss)


        net.eval()
        acc = 0.0
        with torch.no_grad():
            val_bar = tqdm(validate_loader)
            for val_data in val_bar:
                val_images, val_labels = val_data
                outputs = net(val_images.to(device))
                predict_y = torch.max(outputs, dim=1)[1]
                acc += torch.eq(predict_y, val_labels.to(device)).sum().item()

        val_accurate = acc / val_num
        print('[epoch %d] train_loss: %.3f  val_accuracy: %.3f' % (epoch + 1, running_loss / train_steps, val_accurate))

        if val_accurate > best_acc:
            best_acc = val_accurate
            torch.save(net.state_dict(), save_path)

    print('Finished Training')


if __name__ == '__main__':
    main()

predict.py

import os
import json

import torch
from torchvision import transforms
from PIL import Image
import matplotlib.pyplot as plt

from model import vgg


def main():

    # device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    device = torch.device("cpu")
    data_transform = transforms.Compose(
        [
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
            transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
        ]
    )
    img_path = "./tulip.jpg"
    assert os.path.exists(img_path), "file {} does not exist.".format(img_path)

    img = Image.open(img_path)
    plt.imshow(img)

    img = data_transform(img)
    img = torch.unsqueeze(img, dim=0)

    json_path = './class_indices.json'
    assert os.path.exists(json_path), "file {} does not exist.".format(json_path)
    json_file = open(json_path, "r")
    class_indict = json.load(json_file)
    model = vgg(model_name="vgg16", num_classes=5).to(device)
    weights_path = 'save_pth/vgg16Net.pth'
    assert os.path.exists(weights_path), "file {} does not exist.".format(weights_path)
    model.load_state_dict(torch.load(weights_path, map_location=device))
    model.eval()
    with torch.no_grad():
        output = torch.squeeze(model(img.to(device))).cpu()
        predict = torch.softmax(output, dim=0)
        predict_cla = torch.argmax(predict).numpy()

    print_res = "class : {}  prob : {:.3}".format(class_indict[str(predict_cla)],
                                                  predict[predict_cla].numpy())
    plt.title(print_res)
    print(print_res)
    plt.show()


if __name__ == "__main__":
    main()

PaddlePaddle version

import os
import json

import paddle
import paddle.nn as nn
import paddle.vision as torchvision
from paddle.vision import transforms, datasets
import paddle.optimizer as optim
from tqdm import tqdm
import paddle.fluid as fluid
import numpy as np
class VGG(nn.Layer):
    def __init__(self, features, num_classes=1000):
        super(VGG, self).__init__()
        weight_attr = paddle.framework.ParamAttr(initializer=paddle.nn.initializer.XavierNormal())
        bias_attr = paddle.framework.ParamAttr(initializer=paddle.nn.initializer.Constant(0))
        self.features = features
        self.classifier = nn.Sequential(
            nn.Linear(512*7*7, 4096, weight_attr=weight_attr, bias_attr=bias_attr),
            nn.ReLU(),
            nn.Dropout(p=0.5),
            nn.Linear(4096, 4096, weight_attr=weight_attr, bias_attr=bias_attr),
            nn.ReLU(),
            nn.Dropout(p=0.5),
            nn.Linear(4096, num_classes, weight_attr=weight_attr, bias_attr=bias_attr),
        )

    def forward(self, x):
        x = self.features(x)
        x = paddle.flatten(x, start_axis=1)
        x = self.classifier(x)
        return x


def make_features(cfg: list):
    layer = []
    in_channels = 3
    weight_attr = paddle.framework.ParamAttr(initializer=paddle.nn.initializer.XavierNormal())
    bias_attr = paddle.framework.ParamAttr(initializer=paddle.nn.initializer.Constant(0))
    for v in cfg:
        if v == 'M':
            layer += [nn.MaxPool2D(kernel_size=2, stride=2)]
        else:
            conv2d = nn.Conv2D(in_channels, v, kernel_size=3, padding=1, weight_attr=weight_attr, bias_attr=bias_attr)
            layer += [conv2d, nn.ReLU()]
            in_channels = v
    return nn.Sequential(*layer)  # unpack the layer list as positional arguments


cfgs = {
    'vgg11': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'vgg13': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'vgg16': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
    'vgg19': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}


def vgg(model_name="vgg16", **kwargs):
    assert model_name in cfgs, "Warning: model number {} not in cfgs dict!".format(model_name)
    cfg = cfgs[model_name]
    model = VGG(make_features(cfg), **kwargs)
    return model
net_test = vgg(model_name="vgg16", num_classes=5)
print(net_test)
print("--------------------------------------------------------")
print("named_parameters():")
for name, parameter in net_test.named_parameters():
    print(name, type(parameter), parameter.shape,parameter.dtype)
    # print(parameter)
# print("--------------------------------------------------------")
# print("sublayers():")
# for m in net_test.sublayers():
#     print(m)
def train_main():
    is_available = len(paddle.static.cuda_places()) > 0
    device = 'gpu:0' if is_available else 'cpu'
    print("using {} device.".format(device))

    data_transform = {
        'train': transforms.Compose(
            [
                transforms.RandomResizedCrop(224),
                transforms.RandomHorizontalFlip(),
                transforms.ToTensor(),
                transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
            ]
        ),
        'val': transforms.Compose(
            [
                transforms.Resize((224, 224)),
                transforms.ToTensor(),
                transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
            ]
        )
    }

    
    image_path = os.path.join(os.getcwd(), 'work', 'flower_data')
    assert os.path.exists(image_path), "{} path does not exist.".format(image_path)
    train_dataset = datasets.DatasetFolder(root=os.path.join(image_path, "train"),
                                           transform=data_transform["train"])

    train_num = len(train_dataset)

    flower_list = train_dataset.class_to_idx
    # print(flower_list)
    cla_dict = dict((val, key) for key, val in flower_list.items())
    # print(cla_dict)
    json_str = json.dumps(cla_dict, indent=4)
    with open('class_indices.json', 'w') as json_file:
        json_file.write(json_str)
    batch_size = 32
    train_loader = paddle.io.DataLoader(train_dataset, batch_size=batch_size, shuffle=True, num_workers=0)

    
    validate_dataset = datasets.DatasetFolder(root=os.path.join(image_path, "val"), transform=data_transform["val"])
    val_num = len(validate_dataset)
    validate_loader = paddle.io.DataLoader(validate_dataset, batch_size=batch_size, shuffle=True, num_workers=0)
    
    print("using {} images for training, {} images for validation.".format(train_num, val_num))
    model_name = "vgg16"
    net = vgg(model_name=model_name, num_classes=5)
    loss_function = nn.CrossEntropyLoss()
    optimizer = optim.Adam(parameters=net.parameters(), learning_rate=0.0001)

    epochs = 5   # number of training epochs
    best_acc = 0.0
    save_path = 'save_pth/{}Net.pth'.format(model_name)
    train_steps = len(train_loader)

    for epoch in range(epochs):
        net.train()
        running_loss = 0.0
        train_bar = tqdm(train_loader)
        for step, data in enumerate(train_bar):
            images, labels = data
            optimizer.clear_grad()
            outputs = net(images)
            loss = loss_function(outputs, labels)
            loss.backward()
            optimizer.step()
            a = loss.numpy()
            running_loss += a.item()
            train_bar.desc = "train epoch[{}/{}] loss:{:.3f}".format(epoch + 1, epochs, a.item())


        net.eval()
        acc = 0.0
        with paddle.no_grad():
            val_bar = tqdm(validate_loader)
            for val_data in val_bar:
                val_images, val_labels = val_data
                # print(val_labels)
                outputs = net(val_images)
                # print(outputs)
                predict_y = paddle.argmax(outputs, axis=1)
                # print(predict_y)
                b = paddle.equal(predict_y, val_labels)
                b = b.numpy()
                b = b.sum()
                acc += b


        val_accurate = acc / val_num
        print('[epoch %d] train_loss: %.3f  val_accuracy: %.3f' % (epoch + 1, running_loss / train_steps, val_accurate))

        if val_accurate > best_acc:
            best_acc = val_accurate
            paddle.save(net.state_dict(), save_path)

    print('Finished Training')
# Training takes a long time; there is no need to retrain before every prediction
Train_table = True

if Train_table: 
    use_gpu = True
    place = paddle.CUDAPlace(0) if use_gpu else paddle.CPUPlace()
    with fluid.dygraph.guard(place):
        train_main()
else:
    print("train_pth does exist.")
from PIL import Image
import matplotlib.pyplot as plt
def predict_main():
    dat_transform = transforms.Compose(
        [
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
        ]
    )
    img_path = './tulip.jpg'
    assert os.path.exists(img_path), "file {} does not exist.".format(img_path)
    img = Image.open(img_path)
    plt.imshow(img)

    img = dat_transform(img)
    img = paddle.unsqueeze(img, axis=0)
    json_path = './class_indices.json'
    assert os.path.exists(json_path), "file {} does not exist.".format(json_path)
    json_file = open(json_path, 'r')
    class_indict = json.load(json_file)
    weights_path = "save_pth/vgg16Net.pth"
    assert os.path.exists(weights_path),"file {} does not exist.".format(weights_path)
    model = vgg(model_name="vgg16", num_classes=5)
    model.set_state_dict(paddle.load(weights_path))
    model.eval()
    with paddle.no_grad():
        output = paddle.squeeze(model(img))
        # print(output)
        predict = paddle.nn.functional.softmax(output)
        predict_cla = paddle.argmax(predict).numpy()
    print(predict)
    # print(int(predict_cla))
    # print(class_indict)
    # print(class_indict[str(int(predict_cla))])
    # print(predict.numpy()[predict_cla].item())
    print_res = "class : {}  prob : {:.3}".format(class_indict[str(int(predict_cla))],predict.numpy()[predict_cla].item())

    plt.title(print_res)
    print(print_res)
    plt.show()
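
The notebook cell that actually runs the prediction is not shown in the post; presumably it simply calls the function once training has produced save_pth/vgg16Net.pth, for example:

# assumes train_main() above has already saved save_pth/vgg16Net.pth
predict_main()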


In the end I picked a tulip. Honestly, I cannot tell whether it looks like a rose either, but after increasing the number of epochs, VGGNet became even more confident that this picture is a rose, probably because it is red!

