Time Series Forecasting with a Bidirectional LSTM (Bi-LSTM)



  This post walks through the complete workflow of time series forecasting with a bidirectional LSTM (Bi-LSTM), with detailed comments throughout. It covers data loading, data cleaning, restructuring, building the Bi-LSTM model, training (including a dynamic learning-rate schedule and early stopping), prediction, result visualization, and error evaluation.
  The dataset used in this post is available among the author's uploaded resources as mock_kaggle.csv.

The code is as follows:

import pandas as pd
import numpy as np
import math
import keras
from matplotlib import pyplot as plt
from matplotlib.pylab import mpl
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler
from keras import backend as K
from keras.layers import LeakyReLU
from sklearn.metrics import mean_squared_error # mean squared error
from keras.callbacks import LearningRateScheduler
from keras.callbacks import EarlyStopping
from tensorflow.keras import Input, Model,Sequential
from keras.layers import Bidirectional #, Concatenate
mpl.rcParams['font.sans-serif'] = ['SimHei']   # display Chinese characters in plots
mpl.rcParams['axes.unicode_minus'] = False     # display minus signs correctly
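A side note: the listing above mixes keras and tensorflow.keras imports, which works when the installed versions match but can cause subtle incompatibilities. On TensorFlow 2.x the same names can all come from one namespace instead (a sketch, not used in the original run):

from tensorflow import keras
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Bidirectional, LeakyReLU
from tensorflow.keras.callbacks import LearningRateScheduler, EarlyStopping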

Load the data

data = pd.read_csv('mock_kaggle.csv', encoding='gbk', parse_dates=['datetime'])
Date = pd.to_datetime(data.datetime)
data['date'] = Date.map(lambda x: x.strftime('%Y-%m-%d'))
datanew = data.set_index(Date)
series = pd.Series(datanew['股票'].values, index=datanew['date'])  # '股票' ("stock") is the target column in the CSV
series
date
2014-01-01    4972
2014-01-02    4902
2014-01-03    4843
2014-01-04    4750
2014-01-05    4654
              ... 
2016-07-27    3179
2016-07-28    3071
2016-07-29    4095
2016-07-30    3825
2016-07-31    3642
Length: 937, dtype: int64

Create lagged features

dataframe1 = pd.DataFrame()
num_hour = 16  # number of lags; despite the name, each step in this dataset is one day
for i in range(num_hour, 0, -1):
    dataframe1['t-' + str(i)] = series.shift(i)
dataframe1['t'] = series.values
dataframe3 = dataframe1.dropna()
dataframe3.index = range(len(dataframe3))
dataframe3
t-16 t-15 t-14 t-13 t-12 t-11 t-10 t-9 t-8 t-7 t-6 t-5 t-4 t-3 t-2 t-1 t
0 4972.0 4902.0 4843.0 4750.0 4654.0 4509.0 4329.0 4104.0 4459.0 5043.0 5239.0 5118.0 4984.0 4904.0 4822.0 4728.0 4464
1 4902.0 4843.0 4750.0 4654.0 4509.0 4329.0 4104.0 4459.0 5043.0 5239.0 5118.0 4984.0 4904.0 4822.0 4728.0 4464.0 4265
2 4843.0 4750.0 4654.0 4509.0 4329.0 4104.0 4459.0 5043.0 5239.0 5118.0 4984.0 4904.0 4822.0 4728.0 4464.0 4265.0 4161
3 4750.0 4654.0 4509.0 4329.0 4104.0 4459.0 5043.0 5239.0 5118.0 4984.0 4904.0 4822.0 4728.0 4464.0 4265.0 4161.0 4091
4 4654.0 4509.0 4329.0 4104.0 4459.0 5043.0 5239.0 5118.0 4984.0 4904.0 4822.0 4728.0 4464.0 4265.0 4161.0 4091.0 3964
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
916 1939.0 1967.0 1670.0 1532.0 1343.0 1022.0 813.0 1420.0 1359.0 1075.0 1015.0 917.0 1550.0 1420.0 1358.0 2893.0 3179
917 1967.0 1670.0 1532.0 1343.0 1022.0 813.0 1420.0 1359.0 1075.0 1015.0 917.0 1550.0 1420.0 1358.0 2893.0 3179.0 3071
918 1670.0 1532.0 1343.0 1022.0 813.0 1420.0 1359.0 1075.0 1015.0 917.0 1550.0 1420.0 1358.0 2893.0 3179.0 3071.0 4095
919 1532.0 1343.0 1022.0 813.0 1420.0 1359.0 1075.0 1015.0 917.0 1550.0 1420.0 1358.0 2893.0 3179.0 3071.0 4095.0 3825
920 1343.0 1022.0 813.0 1420.0 1359.0 1075.0 1015.0 917.0 1550.0 1420.0 1358.0 2893.0 3179.0 3071.0 4095.0 3825.0 3642

921 rows × 17 columns
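For reference, the same lag matrix can be built in a single pass with pd.concat instead of a loop (a sketch equivalent to the code above; lags is a hypothetical name):

lags = pd.concat({'t-' + str(i): series.shift(i) for i in range(num_hour, 0, -1)}, axis=1)
lags['t'] = series.values
lags = lags.dropna().reset_index(drop=True)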

Train/test split and normalization

np.random.shuffle(dataframe3.values)  # shuffle; np.random.shuffle works in place and returns None, so wrapping it in pd.DataFrame did nothing. With mixed column dtypes, .values may return a copy, so this line may not actually reorder dataframe3 — and shuffling before a chronological split is questionable anyway
pot = len(dataframe3) - 12
train = dataframe3[:pot]
test = dataframe3[pot:]
scaler = MinMaxScaler(feature_range=(0, 1))
#scaler = preprocessing.StandardScaler().fit(train)
train_norm = pd.DataFrame(scaler.fit_transform(train))  # fit the scaler on the training set only
test_norm = pd.DataFrame(scaler.transform(test))        # reuse the training-set statistics
test_norm.shape, train_norm.shape
((12, 17), (909, 17))
X_train=train_norm.iloc[:,:-1]
X_test=test_norm.iloc[:,:-1]
Y_train=train_norm.iloc[:,-1:]
Y_test=test_norm.iloc[:,-1:]

Reshape to 3-D data [samples, timesteps, features]

source_x_train = X_train
source_x_test = X_test
X_train = X_train.values.reshape([X_train.shape[0], 2, 8])  # from (909, 16) to (909, 2, 8)
X_test = X_test.values.reshape([X_test.shape[0], 2, 8])     # from (12, 16) to (12, 2, 8)
Y_train=Y_train.values
Y_test=Y_test.values
X_train.shape,Y_train.shape
((909, 2, 8), (909, 1))
X_test.shape,Y_test.shape
((12, 2, 8), (12, 1))

Dynamic learning-rate adjustment and early stopping

def scheduler(epoch):
    # every 50 epochs, cut the learning rate to 1/10 of its current value
    if epoch % 50 == 0 and epoch != 0:
        lr = K.get_value(bilstm.optimizer.lr)
        if lr > 1e-5:
            K.set_value(bilstm.optimizer.lr, lr * 0.1)
            print("lr changed to {}".format(lr * 0.1))
    return K.get_value(bilstm.optimizer.lr)

reduce_lr = LearningRateScheduler(scheduler)
early_stopping = EarlyStopping(monitor='loss', 
                               patience=20, 
                               min_delta=1e-5,
                               mode='auto',
                               restore_best_weights=False,  # whether to restore the weights from the epoch with the best monitored value
                               verbose=2)
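Keras also ships a built-in callback that lowers the learning rate only when the monitored loss stops improving, which avoids hand-rolling an epoch-based schedule (a sketch, not used in the training run below):

from keras.callbacks import ReduceLROnPlateau
reduce_lr_plateau = ReduceLROnPlateau(monitor='loss', factor=0.1,
                                      patience=10, min_lr=1e-5)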

Build the Bi-LSTM model

# number of features
input_size = X_train.shape[2]
# time steps: how many past observations are used to predict the next value
time_steps = X_train.shape[1]
# number of units in the hidden LSTM layer
cell_size = 128

bilstm = keras.Sequential()
bilstm.add(Bidirectional(keras.layers.LSTM(
        units = cell_size,  # output dimensionality of the LSTM
        input_shape=(time_steps, input_size),  # input dimensionality; the original pinned the batch size with batch_input_shape=(24, ...), which conflicts with the batch_size=32 used in fit() below and is only needed for stateful LSTMs
        stateful=False,     # do not carry state across batches
        ), merge_mode='concat'))  # 'concat' doubles the output width to 2 * cell_size
bilstm.add(keras.layers.Dense(64))
bilstm.add(keras.layers.LeakyReLU(alpha=0.3))
bilstm.add(keras.layers.Dense(32))
bilstm.add(keras.layers.LeakyReLU(alpha=0.3))
bilstm.add(keras.layers.Dense(16))
bilstm.add(keras.layers.LeakyReLU(alpha=0.3))
# output layer
bilstm.add(keras.layers.Dense(1))
bilstm.add(keras.layers.LeakyReLU(alpha=0.3))
# define the optimizer
nadam = keras.optimizers.Nadam(lr=1e-3)  # newer Keras versions spell this learning_rate=
bilstm.compile(optimizer=nadam, loss='mse', metrics=['accuracy'])  # accuracy is not meaningful for regression, which is why it stays flat in the log below
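An optional sanity check (not part of the original run, and version-dependent): building the model and printing its summary confirms that the bidirectional concatenation hands the first Dense layer 2 * cell_size = 256 features.

bilstm.build(input_shape=(None, time_steps, input_size))
bilstm.summary()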

Training

history = bilstm.fit(X_train, Y_train, epochs=80, batch_size=32, callbacks=[reduce_lr])  # note: early_stopping is defined above but not passed here; add it to the callbacks list to enable it
Epoch 1/80
909/909 [==============================] - 3s 3ms/step - loss: 0.0200 - accuracy: 0.0187
Epoch 2/80
909/909 [==============================] - 1s 594us/step - loss: 0.0071 - accuracy: 0.0187
Epoch 3/80
909/909 [==============================] - 1s 611us/step - loss: 0.0057 - accuracy: 0.0187
Epoch 4/80
909/909 [==============================] - 1s 781us/step - loss: 0.0038 - accuracy: 0.0187
Epoch 5/80
909/909 [==============================] - 1s 719us/step - loss: 0.0037 - accuracy: 0.0187
Epoch 6/80
909/909 [==============================] - 1s 741us/step - loss: 0.0035 - accuracy: 0.0187
Epoch 7/80
909/909 [==============================] - 1s 576us/step - loss: 0.0040 - accuracy: 0.0187
Epoch 8/80
909/909 [==============================] - 1s 686us/step - loss: 0.0033 - accuracy: 0.0187
Epoch 9/80
909/909 [==============================] - 1s 727us/step - loss: 0.0032 - accuracy: 0.0187
Epoch 10/80
909/909 [==============================] - 1s 652us/step - loss: 0.0030 - accuracy: 0.0187
Epoch 11/80
909/909 [==============================] - 1s 610us/step - loss: 0.0033 - accuracy: 0.0187
Epoch 12/80
909/909 [==============================] - 1s 573us/step - loss: 0.0031 - accuracy: 0.0187
Epoch 13/80
909/909 [==============================] - 1s 666us/step - loss: 0.0029 - accuracy: 0.0187
Epoch 14/80
909/909 [==============================] - 1s 552us/step - loss: 0.0028 - accuracy: 0.0187
Epoch 15/80
909/909 [==============================] - 1s 718us/step - loss: 0.0030 - accuracy: 0.0187
Epoch 16/80
909/909 [==============================] - 1s 601us/step - loss: 0.0028 - accuracy: 0.0187
Epoch 17/80
909/909 [==============================] - 0s 541us/step - loss: 0.0026 - accuracy: 0.0187
Epoch 18/80
909/909 [==============================] - 1s 657us/step - loss: 0.0027 - accuracy: 0.0187
Epoch 19/80
909/909 [==============================] - 1s 680us/step - loss: 0.0027 - accuracy: 0.0187
Epoch 20/80
909/909 [==============================] - 1s 703us/step - loss: 0.0028 - accuracy: 0.0187
Epoch 21/80
909/909 [==============================] - 1s 602us/step - loss: 0.0030 - accuracy: 0.0187
Epoch 22/80
909/909 [==============================] - 1s 622us/step - loss: 0.0026 - accuracy: 0.0187
Epoch 23/80
909/909 [==============================] - 1s 700us/step - loss: 0.0028 - accuracy: 0.0187
Epoch 24/80
909/909 [==============================] - 1s 613us/step - loss: 0.0025 - accuracy: 0.0187
Epoch 25/80
909/909 [==============================] - 1s 569us/step - loss: 0.0028 - accuracy: 0.0187
Epoch 26/80
909/909 [==============================] - 0s 525us/step - loss: 0.0026 - accuracy: 0.0187
Epoch 27/80
909/909 [==============================] - 0s 487us/step - loss: 0.0028 - accuracy: 0.0187
Epoch 28/80
909/909 [==============================] - 0s 493us/step - loss: 0.0026 - accuracy: 0.0187
Epoch 29/80
909/909 [==============================] - 0s 494us/step - loss: 0.0026 - accuracy: 0.0187
Epoch 30/80
909/909 [==============================] - 0s 490us/step - loss: 0.0027 - accuracy: 0.0187
Epoch 31/80
909/909 [==============================] - 0s 519us/step - loss: 0.0026 - accuracy: 0.0187
Epoch 32/80
909/909 [==============================] - 0s 494us/step - loss: 0.0027 - accuracy: 0.0187
Epoch 33/80
909/909 [==============================] - 0s 493us/step - loss: 0.0025 - accuracy: 0.0187
Epoch 34/80
909/909 [==============================] - 0s 500us/step - loss: 0.0025 - accuracy: 0.0187
Epoch 35/80
909/909 [==============================] - 0s 505us/step - loss: 0.0026 - accuracy: 0.0187
Epoch 36/80
909/909 [==============================] - 1s 595us/step - loss: 0.0026 - accuracy: 0.0187
Epoch 37/80
909/909 [==============================] - 1s 578us/step - loss: 0.0027 - accuracy: 0.0187
Epoch 38/80
909/909 [==============================] - 0s 518us/step - loss: 0.0026 - accuracy: 0.0187
Epoch 39/80
909/909 [==============================] - 0s 525us/step - loss: 0.0024 - accuracy: 0.0187
Epoch 40/80
909/909 [==============================] - 0s 501us/step - loss: 0.0024 - accuracy: 0.0187
Epoch 41/80
909/909 [==============================] - 0s 500us/step - loss: 0.0026 - accuracy: 0.0187
Epoch 42/80
909/909 [==============================] - 0s 529us/step - loss: 0.0023 - accuracy: 0.0187
Epoch 43/80
909/909 [==============================] - 1s 616us/step - loss: 0.0026 - accuracy: 0.0187
Epoch 44/80
909/909 [==============================] - 1s 596us/step - loss: 0.0027 - accuracy: 0.0187
Epoch 45/80
909/909 [==============================] - 1s 582us/step - loss: 0.0024 - accuracy: 0.0187
Epoch 46/80
909/909 [==============================] - 0s 508us/step - loss: 0.0025 - accuracy: 0.0187
Epoch 47/80
909/909 [==============================] - 1s 574us/step - loss: 0.0025 - accuracy: 0.0187
Epoch 48/80
909/909 [==============================] - 1s 724us/step - loss: 0.0026 - accuracy: 0.0187
Epoch 49/80
909/909 [==============================] - 1s 696us/step - loss: 0.0026 - accuracy: 0.0187
Epoch 50/80
909/909 [==============================] - 1s 667us/step - loss: 0.0026 - accuracy: 0.0187
Epoch 51/80
lr changed to 0.00010000000474974513
909/909 [==============================] - 1s 653us/step - loss: 0.0023 - accuracy: 0.0187
Epoch 52/80
909/909 [==============================] - 1s 703us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 53/80
909/909 [==============================] - 1s 616us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 54/80
909/909 [==============================] - 1s 650us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 55/80
909/909 [==============================] - 1s 648us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 56/80
909/909 [==============================] - 1s 661us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 57/80
909/909 [==============================] - 1s 718us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 58/80
909/909 [==============================] - 1s 687us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 59/80
909/909 [==============================] - 1s 628us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 60/80
909/909 [==============================] - 1s 725us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 61/80
909/909 [==============================] - 1s 697us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 62/80
909/909 [==============================] - 1s 768us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 63/80
909/909 [==============================] - 1s 834us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 64/80
909/909 [==============================] - 1s 755us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 65/80
909/909 [==============================] - 1s 666us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 66/80
909/909 [==============================] - 1s 561us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 67/80
909/909 [==============================] - 1s 565us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 68/80
909/909 [==============================] - 1s 565us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 69/80
909/909 [==============================] - 1s 558us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 70/80
909/909 [==============================] - 0s 542us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 71/80
909/909 [==============================] - 0s 545us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 72/80
909/909 [==============================] - 1s 612us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 73/80
909/909 [==============================] - 1s 647us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 74/80
909/909 [==============================] - 1s 765us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 75/80
909/909 [==============================] - 1s 664us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 76/80
909/909 [==============================] - 1s 817us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 77/80
909/909 [==============================] - 1s 693us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 78/80
909/909 [==============================] - 1s 726us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 79/80
909/909 [==============================] - 1s 681us/step - loss: 0.0022 - accuracy: 0.0187
Epoch 80/80
909/909 [==============================] - 1s 562us/step - loss: 0.0022 - accuracy: 0.0187
history.history.keys()  # inspect which metrics are stored in history
plt.plot(history.epoch, history.history.get('loss'))  # plot how the loss evolves as the epochs progress

[Figure: training loss vs. epoch]

Prediction

predict = bilstm.predict(X_test)
# the scaler was fit on all 17 columns, so pad the 1-column prediction with the 16 input features before inverting the scaling
real_predict = scaler.inverse_transform(np.concatenate((source_x_test, predict), axis=1))
real_y = scaler.inverse_transform(np.concatenate((source_x_test, Y_test), axis=1))
real_predict = real_predict[:, -1]
real_y = real_y[:, -1]
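Alternatively, the target column can be inverted directly from the statistics the scaler recorded when it was fit (a sketch, assuming the default feature_range of (0, 1); real_predict_alt is a hypothetical name):

y_min, y_max = scaler.data_min_[-1], scaler.data_max_[-1]  # min/max of the target column 't'
real_predict_alt = predict.ravel() * (y_max - y_min) + y_min  # matches real_predict above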

Error evaluation

plt.figure(figsize=(15,6))
bwith = 0.75  # set the border width to 0.75
ax = plt.gca()  # get the current axes to style the borders
ax.spines['bottom'].set_linewidth(bwith)
ax.spines['left'].set_linewidth(bwith)
ax.spines['top'].set_linewidth(bwith)
ax.spines['right'].set_linewidth(bwith)
plt.plot(real_predict, label='real_predict')
plt.plot(real_y, label='real_y')
plt.plot(real_y*(1+0.15), label='15% upper bound', linestyle='--', color='green')
plt.plot(real_y*(1-0.15), label='15% lower bound', linestyle='--', color='green')
plt.fill_between(range(0,12), real_y*(1+0.15), real_y*(1-0.15), color='gray', alpha=0.2)
plt.legend()
plt.show()

[Figure: predicted vs. actual values with a ±15% tolerance band]

round(mean_squared_error(Y_test, predict), 4)  # MSE on the normalized scale
0.0012
from sklearn.metrics import r2_score
round(r2_score(real_y, real_predict), 4)
0.5152
per_real_loss = (real_y - real_predict) / real_y
avg_per_real_loss = sum(abs(per_real_loss)) / len(per_real_loss)  # mean absolute percentage error (MAPE)
print(avg_per_real_loss)
0.12909395542298405
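The hand-rolled average above is exactly the mean absolute percentage error (MAPE); with scikit-learn 0.24 or later it can also be computed directly (a sketch, not part of the original run):

from sklearn.metrics import mean_absolute_percentage_error
mean_absolute_percentage_error(real_y, real_predict)  # should match avg_per_real_loss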
# compute the prediction accuracy at a given tolerance level
# level is a decimal fraction, e.g. 0.15 for a +/-15% tolerance
def comput_acc(real, predict, level):
    num_error = 0
    for i in range(len(real)):
        if abs(real[i] - predict[i]) / real[i] > level:
            num_error += 1
    return 1 - num_error / len(real)
comput_acc(real_y,real_predict,0.2),comput_acc(real_y,real_predict,0.15),comput_acc(real_y,real_predict,0.1)
(0.8333333333333334, 0.6666666666666667, 0.5833333333333333)
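The same metric in vectorized form (a sketch; comput_acc_np is a hypothetical name, equivalent to the loop above for positive targets):

def comput_acc_np(real, predict, level):
    # fraction of predictions whose relative error is within the tolerance
    return np.mean(np.abs(real - predict) / real <= level)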