Downloading Anime Episodes with Python



 

Preface

Not much to say here. I just remembered how, a few years back after a breakup, I binged anime to get through a rough patch. This post is a small memento of that.


I. Straight to the code

1. The search entry point

# Search for anime by name and return the result list
def get_video_list(name):
    # Enable a proxy here if needed
    # proxy = {'http': 'http://127.0.0.1:8080', 'https': 'https://127.0.0.1:8080'}
    url = 'http://www.7666.tv/search.php?searchword=' + name + '&submit='
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Upgrade-Insecure-Requests': '1',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Cookie': ""
    }
    # Percent-encode the Chinese search term in the URL
    url = url.replace(url.split("/")[-1].split(".")[0], quote(url.split("/")[-1].split(".")[0]))
    # Send the request (add proxies=proxy to route through the proxy)
    response = requests.get(url, headers=headers)
    # Fix garbled Chinese in the response body
    response.encoding = response.apparent_encoding
    # Parse the search results and pull out the video info
    html_obj = etree.HTML(response.text)
    v_list = html_obj.xpath('//ul[@class="myui-vodlist__media clearfix"]/li')
    counter = 0
    result = []
    result_head = ['No.', 'Name', 'Type', 'Year', 'Link', 'Synopsis']
    result.append(result_head)
    for v in v_list:
        thumb_a = v.xpath('//div[@class="thumb"]/a[@class="myui-vodlist__thumb img-lg-150 img-xs-100 lazyload"]')[
            counter]
        # Video title
        video_name = thumb_a.attrib.get('title')
        # Thumbnail image
        video_head = thumb_a.attrib.get('data-original')
        # Detail-page link
        video_url = thumb_a.xpath('@href')[0]
        # Rating
        pattern = re.compile(r'\s+')
        thumb_span_g = thumb_a.xpath('//span[@class="pic-tag pic-tag-top"]')[counter]
        video_grade = re.sub(pattern, '', str(thumb_span_g.xpath('text()')[0]))
        # Latest update
        thumb_span_u = thumb_a.xpath('//span[@class="pic-text text-right"]')[counter]
        video_update = re.sub(pattern, '', str(thumb_span_u.xpath('text()')[0]))

        detail_p = v.xpath('//div[@class="detail"]/p')
        # Director
        video_director = detail_p[0].xpath('text()')[0]
        # Cast
        video_starring = detail_p[1][1].xpath('text()')[0]
        # Genre
        video_type = detail_p[2].xpath('text()')[0]
        # Region
        video_address = detail_p[2][2].tail
        # Year
        video_year = detail_p[2][4].tail
        # Synopsis
        video_synopsis = v.xpath('//div[@class="detail"]/p[@class="hidden-xs"]/text()')[counter]
        video_synopsis = video_synopsis.encode("gbk", 'ignore').decode("gbk", "ignore")
        counter = counter + 1
        # print(video_name, video_head, video_url, video_grade, video_update)
        # print(video_director, video_starring, video_type, video_address, video_year, video_synopsis)
        result.append([counter, video_name, video_type, video_year, video_url, video_synopsis])
    return result
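
For a quick sanity check, here is a minimal usage sketch. It is my addition rather than part of the original script, it assumes the site markup still matches the XPath expressions above, and the search term is only a placeholder.

if __name__ == '__main__':
    # Hypothetical smoke test for get_video_list; the query string is a placeholder.
    rows = get_video_list('进击的巨人')
    for row in rows:
        print(row)  # [No., Name, Type, Year, Link, Synopsis]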

2. The episode list

# Fetch the episode list for a single video
def search_video_info(url):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Upgrade-Insecure-Requests': '1',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Cookie': ""
    }
    # Send the request
    response = requests.get(url, headers=headers)
    # Fix garbled Chinese in the response body
    response.encoding = response.apparent_encoding
    # Parse the page and pull out the episode list
    html_obj = etree.HTML(response.text)
    v_list = html_obj.xpath('//ul[@class="myui-content__list scrollbar sort-list clearfix"]/li/a')
    result = []
    result_head = ['No.', 'Episode', 'Episode link']
    result.append(result_head)
    counter = 1
    for v in v_list:
        # Episode label
        video_set = v.xpath('text()')[0]
        # Episode link
        video_set_url = v.xpath('@href')[0]
        vr = [counter, video_set, video_set_url]
        result.append(vr)
        counter = counter + 1
    return result

3. Downloading the TS files


# TS stream / m3u8
# Collect the list of .ts segment URLs
def search_video_ts(url):
    result_urls = []
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Upgrade-Insecure-Requests': '1',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Cookie': ""
    }
    # Send the request
    response = requests.get(url, headers=headers)
    # Fix garbled Chinese in the response body
    response.encoding = response.apparent_encoding
    # Parse the player page
    html_obj = etree.HTML(response.text)
    # Grab the player script that carries the m3u8 address
    ts_info = html_obj.xpath('//div[@class="embed-responsive embed-responsive-16by9 clearfix"]/script/text()')[0]
    pattern = r"now=(.*)"
    m = re.findall(pattern, ts_info, re.I)
    # First-level m3u8 of this episode (does not contain the .ts addresses yet)
    ts_url = str(m).split(";")[0].replace("[", '').replace('"', '').replace("'", '')
    # 1. Follow the first-level m3u8
    response = requests.get(ts_url, headers=headers)
    response.encoding = response.apparent_encoding
    # Second-level m3u8 of this episode (the one that lists the .ts segments)
    ts_url_2 = response.text.split("\n")[2]
    # Build its absolute URL
    ts_url_2 = ts_url.split("index")[0] + ts_url_2
    # 2. Request the segment list
    response = requests.get(ts_url_2)
    response.encoding = response.apparent_encoding
    ts_list = response.text.split("\n")
    # e.g. https://sina.com-h-sina.com/20180812/8108_9a67fe52/1000k/hls/f9ebcf457c6000.ts
    for ts in ts_list:
        # Skip the #EXT... directive lines and empty lines
        if '#' not in ts and len(ts) != 0:
            ts_url_3 = ts_url_2.split("index")[0] + ts
            result_urls.append(ts_url_3)
    return result_urls
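
To make the two-step m3u8 hop above easier to follow: the first playlist only points at a second playlist, and it is that second one that lists the actual .ts segments. Below is an illustration of a typical second-level playlist and of the filter used in the loop; the file names are made up, and the block is my addition rather than part of the original script.

# Illustration only: a typical second-level (media) m3u8 and how the loop above filters it.
sample_m3u8 = """#EXTM3U
#EXT-X-TARGETDURATION:10
#EXTINF:10.0,
f9ebcf457c6000.ts
#EXTINF:10.0,
f9ebcf457c6001.ts
#EXT-X-ENDLIST"""

segments = [line for line in sample_m3u8.split("\n") if line and '#' not in line]
print(segments)  # ['f9ebcf457c6000.ts', 'f9ebcf457c6001.ts']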


def target_handel_download(start, end, name, url_list):
    # Enable a proxy here if needed
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Upgrade-Insecure-Requests': '1',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Cookie': ""
    }
    for url in url_list[start:end]:
        global count
        print('Waiting one second, then downloading ts file >>> ' + url + ', count >> ' + str(count))
        time.sleep(1)
        try:
            r = requests.get(url, headers=headers, stream=True)
        except Exception as ex:
            print('Downloading ts file >>> ' + url + ' failed, err >> ' + str(ex))
        else:
            with open(url.split("hls/")[1], "wb") as code:
                code.write(r.content)
            print("Download progress: %.2f" % (count / len(url_list)))
        count = count + 1



# TS stream / m3u8
# Download the .ts list with multiple threads
def download_video_ts(name, result_list, num_thread=100):
    global count
    count = 0
    # Number of ts files to download
    file_size = len(result_list)
    # Split the work across the worker threads
    part = file_size // num_thread  # if it does not divide evenly, the last slice takes the remainder
    counter = 0
    for i in range(num_thread):
        start = part * i
        if i == num_thread - 1:  # last slice
            end = file_size
        else:
            end = start + part
        print('start >> ' + str(start) + '  end >> ' + str(end) + '  dir name >> ' + name + '  ts file count >> ' + str(len(result_list)))
        print('Waiting 5 seconds before starting the next worker, total threads >> ' + str(num_thread) + ', current thread >> ' + str(i))
        time.sleep(5)
        t = threading.Thread(target=target_handel_download,
                             kwargs={'start': start, 'end': end, 'name': name, 'url_list': result_list})
        t.daemon = True
        t.start()
        counter = counter + 1

    # Wait until every worker thread has finished
    main_thread = threading.current_thread()
    # Iterate over all live Thread objects
    for t in threading.enumerate():
        if t is main_thread:
            continue
        t.join()
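
The slice arithmetic above decides which URLs each worker handles, with the last worker absorbing any remainder. Here is a small stand-alone sketch of the same split, added by me purely for illustration: with 23 segments and 5 threads, part = 23 // 5 = 4, so the first four workers take 4 URLs each and the last one takes the remaining 7.

def split_ranges(total, num_thread):
    # Same arithmetic as in download_video_ts: equal slices, remainder goes to the last one.
    part = total // num_thread
    ranges = []
    for i in range(num_thread):
        start = part * i
        end = total if i == num_thread - 1 else start + part
        ranges.append((start, end))
    return ranges

print(split_ranges(23, 5))  # [(0, 4), (4, 8), (8, 12), (12, 16), (16, 23)]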

4. Merging the TS files into an MP4

# Merge the small files
# copy /b D:\newpython\doutu\sao\ts_files\*.ts d:\fnew.ts
# On Windows you can simply run: copy /b *.ts video.mp4  to join all the ts files into one mp4 file
def merge_ts_list(result_list, videoNamePy, name):
    tmp = []
    for file in result_list[0:568]:
        tmp.append(file.replace("\n", ""))
    # Merge the ts files, then delete them (Windows cmd commands)
    shell_str = 'copy /b *.ts ' + name + '.mp4' + '\n' + 'del ' + videoNamePy + '\\*.ts'
    return shell_str


# Write the merge command to a file | it can also be executed directly
def wite_to_file(cmdString):
    f = open("combined.cmd", 'w', encoding="utf-8")
    f.write(cmdString)
    f.close()
    print('Running cmd command............', cmdString)
    # Switch the console code page to UTF-8 to avoid garbled output
    os.system('chcp 65001')
    r = os.system(cmdString)
    print(r)
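
The copy /b trick only works on Windows. If you would rather stay in Python, here is a rough cross-platform sketch of the same byte-level concatenation. It is my addition, and it assumes the segments sort into playback order by file name, which holds for the hls file names shown above.

import glob
import os

def merge_ts_python(ts_dir, out_path):
    # Concatenate every .ts segment in ts_dir into one output file, in sorted name order.
    ts_files = sorted(glob.glob(os.path.join(ts_dir, '*.ts')))
    with open(out_path, 'wb') as out:
        for ts in ts_files:
            with open(ts, 'rb') as part:
                out.write(part.read())

# merge_ts_python(r'D:\newpython\doutu\sao\ts_files', r'D:\fnew.mp4')  # example paths from the comment above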

5. The full code in one go

# encoding=utf-8
# A small video downloader in Python
import requests
import re
import threading
import os
import datetime
import time
import sys
import pinyin.cedict
from lxml import etree
# from lxml import html
from urllib.parse import quote
# etree = html.etree

head_url = 'http://www.7666.tv'
count = 0


# Search for anime by name and return the result list
def get_video_list(name):
    # Enable a proxy here if needed
    # proxy = {'http': 'http://127.0.0.1:8080', 'https': 'https://127.0.0.1:8080'}
    url = 'http://www.7666.tv/search.php?searchword=' + name + '&submit='
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Upgrade-Insecure-Requests': '1',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Cookie': ""
    }
    # Percent-encode the Chinese search term in the URL
    url = url.replace(url.split("/")[-1].split(".")[0], quote(url.split("/")[-1].split(".")[0]))
    # Send the request (add proxies=proxy to route through the proxy)
    response = requests.get(url, headers=headers)
    # Fix garbled Chinese in the response body
    response.encoding = response.apparent_encoding
    # Parse the search results and pull out the video info
    html_obj = etree.HTML(response.text)
    v_list = html_obj.xpath('//ul[@class="myui-vodlist__media clearfix"]/li')
    counter = 0
    result = []
    result_head = ['No.', 'Name', 'Type', 'Year', 'Link', 'Synopsis']
    result.append(result_head)
    for v in v_list:
        thumb_a = v.xpath('//div[@class="thumb"]/a[@class="myui-vodlist__thumb img-lg-150 img-xs-100 lazyload"]')[
            counter]
        # Video title
        video_name = thumb_a.attrib.get('title')
        # Thumbnail image
        video_head = thumb_a.attrib.get('data-original')
        # Detail-page link
        video_url = thumb_a.xpath('@href')[0]
        # Rating
        pattern = re.compile(r'\s+')
        thumb_span_g = thumb_a.xpath('//span[@class="pic-tag pic-tag-top"]')[counter]
        video_grade = re.sub(pattern, '', str(thumb_span_g.xpath('text()')[0]))
        # Latest update
        thumb_span_u = thumb_a.xpath('//span[@class="pic-text text-right"]')[counter]
        video_update = re.sub(pattern, '', str(thumb_span_u.xpath('text()')[0]))

        detail_p = v.xpath('//div[@class="detail"]/p')
        # Director
        video_director = detail_p[0].xpath('text()')[0]
        # Cast
        video_starring = detail_p[1][1].xpath('text()')[0]
        # Genre
        video_type = detail_p[2].xpath('text()')[0]
        # Region
        video_address = detail_p[2][2].tail
        # Year
        video_year = detail_p[2][4].tail
        # Synopsis
        video_synopsis = v.xpath('//div[@class="detail"]/p[@class="hidden-xs"]/text()')[counter]
        video_synopsis = video_synopsis.encode("gbk", 'ignore').decode("gbk", "ignore")
        counter = counter + 1
        # print(video_name, video_head, video_url, video_grade, video_update)
        # print(video_director, video_starring, video_type, video_address, video_year, video_synopsis)
        result.append([counter, video_name, video_type, video_year, video_url, video_synopsis])
    return result


# Fetch the episode list for a single video
def search_video_info(url):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Upgrade-Insecure-Requests': '1',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Cookie': ""
    }
    # Send the request
    response = requests.get(url, headers=headers)
    # Fix garbled Chinese in the response body
    response.encoding = response.apparent_encoding
    # Parse the page and pull out the episode list
    html_obj = etree.HTML(response.text)
    v_list = html_obj.xpath('//ul[@class="myui-content__list scrollbar sort-list clearfix"]/li/a')
    result = []
    result_head = ['No.', 'Episode', 'Episode link']
    result.append(result_head)
    counter = 1
    for v in v_list:
        # Episode label
        video_set = v.xpath('text()')[0]
        # Episode link
        video_set_url = v.xpath('@href')[0]
        vr = [counter, video_set, video_set_url]
        result.append(vr)
        counter = counter + 1
    return result


# TS stream / m3u8
# Collect the list of .ts segment URLs
def search_video_ts(url):
    result_urls = []
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Upgrade-Insecure-Requests': '1',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Cookie': ""
    }
    # Send the request
    response = requests.get(url, headers=headers)
    # Fix garbled Chinese in the response body
    response.encoding = response.apparent_encoding
    # Parse the player page
    html_obj = etree.HTML(response.text)
    # Grab the player script that carries the m3u8 address
    ts_info = html_obj.xpath('//div[@class="embed-responsive embed-responsive-16by9 clearfix"]/script/text()')[0]
    pattern = r"now=(.*)"
    m = re.findall(pattern, ts_info, re.I)
    # First-level m3u8 of this episode (does not contain the .ts addresses yet)
    ts_url = str(m).split(";")[0].replace("[", '').replace('"', '').replace("'", '')
    # 1. Follow the first-level m3u8
    response = requests.get(ts_url, headers=headers)
    response.encoding = response.apparent_encoding
    # Second-level m3u8 of this episode (the one that lists the .ts segments)
    ts_url_2 = response.text.split("\n")[2]
    # Build its absolute URL
    ts_url_2 = ts_url.split("index")[0] + ts_url_2
    # 2. Request the segment list
    response = requests.get(ts_url_2)
    response.encoding = response.apparent_encoding
    ts_list = response.text.split("\n")
    # e.g. https://sina.com-h-sina.com/20180812/8108_9a67fe52/1000k/hls/f9ebcf457c6000.ts
    for ts in ts_list:
        # Skip the #EXT... directive lines and empty lines
        if '#' not in ts and len(ts) != 0:
            ts_url_3 = ts_url_2.split("index")[0] + ts
            result_urls.append(ts_url_3)
    return result_urls


def target_handel_download(start, end, name, url_list):
    # Enable a proxy here if needed
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Upgrade-Insecure-Requests': '1',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Cookie': ""
    }
    for url in url_list[start:end]:
        global count
        print('Waiting one second, then downloading ts file >>> ' + url + ', count >> ' + str(count))
        time.sleep(1)
        try:
            r = requests.get(url, headers=headers, stream=True)
        except Exception as ex:
            print('Downloading ts file >>> ' + url + ' failed, err >> ' + str(ex))
        else:
            with open(url.split("hls/")[1], "wb") as code:
                code.write(r.content)
            print("Download progress: %.2f" % (count / len(url_list)))
        count = count + 1


# TS stream / m3u8
# Download the .ts list with multiple threads
def download_video_ts(name, result_list, num_thread=100):
    global count
    count = 0
    # Number of ts files to download
    file_size = len(result_list)
    # Split the work across the worker threads
    part = file_size // num_thread  # if it does not divide evenly, the last slice takes the remainder
    counter = 0
    for i in range(num_thread):
        start = part * i
        if i == num_thread - 1:  # last slice
            end = file_size
        else:
            end = start + part
        print('start >> ' + str(start) + '  end >> ' + str(end) + '  dir name >> ' + name + '  ts file count >> ' + str(len(result_list)))
        print('Waiting 5 seconds before starting the next worker, total threads >> ' + str(num_thread) + ', current thread >> ' + str(i))
        time.sleep(5)
        t = threading.Thread(target=target_handel_download,
                             kwargs={'start': start, 'end': end, 'name': name, 'url_list': result_list})
        t.daemon = True
        t.start()
        counter = counter + 1

    # Wait until every worker thread has finished
    main_thread = threading.current_thread()
    # Iterate over all live Thread objects
    for t in threading.enumerate():
        if t is main_thread:
            continue
        t.join()


# Merge the small files
# copy /b D:\newpython\doutu\sao\ts_files\*.ts d:\fnew.ts
# On Windows you can simply run: copy /b *.ts video.mp4  to join all the ts files into one mp4 file
def merge_ts_list(result_list, videoNamePy, name):
    tmp = []
    for file in result_list[0:568]:
        tmp.append(file.replace("\n", ""))
    # Merge the ts files, then delete them (Windows cmd commands)
    shell_str = 'copy /b *.ts ' + name + '.mp4' + '\n' + 'del ' + videoNamePy + '\\*.ts'
    return shell_str


# Write the merge command to a file
def wite_to_file(cmdString):
    f = open("combined.cmd", 'w', encoding="utf-8")
    f.write(cmdString)
    f.close()
    print('Running cmd command............', cmdString)
    # Switch the console code page to UTF-8 to avoid garbled output
    os.system('chcp 65001')
    r = os.system(cmdString)
    print(r)

# Show the search results
def show_video_list(result):
    print('------------------Results--------------------')
    for r in result:
        print(str(r).replace('[', '').replace(']', '').replace(',', ''))
    print('\n')


# Check whether a string is a number
def is_number(s):
    try:
        int(s)
        return True
    except ValueError:
        pass
    try:
        import unicodedata
        unicodedata.numeric(s)
        return True
    except (TypeError, ValueError):
        pass
    return False


# Download every .ts segment of one episode and write the merge command
def download_vodeo_ts(videoName, cwd, content):
    print('Selected episode:',
          str(content).replace('[', '').replace(']', '').replace(',', ''))
    # Episode details
    print("Crawling details of episode -{}-".format(str(content[1])))
    # Collect the .ts list for this episode
    result_urls = search_video_ts(head_url + str(content[2]))
    if len(result_urls) < 1:
        print('Episode TS -{}- has no content!'.format(str(content[1])))
    else:
        # Convert the name to pinyin (optional)
        # videoNamePy = sys.path[0] + '\\' + pinyin.get(videoName, format="numerical")
        videoNamePy = sys.path[0] + '\\' + videoName
        # Download the TS files
        start = datetime.datetime.now().replace(microsecond=0)
        print('Download start..................>>', start)
        print('Directory name >>>>>>>', videoNamePy)
        if not os.path.exists(videoNamePy):
            os.mkdir(videoNamePy)
            os.chdir(videoNamePy)
        else:
            os.chdir(videoNamePy)
        download_video_ts(videoNamePy, result_urls, 5)
        end = datetime.datetime.now().replace(microsecond=0)
        print('Download end..................>>', end)
        # Merge the small files
        cmd = merge_ts_list(result_urls, videoNamePy, videoNamePy + '\\' + videoName + str(content[1]))
        # Write the merge command to a file
        wite_to_file(cmd)
        print(str(content[1]) + " - episode download finished")


# main loop
def download_vodeo_man():
    cwd = os.getcwd()  # current working directory
    video_list = []
    print("------------------------current working directory------------------" + cwd)
    while True:
        exit_flag = False
        videoName = input("Enter an anime name || type exit to quit: ")
        if 'exit' == videoName:
            break
        # Search for the anime name
        print('Search is throttled to once every 3 seconds.....................')
        time.sleep(3)
        try:
            video_list = get_video_list(videoName)
        except Exception as ex:
            print('Search failed -{}-, please try again!'.format(ex))
        if len(video_list) < 2:
            print('No matching video found, please try again!')
        else:
            show_video_list(video_list)
            while True:
                if exit_flag == True:
                    break
                num = input("Select the video number to download || type t to go back: ")
                if num == 't':
                    break
                elif is_number(num):
                    if int(num) > len(video_list):
                        print('That number was not found, please try again:')
                    else:
                        content = video_list[int(num)]
                        video_name = content[1]
                        print('You selected:', str(content).replace('[', '').replace(']', '').replace(',', ''))
                        # Video details
                        print("Crawling details of -{}-".format(str(content[1])))
                        try:
                            result = search_video_info(head_url + str(content[4]))
                        except Exception:
                            print('Failed to fetch the video details, going back!')
                            break
                        if len(result) < 2:
                            print('Video -{}- has no content!'.format(str(content[1])))
                        else:
                            show_video_list(result)
                            while True:
                                num = input("Select the episode number to download || type all for every episode || type t to go back: ")
                                if num == 't':
                                    break
                                if num == 'all':
                                    print('Downloading every episode')
                                    number = 1
                                    for content in result:
                                        try:
                                            print('Start downloading -{}'.format(video_name + str(content[1])))
                                            download_vodeo_ts(video_name, cwd, result[number])
                                        except Exception as ex:
                                            print('Episode -{}- failed -{}-, moving on!'.format(content[1], ex))
                                        number = number + 1
                                    exit_flag = True
                                    break
                                elif int(num) > len(result):
                                    print('That episode number was not found, please try again:')
                                else:
                                    content = result[int(num)]
                                    try:
                                        print('Start downloading -{}'.format(video_name + str(content[1])))
                                        download_vodeo_ts(video_name, cwd, content)
                                    except Exception as ex:
                                        print('Episode -{}- failed -{}-, going back!'.format(content[1], ex))
                                    break


if __name__ == '__main__':
    download_vodeo_man()
    print("OK")

Closing words

I used to think that when two people in love break up, there has to be something dramatic behind it, a third person, say, or a terminal illness. It turns out none of that is needed; being busy, worn out, and uneasy is enough.

Thanks for reading patiently, everyone~
