Python etree.ParserError code examples (what ParserError means and how to use it)


Hello everyone, good to see you again. I'm your friend, Quanzhan Jun (全栈君).

This article collects typical usage examples of the etree.ParserError class from Python's lxml library. If you have been wondering what etree.ParserError does, how to use it, or what working examples look like, the curated code samples below may help. You can also explore other usage examples from the lxml.etree module.

The following shows 9 code examples of the etree.ParserError class, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code samples.
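Before the examples, here is a minimal, self-contained sketch (not taken from the examples below) of how lxml.etree.ParserError typically surfaces: lxml.html raises it when asked to parse markup that cannot yield a document, such as an empty string.

from lxml import etree, html

try:
    html.fromstring("")  # empty markup cannot produce a document
except etree.ParserError as exc:
    print("caught ParserError:", exc)  # typically "Document is empty"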

Example 1: feed (6 likes)

# Required module: from lxml import etree [as alias]
# Or: from lxml.etree import ParserError [as alias]

def feed(self, markup):
    if isinstance(markup, bytes):
        markup = BytesIO(markup)
    elif isinstance(markup, unicode):
        markup = StringIO(markup)

    # Call feed() at least once, even if the markup is empty,
    # or the parser won't be initialized.
    data = markup.read(self.CHUNK_SIZE)
    try:
        self.parser = self.parser_for(self.soup.original_encoding)
        self.parser.feed(data)
        while len(data) != 0:
            # Now call feed() on the rest of the data, chunk by chunk.
            data = markup.read(self.CHUNK_SIZE)
            if len(data) != 0:
                self.parser.feed(data)
        self.parser.close()
    except (UnicodeDecodeError, LookupError, etree.ParserError), e:
        raise ParserRejectedMarkup(str(e))

Source project: MarcelloLins / ServerlessCrawler-VancouverRealState (22 lines)

Example 2: feed (6 likes)

# Required module: from lxml import etree [as alias]
# Or: from lxml.etree import ParserError [as alias]

def feed(self, markup):
    if isinstance(markup, bytes):
        markup = BytesIO(markup)
    elif isinstance(markup, str):
        markup = StringIO(markup)

    # Call feed() at least once, even if the markup is empty,
    # or the parser won't be initialized.
    data = markup.read(self.CHUNK_SIZE)
    try:
        self.parser = self.parser_for(self.soup.original_encoding)
        self.parser.feed(data)
        while len(data) != 0:
            # Now call feed() on the rest of the data, chunk by chunk.
            data = markup.read(self.CHUNK_SIZE)
            if len(data) != 0:
                self.parser.feed(data)
        self.parser.close()
    except (UnicodeDecodeError, LookupError, etree.ParserError) as e:
        raise ParserRejectedMarkup(str(e))

Source project: the-ethan-hunt / B.E.N.J.I. (22 lines)
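Examples 1, 2, and 9 are variants of the same feed() method from Beautiful Soup's lxml-based tree builder: Example 1 uses the Python 2 "except ..., e" syntax and the unicode type, while Examples 2 and 9 use the Python 3 "as e" form. Callers normally do not invoke feed() directly; it runs inside the BeautifulSoup constructor. A hedged usage sketch (assuming Beautiful Soup 4 and lxml are installed):

from bs4 import BeautifulSoup

# feed() from the examples above is invoked internally by the constructor.
soup = BeautifulSoup(b"<html><body><p>hello</p></body></html>", "lxml")
print(soup.p.get_text())  # -> hello

When lxml reports UnicodeDecodeError, LookupError, or etree.ParserError while feeding, the builder converts it into Beautiful Soup's ParserRejectedMarkup exception, as the except clauses above show.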

Example 3: extract_html_content (6 likes)

# Required module: from lxml import etree [as alias]
# Or: from lxml.etree import ParserError [as alias]

def extract_html_content(self, html_body, fix_html=True):
    """Ingestor implementation."""
    if html_body is None:
        return
    try:
        try:
            doc = html.fromstring(html_body)
        except ValueError:
            # Ship around encoding declarations.
            # https://stackoverflow.com/questions/3402520
            html_body = self.RE_XML_ENCODING.sub("", html_body, count=1)
            doc = html.fromstring(html_body)
    except (ParserError, ParseError, ValueError):
        raise ProcessingException("HTML could not be parsed.")

    self.extract_html_header(doc)
    self.cleaner(doc)
    text = self.extract_html_text(doc)
    self.result.flag(self.result.FLAG_HTML)
    self.result.emit_html_body(html_body, text)

Source project: occrp-attic / ingestors (22 lines)
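The inner try/except ValueError in Example 3 works around a well-known lxml behaviour: html.fromstring() refuses a Python str that still carries an XML encoding declaration (the StackOverflow link in the comment describes it). A standalone sketch of the same workaround follows; the RE_XML_ENCODING pattern below is an assumption modelled on the example, since the original class defines its own regex:

import re
from lxml import html

# Hypothetical stand-in for the ingestor's RE_XML_ENCODING pattern:
# strip a leading <?xml ... ?> declaration.
RE_XML_ENCODING = re.compile(r"^\s*<\?xml[^>]*\?>", re.IGNORECASE)

markup = '<?xml version="1.0" encoding="utf-8"?><html><body>hi</body></html>'
try:
    doc = html.fromstring(markup)
except ValueError:
    # lxml: "Unicode strings with encoding declaration are not supported ..."
    doc = html.fromstring(RE_XML_ENCODING.sub("", markup, count=1))
print(doc.findtext(".//body"))  # -> hi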

Example 4: ingest (6 likes)

# Required module: from lxml import etree [as alias]
# Or: from lxml.etree import ParserError [as alias]

def ingest(self, file_path):
    """Ingestor implementation."""
    file_size = self.result.size or os.path.getsize(file_path)
    if file_size > self.MAX_SIZE:
        raise ProcessingException("XML file is too large.")

    try:
        doc = etree.parse(file_path)
    except (ParserError, ParseError):
        raise ProcessingException("XML could not be parsed.")

    text = self.extract_html_text(doc.getroot())
    transform = etree.XSLT(self.XSLT)
    html_doc = transform(doc)
    html_body = html.tostring(html_doc, encoding=str, pretty_print=True)
    self.result.flag(self.result.FLAG_HTML)
    self.result.emit_html_body(html_body, text)

Source project: occrp-attic / ingestors (19 lines)
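Example 4 depends on a class-level stylesheet (self.XSLT) and result helpers defined elsewhere in the ingestors project. Below is a self-contained sketch of the same XML-to-HTML transform pattern, with a hypothetical inline stylesheet standing in for self.XSLT:

from lxml import etree, html

# Hypothetical stylesheet standing in for the project's self.XSLT.
STYLESHEET = etree.XML("""\
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <html><body><p><xsl:value-of select="//title"/></p></body></html>
  </xsl:template>
</xsl:stylesheet>""")

doc = etree.fromstring("<doc><title>hello</title></doc>")
transform = etree.XSLT(STYLESHEET)
html_doc = transform(doc)
print(html.tostring(html_doc, encoding=str, pretty_print=True))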

Example 5: _retrieve_html_page (6 likes)

# Required module: from lxml import etree [as alias]
# Or: from lxml.etree import ParserError [as alias]

def _retrieve_html_page(self):
    """
    Download the requested player's stats page.

    Download the requested page and strip all of the comment tags before
    returning a PyQuery object which will be used to parse the data.
    Oftentimes, important data is contained in tables which are hidden in
    HTML comments and not accessible via PyQuery.

    Returns
    -------
    PyQuery object
        The requested page is returned as a queriable PyQuery object with
        the comment tags removed.
    """
    url = self._build_url()
    try:
        url_data = pq(url)
    except (HTTPError, ParserError):
        return None
    # For NFL, a 404 page doesn't actually raise a 404 error, so it needs
    # to be manually checked.
    if 'Page Not Found (404 error)' in str(url_data):
        return None
    return pq(utils._remove_html_comment_tags(url_data))

Source project: roclark / sportsreference (27 lines)

Example 6: _retrieve_html_page (6 likes)

# Required module: from lxml import etree [as alias]
# Or: from lxml.etree import ParserError [as alias]

def _retrieve_html_page(self):
    """
    Download the requested player's stats page.

    Download the requested page and strip all of the comment tags before
    returning a pyquery object which will be used to parse the data.

    Returns
    -------
    PyQuery object
        The requested page is returned as a queriable PyQuery object with
        the comment tags removed.
    """
    url = self._build_url()
    try:
        url_data = pq(url)
    except (HTTPError, ParserError):
        return None
    return pq(utils._remove_html_comment_tags(url_data))

Source project: roclark / sportsreference (21 lines)

Example 7: _retrieve_html_page (6 likes)

# Required module: from lxml import etree [as alias]
# Or: from lxml.etree import ParserError [as alias]

def _retrieve_html_page(self):
    """
    Download the requested player's stats page.

    Download the requested page and strip all of the comment tags before
    returning a pyquery object which will be used to parse the data.

    Returns
    -------
    PyQuery object
        The requested page is returned as a queriable PyQuery object with
        the comment tags removed.
    """
    url = PLAYER_URL % self._player_id
    try:
        url_data = pq(url)
    except (HTTPError, ParserError):
        return None
    return pq(utils._remove_html_comment_tags(url_data))

Source project: roclark / sportsreference (21 lines)

Example 8: _pull_conference_page (6 likes)

# Required module: from lxml import etree [as alias]
# Or: from lxml.etree import ParserError [as alias]

def _pull_conference_page(self, conference_abbreviation, year):
    """
    Download the conference page.

    Download the conference page for the requested conference and season
    and create a PyQuery object.

    Parameters
    ----------
    conference_abbreviation : string
        A string of the requested conference's abbreviation, such as
        'big-12'.
    year : string
        A string of the requested year to pull conference information from.
    """
    try:
        return pq(CONFERENCE_URL % (conference_abbreviation, year))
    except (HTTPError, ParserError):
        return None

Source project: roclark / sportsreference (21 lines)
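Examples 5 through 8 all follow the same pattern from the sportsreference project: PyQuery downloads a URL directly, and both network failures (HTTPError) and lxml parse failures such as an empty response body (ParserError) are treated as "page unavailable". A reduced, self-contained sketch of that pattern; fetch_page and the HTTPError import are assumptions, since which HTTPError class applies depends on the HTTP backend pyquery uses:

from urllib.error import HTTPError  # assumption; the original project's import may differ
from lxml.etree import ParserError
from pyquery import PyQuery as pq

def fetch_page(url):
    """Return the page as a PyQuery object, or None if it cannot be fetched or parsed."""
    try:
        return pq(url)
    except (HTTPError, ParserError):
        return None

page = fetch_page("https://example.com/")
if page is not None:
    print(page("title").text())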

Example 9: feed (6 likes)

# Required module: from lxml import etree [as alias]
# Or: from lxml.etree import ParserError [as alias]

def feed(self, markup):
    if isinstance(markup, bytes):
        markup = BytesIO(markup)
    elif isinstance(markup, str):
        markup = StringIO(markup)

    # Call feed() at least once, even if the markup is empty,
    # or the parser won't be initialized.
    data = markup.read(self.CHUNK_SIZE)
    try:
        self.parser = self.parser_for(self.soup.original_encoding)
        self.parser.feed(data)
        while len(data) != 0:
            # Now call feed() on the rest of the data, chunk by chunk.
            data = markup.read(self.CHUNK_SIZE)
            if len(data) != 0:
                self.parser.feed(data)
        self.parser.close()
    except (UnicodeDecodeError, LookupError, etree.ParserError) as e:
        raise ParserRejectedMarkup(e)

Source project: Tautulli / Tautulli (22 lines)

Note: the lxml.etree.ParserError examples in this article were collected from source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective developers; copyright remains with the original authors. Please follow each project's license when using or redistributing the code, and do not repost without permission.


Publisher: 全栈程序员 (user IM). Please credit the source when reposting: https://javaforall.cn/149151.html (original link: https://javaforall.cn)
