[Paper Reading] METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments


METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments

Satanjeev Banerjee, Alon Lavie
Language Technologies Institute  
Carnegie Mellon University  
Pittsburgh, PA 15213  
banerjee+@cs.cmu.edu, alavie@cs.cmu.edu


Important Snippets:

1. In order to be both effective and useful, an automatic metric for MT evaluation has to satisfy several basic criteria. The primary and most intuitive requirement is that the metric have very high correlation with quantified human notions of MT quality. Furthermore, a good metric should be as sensitive as possible to differences in MT quality between different systems, and between different versions of the same system. The metric should be consistent (the same MT system on similar texts should produce similar scores), reliable (MT systems that score similarly can be trusted to perform similarly) and general (applicable to different MT tasks in a wide range of domains and scenarios). Needless to say, satisfying all of the above criteria is extremely difficult, and all of the metrics that have been proposed so far fall short of adequately addressing most if not all of these requirements.


2. It is based on an explicit word-to-word matching between the MT output being evaluated and one or more reference translations. Our current matching supports not only matching between words that are identical in the two strings being compared, but can also match words that are simple morphological variants of each other.


3. Each possible matching is scored based on a combination of several features. These currently include unigram precision, unigram recall, and a direct measure of how out-of-order the words of the MT output are with respect to the reference.


4. Furthermore, our results demonstrated that recall plays a more important role than precision in obtaining high levels of correlation with human judgments.


5. BLEU does not take recall into account directly.


6. BLEU does not use recall because the notion of recall is unclear when matching simultaneously against a set of reference translations (rather than a single reference). To compensate for recall, BLEU uses a Brevity Penalty, which penalizes translations for being "too short".
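
For context, BLEU's brevity penalty (Papineni et al., 2002) can be sketched as follows. This is a simplified version that assumes a single reference length rather than BLEU's effective corpus-level reference length, and the function name is mine:

```python
import math

def brevity_penalty(candidate_len: int, reference_len: int) -> float:
    """BLEU-style brevity penalty: 1.0 when the candidate is at least as long
    as the reference, exp(1 - r/c) otherwise, shrinking toward 0 as the
    candidate gets shorter."""
    if candidate_len == 0:
        return 0.0
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1.0 - reference_len / candidate_len)
```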


7. BLEU and NIST suffer from several weaknesses:

   > The Lack of Recall

   > Use of Higher Order N-grams

   > Lack of Explicit Word-matching Between Translation and Reference

   > Use of Geometric Averaging of N-grams


8. METEOR was designed to explicitly address the weaknesses in BLEU identified above. It evaluates a translation by computing a score based on explicit word-to-word matches between the translation and a reference translation. If more than one reference translation is available, the given translation is scored against each reference independently, and the best score is reported.
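
A minimal sketch of that multi-reference behaviour, assuming a hypothetical single_pair_scorer(candidate, reference) function that scores one translation against one reference:

```python
def best_reference_score(candidate, references, single_pair_scorer):
    """Score the candidate against each reference independently and
    report the best (highest) score, as METEOR does."""
    return max(single_pair_scorer(candidate, ref) for ref in references)
```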


9. Given a pair of translations to be compared (a system translation and a reference translation), METEOR creates an alignment between the two strings. We define an alignment as a mapping between unigrams, such that every unigram in each string maps to zero or one unigram in the other string, and to no unigrams in the same string.
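
One convenient way to represent such an alignment (a sketch, not the authors' data structure) is as a list of index pairs in which no position on either side is used twice:

```python
from typing import List, Tuple

# (index of a unigram in the system translation, index of its counterpart in the reference)
Alignment = List[Tuple[int, int]]

def is_valid_alignment(pairs: Alignment) -> bool:
    """Check the definition above: every unigram maps to at most one unigram
    on the other side, so no system index and no reference index repeats."""
    sys_positions = [i for i, _ in pairs]
    ref_positions = [j for _, j in pairs]
    return (len(set(sys_positions)) == len(sys_positions)
            and len(set(ref_positions)) == len(ref_positions))
```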


10. This alignment is incrementally produced through a series of stages, each stage consisting of two distinct phases.


11. In the first phase, an external module lists all the possible unigram mappings between the two strings.


12. Different modules map unigrams based on different criteria. The "exact" module maps two unigrams if they are exactly the same (e.g. "computers" maps to "computers" but not "computer"). The "porter stem" module maps two unigrams if they are the same after they are stemmed using the Porter stemmer (e.g. "computers" maps to both "computers" and "computer"). The "WN synonymy" module maps two unigrams if they are synonyms of each other.
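
The three modules could be approximated with NLTK along these lines. This is a sketch, not the authors' implementation: it assumes "synonyms" means sharing at least one WordNet synset, and it requires NLTK with the WordNet data downloaded.

```python
from nltk.stem.porter import PorterStemmer   # pip install nltk
from nltk.corpus import wordnet              # requires: nltk.download("wordnet")

stemmer = PorterStemmer()

def exact_match(a: str, b: str) -> bool:
    """The "exact" module: the two unigrams are identical strings."""
    return a == b

def porter_stem_match(a: str, b: str) -> bool:
    """The "porter stem" module: identical after Porter stemming
    ("computers" and "computer" both stem to "comput")."""
    return stemmer.stem(a) == stemmer.stem(b)

def wn_synonymy_match(a: str, b: str) -> bool:
    """The "WN synonymy" module, approximated as sharing a WordNet synset."""
    return bool(set(wordnet.synsets(a)) & set(wordnet.synsets(b)))
```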


13. In the second phase of each stage, the largest subset of these unigram mappings is selected such that the resulting set constitutes an alignment as defined above.


14. If more than one such maximal subset exists, METEOR selects the set that has the least number of unigram mapping crosses.
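
Two mappings "cross" when their order on the system side is the opposite of their order on the reference side, so the crossing count can be computed directly from the index pairs (same representation as the sketch above):

```python
from itertools import combinations
from typing import List, Tuple

def count_crosses(pairs: List[Tuple[int, int]]) -> int:
    """Count crossing pairs of unigram mappings: (i1, j1) and (i2, j2) cross
    when the system-side order and the reference-side order disagree."""
    return sum(
        1
        for (i1, j1), (i2, j2) in combinations(pairs, 2)
        if (i1 - i2) * (j1 - j2) < 0
    )
```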


15. By default, the first stage uses the "exact" mapping module, the second the "porter stem" module, and the third the "WN synonymy" module.
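
Putting the stages together, a simplified driver might apply the modules in that order, letting each stage only touch words that earlier stages left unmatched. Note this sketch pairs words greedily instead of searching for the largest, least-crossing subset that METEOR actually selects:

```python
def staged_alignment(system_words, reference_words, modules):
    """Run the mapping modules in order; each stage may only align words
    that are still unmatched on both sides after the previous stages."""
    aligned = []                          # list of (system index, reference index)
    used_sys, used_ref = set(), set()
    for match in modules:                 # e.g. [exact_match, porter_stem_match, wn_synonymy_match]
        for i, sys_word in enumerate(system_words):
            if i in used_sys:
                continue
            for j, ref_word in enumerate(reference_words):
                if j not in used_ref and match(sys_word, ref_word):
                    aligned.append((i, j))
                    used_sys.add(i)
                    used_ref.add(j)
                    break
    return aligned
```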

16. unigram precision (P)

    unigram recall (R)

    Fmean, computed by combining the precision and recall via a harmonic mean that places most of the weight on recall


To take into account longer matches, METEOR computes a penalty for a given alignment as follows: the matched unigrams are grouped into the fewest possible chunks, such that the unigrams in each chunk are in adjacent positions in the system translation and are also mapped to unigrams that are in adjacent positions in the reference translation.

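
Per the paper, these pieces combine as Fmean = 10PR / (R + 9P), Penalty = 0.5 · (#chunks / #matched unigrams)³, and Score = Fmean · (1 − Penalty). A compact sketch of the computation (variable names are mine):

```python
def meteor_score(matches: int, system_len: int, reference_len: int, chunks: int) -> float:
    """Combine unigram precision and recall into Fmean (recall weighted 9x),
    then apply the fragmentation penalty; constants follow Banerjee & Lavie (2005)."""
    if matches == 0:
        return 0.0
    precision = matches / system_len      # P: matched unigrams / unigrams in system output
    recall = matches / reference_len      # R: matched unigrams / unigrams in reference
    fmean = 10 * precision * recall / (recall + 9 * precision)
    penalty = 0.5 * (chunks / matches) ** 3
    return fmean * (1 - penalty)
```

For example, a 10-word translation whose words all match a 10-word reference in a single chunk gets P = R = 1, Fmean = 1, a penalty of 0.5 · (1/10)³ = 0.0005, and a score of 0.9995.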


Conclusion: METEOR weights recall more heavily than precision, whereas BLEU does the opposite. METEOR also incorporates more information into the match, such as stemming, synonymy, and word order.
