MapReduce Tarball
$ mvn clean install -DskipTests
$ cd hadoop-mapreduce-project
$ mvn clean install assembly:assembly -Pnative
Note: You will need protoc 2.5.0 installed.
To skip the native build of MapReduce, you can omit the -Pnative argument for Maven.
The tarball should be available in the target/ directory.
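If protoc is not available and the native libraries are not needed, the build can be run without the native profile. This is a minimal sketch of that variant, assuming the same source tree as above:
$ mvn clean install -DskipTests
$ cd hadoop-mapreduce-project
$ mvn clean install assembly:assembly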
Setting up the Environment
Assuming you have installed hadoop-common/hadoop-hdfs and exported $HADOOP_COMMON_HOME/$HADOOP_HDFS_HOME, untar the hadoop-mapreduce tarball and set the environment variable $HADOOP_MAPRED_HOME to the untarred directory. Set $HADOOP_YARN_HOME to the same value as $HADOOP_MAPRED_HOME.
Note: The following steps assume you already have HDFS running.
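As a sketch, the environment setup might look like the following; the tarball name and install path are hypothetical examples, not fixed values:
$ tar -xzf hadoop-mapreduce-<version>.tar.gz -C /opt            # hypothetical install path
$ export HADOOP_MAPRED_HOME=/opt/hadoop-mapreduce-<version>     # the untarred directory
$ export HADOOP_YARN_HOME=$HADOOP_MAPRED_HOME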
Setting up the Configuration
To start the ResourceManager and NodeManager, you will have to update the configuration. Assume $HADOOP_CONF_DIR is your configuration directory and already contains the configuration for HDFS and core-site.xml. There are two more configuration files you need to set up: mapred-site.xml and yarn-site.xml.
Setting up mapred-site.xml
Add the following properties to your mapred-site.xml.
<property>
  <name>mapreduce.cluster.temp.dir</name>
  <value></value>
  <description>No description</description>
  <final>true</final>
</property>

<property>
  <name>mapreduce.cluster.local.dir</name>
  <value></value>
  <description>No description</description>
  <final>true</final>
</property>
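The values above are intentionally left empty. As an illustration only, a filled-in entry might look like this; the path is a hypothetical example and should point at a writable local directory on each node:
<property>
  <name>mapreduce.cluster.local.dir</name>
  <value>/tmp/mapred/local</value>  <!-- hypothetical example path -->
  <final>true</final>
</property>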
Setting up yarn-site.xml
Add the following properties to your yarn-site.xml.
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>host:port</value>
  <description>host is the hostname of the resource manager and
  port is the port on which the NodeManagers contact the Resource Manager.
  </description>
</property>

<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>host:port</value>
  <description>host is the hostname of the resourcemanager and port is the port
  on which the Applications in the cluster talk to the Resource Manager.
  </description>
</property>

<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
  <description>In case you do not want to use the default scheduler</description>
</property>

<property>
  <name>yarn.resourcemanager.address</name>
  <value>host:port</value>
  <description>the host is the hostname of the ResourceManager and the port is the port
  on which the clients can talk to the Resource Manager.</description>
</property>

<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value></value>
  <description>the local directories used by the nodemanager</description>
</property>

<property>
  <name>yarn.nodemanager.address</name>
  <value>0.0.0.0:port</value>
  <description>the nodemanagers bind to this port</description>
</property>

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>10240</value>
  <description>the amount of memory on the NodeManager in MB</description>
</property>

<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/app-logs</value>
  <description>directory on hdfs where the application logs are moved to</description>
</property>

<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value></value>
  <description>the directories used by Nodemanagers as log directories</description>
</property>

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
  <description>shuffle service that needs to be set for Map Reduce to run</description>
</property>
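As a filled-in sketch for a single-node setup, the ResourceManager addresses might look like the following. The hostname and ports are assumptions (the ports shown are common defaults), not values mandated by this guide:
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>localhost:8031</value>  <!-- assumed hostname and port -->
</property>

<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>localhost:8030</value>  <!-- assumed hostname and port -->
</property>

<property>
  <name>yarn.resourcemanager.address</name>
  <value>localhost:8032</value>  <!-- assumed hostname and port -->
</property>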
Setting up capacity-scheduler.xml
Make sure you populate the root queues in capacity-scheduler.xml.
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>unfunded,default</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.capacity</name>
  <value>100</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.unfunded.capacity</name>
  <value>50</value>
</property>

<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>50</value>
</property>
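Note that the capacities of the queues directly under root add up to 100 here (50 + 50). If you later adjust these values on a running cluster, the CapacityScheduler queues can usually be refreshed without restarting the ResourceManager; this is a sketch, assuming the tarball layout used above:
$ $HADOOP_YARN_HOME/bin/yarn rmadmin -refreshQueues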
Running the Daemons
Assuming that the environment variables $HADOOP_COMMON_HOME, $HADOOP_HDFS_HOME, $HADOOP_MAPRED_HOME, $HADOOP_YARN_HOME, $JAVA_HOME and $HADOOP_CONF_DIR have been set appropriately, set $YARN_CONF_DIR to the same value as $HADOOP_CONF_DIR.
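For example (a sketch; the only assumption is that $HADOOP_CONF_DIR already points at your configuration directory):
$ export YARN_CONF_DIR=$HADOOP_CONF_DIR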
Run the ResourceManager and NodeManager as follows:
$ cd $HADOOP_MAPRED_HOME
$ sbin/yarn-daemon.sh start resourcemanager
$ sbin/yarn-daemon.sh start nodemanager
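As a quick sanity check, you can list the running Java processes; this assumes jps from the JDK is on your PATH. ResourceManager and NodeManager should appear in the output:
$ jps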
You should now be up and running. You can run the randomwriter example as follows:
$ $HADOOP_COMMON_HOME/bin/hadoop jar hadoop-examples.jar randomwriter out
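Once the job finishes, the generated data can be inspected on HDFS; a sketch, assuming $HADOOP_HDFS_HOME points at your HDFS install as above:
$ $HADOOP_HDFS_HOME/bin/hdfs dfs -ls out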
Good luck.