
Running Hadoop on a Single Linux Machine

Date: 2017/2/28 16:28:44   Editor: Linux教程

The Hadoop-0.19.2 release can be downloaded from Apache. The Linux machine used here runs RHEL 5, with Java 1.6.0_16 installed and JAVA_HOME=/usr/java/jdk1.6.0_16.
Walkthrough

1. Passwordless SSH login to localhost
Make sure the ssh service is running and that you can log in to the local machine over SSH without a password. If you can't, set it up as follows:
(1) Open a terminal and run:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
(2) SSH into localhost:
$ ssh localhost
The first time you log in, SSH warns that the authenticity of host 127.0.0.1 can't be established and asks whether to continue connecting; type yes. A successful passwordless login looks like this:
[root@localhost hadoop-0.19.2]# ssh localhost
Last login: Sun Aug 1 18:35:37 2010 from 192.168.0.104
[root@localhost ~]#
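If ssh localhost still prompts for a password after these steps, the usual culprit is file permissions: by default sshd ignores an authorized_keys file that is writable by group or others. A minimal fix sketch, assuming a stock OpenSSH setup:
$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys
$ ssh localhost    # should now log in without a password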

2. Hadoop-0.19.2 configuration
Download hadoop-0.19.2.tar.gz (about 40 MB) and extract it to a directory of your choice; here it is extracted to /root/hadoop-0.19.2.
The configuration steps, in order:
(1) Edit hadoop-env.sh
Uncomment the JAVA_HOME line (remove the leading "#") and point it at your Java installation, so that the line reads:
export JAVA_HOME=/usr/java/jdk1.6.0_16
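You can make the edit by hand, or with a one-liner such as this sketch, which assumes the stock conf/hadoop-env.sh still contains the commented-out JAVA_HOME line:
$ sed -i 's|^# *export JAVA_HOME=.*|export JAVA_HOME=/usr/java/jdk1.6.0_16|' conf/hadoop-env.sh
$ grep JAVA_HOME conf/hadoop-env.sh    # verify the change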
(2) Edit hadoop-site.xml
Add the following three properties between <configuration> and </configuration>; the complete file then reads:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
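fs.default.name gives clients the NameNode URI, mapred.job.tracker gives them the JobTracker address, and dfs.replication is set to 1 because a single-machine cluster has only one datanode. As a quick sanity check that the XML is well-formed (this assumes xmllint from libxml2 is installed):
$ xmllint --noout conf/hadoop-site.xml && echo "hadoop-site.xml is well-formed"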

3. Running the wordcount example
The wordcount example ships with the Hadoop distribution; running it is an easy way to see, and start to understand, how Hadoop executes a MapReduce job. Following the official "Hadoop Quick Start" guide makes this straightforward; my run is summarized below.
Change into the Hadoop directory, here /root/hadoop-0.19.2.
(1) Format HDFS
Run the format command:
[root@localhost hadoop-0.19.2]# bin/hadoop namenode -format
The format output looks like this:
10/08/01 19:04:02 INFO namenode.NameNode: STARTUP_MSG:

Re-format filesystem in /tmp/hadoop-root/dfs/name ? (Y or N) y
Format aborted in /tmp/hadoop-root/dfs/name
10/08/01 19:04:05 INFO namenode.NameNode: SHUTDOWN_MSG:
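Note that the prompt is case-sensitive: as the transcript shows, answering with a lowercase y aborts the re-format ("Format aborted"); you must type an uppercase Y. If a format was aborted or you want to start from scratch, one option is the following sketch, which assumes the default hadoop.tmp.dir of /tmp/hadoop-root:
[root@localhost hadoop-0.19.2]# rm -rf /tmp/hadoop-root    # destroys all existing HDFS data
[root@localhost hadoop-0.19.2]# bin/hadoop namenode -format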

(2) Start the Hadoop daemons
Run:
[root@localhost hadoop-0.19.2]# bin/start-all.sh
The startup output looks like this:
starting namenode, logging to /root/hadoop-0.19.2/bin/../logs/hadoop-root-namenode-localhost.out
localhost: starting datanode, logging to /root/hadoop-0.19.2/bin/../logs/hadoop-root-datanode-localhost.out
localhost: starting secondarynamenode, logging to /root/hadoop-0.19.2/bin/../logs/hadoop-root-secondarynamenode-localhost.out
starting jobtracker, logging to /root/hadoop-0.19.2/bin/../logs/hadoop-root-jobtracker-localhost.out
localhost: starting tasktracker, logging to /root/hadoop-0.19.2/bin/../logs/hadoop-root-tasktracker-localhost.out
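To confirm that all five daemons actually came up, jps (shipped with the JDK) lists the running Java processes; the PIDs will differ on your machine:
[root@localhost hadoop-0.19.2]# jps
# expect NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker, and Jps itself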
(3) Prepare the input data for the wordcount job
First, create a local directory named input and copy a few files into it:
[root@localhost hadoop-0.19.2]# mkdir input
[root@localhost hadoop-0.19.2]# cp CHANGES.txt LICENSE.txt NOTICE.txt README.txt input/
Then upload the local input directory to HDFS:
[root@localhost hadoop-0.19.2]# bin/hadoop fs -put input/ input
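To verify that the upload landed in your HDFS home directory (/user/root/input when running as root), list it before submitting the job:
[root@localhost hadoop-0.19.2]# bin/hadoop fs -ls input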
(4) Run the wordcount job
Run the following command:
[root@localhost hadoop-0.19.2]# bin/hadoop jar hadoop-0.19.2-examples.jar wordcount input output
The input directory is input and the output directory is output.
The job's progress output looks like this:
10/08/01 19:06:15 INFO mapred.FileInputFormat: Total input paths to process : 4
10/08/01 19:06:15 INFO mapred.JobClient: Running job: job_201008011904_0002
10/08/01 19:06:16 INFO mapred.JobClient: map 0% reduce 0%
10/08/01 19:06:22 INFO mapred.JobClient: map 20% reduce 0%
10/08/01 19:06:24 INFO mapred.JobClient: map 40% reduce 0%
10/08/01 19:06:25 INFO mapred.JobClient: map 60% reduce 0%
10/08/01 19:06:27 INFO mapred.JobClient: map 80% reduce 0%
10/08/01 19:06:28 INFO mapred.JobClient: map 100% reduce 0%
10/08/01 19:06:38 INFO mapred.JobClient: map 100% reduce 26%
10/08/01 19:06:40 INFO mapred.JobClient: map 100% reduce 100%
10/08/01 19:06:41 INFO mapred.JobClient: Job complete: job_201008011904_0002
10/08/01 19:06:41 INFO mapred.JobClient: Counters: 16
10/08/01 19:06:41 INFO mapred.JobClient: File Systems
10/08/01 19:06:41 INFO mapred.JobClient: HDFS bytes read=301489
10/08/01 19:06:41 INFO mapred.JobClient: HDFS bytes written=113098
10/08/01 19:06:41 INFO mapred.JobClient: Local bytes read=174004
10/08/01 19:06:41 INFO mapred.JobClient: Local bytes written=348172
10/08/01 19:06:41 INFO mapred.JobClient: Job Counters
10/08/01 19:06:41 INFO mapred.JobClient: Launched reduce tasks=1
10/08/01 19:06:41 INFO mapred.JobClient: Launched map tasks=5
10/08/01 19:06:41 INFO mapred.JobClient: Data-local map tasks=5
10/08/01 19:06:41 INFO mapred.JobClient: Map-Reduce Framework
10/08/01 19:06:41 INFO mapred.JobClient: Reduce input groups=8997
10/08/01 19:06:41 INFO mapred.JobClient: Combine output records=10860
10/08/01 19:06:41 INFO mapred.JobClient: Map input records=7363
10/08/01 19:06:41 INFO mapred.JobClient: Reduce output records=8997
10/08/01 19:06:41 INFO mapred.JobClient: Map output bytes=434077
10/08/01 19:06:41 INFO mapred.JobClient: Map input bytes=299871
10/08/01 19:06:41 INFO mapred.JobClient: Combine input records=39193
10/08/01 19:06:41 INFO mapred.JobClient: Map output records=39193
10/08/01 19:06:41 INFO mapred.JobClient: Reduce input records=10860
(5) View the job results
Use the following command:
bin/hadoop fs -cat output/*
An excerpt of the results:
vijayarenu 20
violations. 1
virtual 3
vis-a-vis 1
visible 1
visit 1
volume 1
volume, 1
volumes 2
volumes. 1
w.r.t 2
wait 9
waiting 6
waiting. 1
waits 3
want 1
warning 7
warning, 1
warnings 12
warnings. 3
warranties 1
warranty 1
warranty, 1
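To keep a local copy of the results you can also pull the output directory out of HDFS. The sketch below assumes the default single reduce task, which writes one result file named part-00000:
[root@localhost hadoop-0.19.2]# bin/hadoop fs -get output output-local
[root@localhost hadoop-0.19.2]# sort -nr -k2 output-local/part-00000 | head    # top 10 words by count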
(6) Stop the Hadoop daemons
Run:
[root@localhost hadoop-0.19.2]# bin/stop-all.sh
The output looks like this:
stopping jobtracker
localhost: stopping tasktracker
stopping namenode
localhost: stopping datanode
localhost: stopping secondarynamenode
This stops all five daemons listed above: jobtracker, tasktracker, namenode, datanode, and secondarynamenode.
Troubleshooting
You may run into exceptions while following the steps above; the common ones are analyzed below.
1. Call to localhost/127.0.0.1:9000 failed on local exception
(1) Symptom
This may appear when you run:
[root@localhost hadoop-0.19.2]# bin/hadoop jar hadoop-0.19.2-examples.jar wordcount input output
The error output looks like this:
10/08/01 19:50:55 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
10/08/01 19:50:56 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
10/08/01 19:50:57 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
10/08/01 19:50:58 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
10/08/01 19:50:59 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
10/08/01 19:51:00 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
10/08/01 19:51:01 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
10/08/01 19:51:02 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
10/08/01 19:51:03 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).
10/08/01 19:51:04 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s).
java.lang.RuntimeException: java.io.IOException: Call to localhost/127.0.0.1:9000 failed on local exception: Connection refused
at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:323)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:295)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:268)
at org.apache.hadoop.examples.WordCount.run(WordCount.java:146)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:155)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:141)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:165)
at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)
Caused by: java.io.IOException: Call to localhost/127.0.0.1:9000 failed on local exception: Connection refused
at org.apache.hadoop.ipc.Client.call(Client.java:699)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
at $Proxy0.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:319)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:104)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:177)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:74)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1367)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:56)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1379)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:215)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:120)
at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:319)
... 21 more
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:100)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:299)
at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:772)
at org.apache.hadoop.ipc.Client.call(Client.java:685)
... 33 more
(2) Analysis
The key line in the output above is:
Retrying connect to server: localhost/127.0.0.1:9000.
All ten connection attempts to the server failed, which means the communication path to the server is down. We configured the namenode address in hadoop-site.xml as follows:
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>
Most likely, then, the namenode process was never started, so no job could possibly run.
The sequence that produced this exception was: HDFS was formatted, but the wordcount job was submitted without ever running bin/start-all.sh.
The fix is to run bin/start-all.sh first and then submit the wordcount job, and it is worth verifying the daemons beforehand, as in the sketch below.
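A quick way to verify the daemons before submitting is to check that the NameNode process exists and is listening on the configured port; this sketch assumes net-tools (netstat) is installed:
[root@localhost hadoop-0.19.2]# jps | grep NameNode        # should list NameNode (and SecondaryNameNode)
[root@localhost hadoop-0.19.2]# netstat -nltp | grep 9000  # should show a java process listening on port 9000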
2. Input path does not exist
(1) Symptom
You create an input directory under the Hadoop directory, cp some files into it, and run:
[root@localhost hadoop-0.19.2]# bin/hadoop namenode -format
[root@localhost hadoop-0.19.2]# bin/start-all.sh
At this point you assume input exists, so the wordcount job should run:
[root@localhost hadoop-0.19.2]# bin/hadoop jar hadoop-0.19.2-examples.jar wordcount input output
Instead it throws a pile of exceptions:
org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/user/root/input
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:179)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:190)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:782)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1127)
at org.apache.hadoop.examples.WordCount.run(WordCount.java:149)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:155)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:141)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:165)
at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)
I reproduced this state by deleting the directories from HDFS:
[root@localhost hadoop-0.19.2]# bin/hadoop fs -rmr input
Deleted hdfs://localhost:9000/user/root/input
[root@localhost hadoop-0.19.2]# bin/hadoop fs -rmr output
Deleted hdfs://localhost:9000/user/root/output
(2) Analysis
The local input directory was never uploaded to HDFS, which is why you get org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/user/root/input.
Simply run the upload command again:
[root@localhost hadoop-0.19.2]# bin/hadoop fs -put input/ input
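More generally, listing your HDFS home directory before submitting a job confirms whether the input path really exists:
[root@localhost hadoop-0.19.2]# bin/hadoop fs -ls          # lists /user/root
[root@localhost hadoop-0.19.2]# bin/hadoop fs -ls input    # fails if input is missing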
