Prerequisite: a computer with VMware installed, and a Linux system installed in a VM.
Note: these are my own notes, summarizing my understanding after finishing the Shangxuetang (尚學堂) video course on big data.
To mark which machine each operation runs on, e.g. - - - Linux means the operation is done on Linux.
(1) Configure the network interface file:
- - - Linux:
cd /etc/sysconfig/network-scripts/
vi ifcfg-eth0    # interface config
{
# HWADDR="00:0C:29:92:E5:B7"    # comment this out in a VM; on real company hardware it is not needed
# UUID="2d678a8b-6c40-4ebc-8f4e-245ef6b7a969"
ONBOOT="yes"        # bring the NIC up when the machine boots
BOOTPROTO=static    # use a static address
IPADDR=192.168.9.8
NETMASK=255.255.255.0
GATEWAY=192.168.9.2
DNS1=114.114.114.114
}
- - - VMware:
Virtual Network Editor -> NAT settings -> Gateway IP: 192.168.9.2; Subnet IP: 192.168.9.0; Subnet mask: 255.255.255.0; port forwarding: 192.168.9.128 (the host's address)
Connect a host virtual adapter to this network;    # on the Windows host, the virtual adapter is VMnet8 (a virtual NIC)
- - - Linux:
service network restart
Test:
Can Linux reach the internet: ping baidu.com; Linux pings the host: ping 192.168.9.128; the host pings Linux: ping 192.168.9.8
- - - Windows:
# VMnet8 IP address: 192.168.9.128; subnet mask: 255.255.255.0; DNS: same as the gateway, or 114.114.114.114 or 8.8.8.8
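For the ping tests above to work, the Linux IP, the host's VMnet8 IP, and the NAT gateway all have to sit in the same 192.168.9.0/24 subnet. A quick sanity-check sketch (the three addresses are the ones chosen in this guide):

```shell
# Check that all three addresses share the 192.168.9 network prefix.
for ip in 192.168.9.8 192.168.9.128 192.168.9.2; do
    prefix=${ip%.*}                      # strip the last octet (the host part)
    if [ "$prefix" = "192.168.9" ]; then
        echo "$ip ok"
    else
        echo "$ip WRONG SUBNET"
    fi
done
```

If one of the addresses ever prints WRONG SUBNET, the corresponding machine cannot be reached without routing, which NAT mode does not provide between these three endpoints.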
(2) Turn off the VM's firewall (a company machine would leave it on):
- - - Linux:
service iptables stop     # temporary — the firewall is a service, so it starts again after a reboot
chkconfig iptables off    # permanent
chkconfig                 # check iptables per runlevel (3 is the command line, 5 is graphical mode); Windows equivalent: Manage -> Services
(3) Disable SELinux
- - - Linux:
cd /etc/selinux/
vi config
{
SELINUX=disabled
}
(4) DNS hostname resolution
- - - Linux:
vi /etc/hosts
{
192.168.9.11 node01
192.168.9.12 node02
192.168.9.13 node03
192.168.9.14 node04
}
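The four entries above follow a simple pattern (suffix 11..14 maps to node01..node04), so as a typo check they can be generated rather than typed — a sketch using the addresses chosen in this guide:

```shell
# Print the /etc/hosts entries for node01..node04 (192.168.9.11..14).
for i in 1 2 3 4; do
    printf '192.168.9.1%d node0%d\n' "$i" "$i"
done
```

Comparing this output against the hand-typed file would have caught the easy-to-miss comma-for-dot mistake in an IP address.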
(5) Delete the persisted MAC address rule — otherwise, since eth0 is already bound to the old MAC, a clone's NIC comes up as eth1 and everything has to be reconfigured.
- - - Linux:
cd /etc/udev/rules.d/
cat 70-persistent-net.rules      # right-click the VM -> Network Adapter -> Advanced -> MAC address -> 00:0C:29:96:95:65
rm -f 70-persistent-net.rules    # so the machine can be cloned
(6) poweroff (do not boot it again before cloning)
Wrench icon -> Take Snapshot -> basic; basic -> Clone -> from an existing snapshot -> create a linked clone (before cloning the MAC addresses are identical, but they differ once the clones boot)
This machine then serves as the template; clone four machines from it: node01, node02, node03, node04.
- - - Linux - node01:
vi /etc/sysconfig/network-scripts/ifcfg-eth0
{
IPADDR=192.168.9.11
}
vi /etc/sysconfig/network    # takes effect only after a reboot
{
NETWORKING=yes
HOSTNAME=node01
}
vi /etc/hosts
poweroff
- - - Linux - node02:
vi /etc/sysconfig/network-scripts/ifcfg-eth0
{
IPADDR=192.168.9.12
}
vi /etc/sysconfig/network
{
NETWORKING=yes
HOSTNAME=node02
}
vi /etc/hosts
poweroff
For node03 and node04 the address suffixes are 13 and 14, with HOSTNAME changed to match;
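The per-node edit is the same two substitutions every time, so it can be scripted with sed. A sketch that operates on temporary copies of the two files rather than the real ones (the node number is a variable; adjust it per clone):

```shell
# Patch IPADDR and HOSTNAME for a given node number (demonstrated on temp copies).
node=2
tmp=$(mktemp -d)
printf 'IPADDR=192.168.9.11\n' > "$tmp/ifcfg-eth0"               # state inherited from the template
printf 'NETWORKING=yes\nHOSTNAME=node01\n' > "$tmp/network"
sed -i "s/^IPADDR=.*/IPADDR=192.168.9.1$node/" "$tmp/ifcfg-eth0"
sed -i "s/^HOSTNAME=.*/HOSTNAME=node0$node/"   "$tmp/network"
cat "$tmp/ifcfg-eth0" "$tmp/network"
```

On the real clone the paths would be /etc/sysconfig/network-scripts/ifcfg-eth0 and /etc/sysconfig/network, and a reboot is still needed for HOSTNAME to take effect.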
- - - Windows: edit the hosts file:
C:\Windows\System32\drivers\etc\hosts
{
192.168.9.11 node01
192.168.9.12 node02
192.168.9.13 node03
192.168.9.14 node04
}
Reference:
# -> https://hadoop.apache.org/docs/r2.6.5/
# -> https://hadoop.apache.org/docs/r2.6.5/hadoop-project-dist/hadoop-common/SingleCluster.html
cat /etc/hosts
hostname
cat /etc/sysconfig/network
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
# creates the key pair (dsa type): id_dsa and id_dsa.pub
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
# append the public key from the home directory to authorized_keys (i.e. put the public key
# into the target side's authorization file); do not run this more than once — if it was run
# repeatedly, delete authorized_keys and redo it
cat authorized_keys id_dsa.pub    # check that the two match
ssh root@localhost                # log in to yourself
exit
ssh root@node01
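The "do not run this more than once" warning can be avoided entirely with a guarded append: only add the key if that exact line is not already present. A sketch using temp files as stand-ins for ~/.ssh (the key string is a placeholder, not a real key):

```shell
# Idempotent append: only add the public key if it is not already in authorized_keys.
dir=$(mktemp -d)
pub='ssh-dss AAAA-placeholder-key root@node01'    # stand-in for the content of ~/.ssh/id_dsa.pub
auth="$dir/authorized_keys"
touch "$auth"
for run in 1 2; do                                # run twice to show the file stays a single line
    grep -qxF "$pub" "$auth" || printf '%s\n' "$pub" >> "$auth"
done
wc -l < "$auth"
```

grep -qxF matches the whole line as a fixed string, so the append is skipped on every run after the first; the same one-liner works against the real ~/.ssh/authorized_keys.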
Reference:
# https://blog.csdn.net/m0_54849806/article/details/123772220
# https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
Prepare: jdk-8u251-linux-i586.tar.gz (the 32-bit JDK)
mkdir /usr/java
mv /root/Downloads/jdk-8u251-linux-i586.tar.gz /usr/java/
tar -zxvf /usr/java/jdk-8u251-linux-i586.tar.gz -C /usr/java/
# configure the profile file
vi /etc/profile
{
export JAVA_HOME=/usr/java/jdk1.8.0_251/
export PATH=$PATH:$JAVA_HOME/bin    # take the old PATH first, then append to it (:)
}
# verify
source /etc/profile    # or: . /etc/profile
java -version
whereis java
jps
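The PATH line above is just string concatenation: the old value first, then a colon, then the JDK's bin. A sketch of the composition, with a made-up example of an existing PATH:

```shell
# Sketch of the PATH composition done in /etc/profile (values are stand-ins).
JAVA_HOME=/usr/java/jdk1.8.0_251
PATH_BEFORE="/usr/local/bin:/usr/bin:/bin"    # example of an existing PATH
PATH_AFTER="$PATH_BEFORE:$JAVA_HOME/bin"      # old PATH first, then the JDK's bin
echo "$PATH_AFTER"
```

Putting $PATH first means system tools keep winning lookups; putting $JAVA_HOME/bin first instead would let this JDK shadow any java already on the system.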
Prepare: hadoop-2.5.2.tar.gz
mkdir /usr/hadoop/
mv /root/Downloads/hadoop-2.5.2.tar.gz /usr/hadoop/
tar -zxvf /usr/hadoop/hadoop-2.5.2.tar.gz -C /usr/hadoop/
cd /usr/hadoop/hadoop-2.5.2/    # contains sbin and bin
vi /etc/profile
{
export JAVA_HOME=/usr/java/jdk1.8.0_251/
export HADOOP_HOME=/usr/hadoop/hadoop-2.5.2/
export PATH=$PATH:$JAVA_HOME/bin/:$HADOOP_HOME/bin/:$HADOOP_HOME/sbin/
}
. /etc/profile
hadoop    # type hadoop / hdfs / start and press Tab to check completion works
cd /usr/hadoop/hadoop-2.5.2/etc/hadoop/
vi hadoop-env.sh    # if /etc/profile has not been sourced, ${JAVA_HOME} resolves to nothing, so JAVA_HOME must be set here a second time
{
export JAVA_HOME=/usr/java/jdk1.8.0_251/
}
vi mapred-env.sh
{
export JAVA_HOME=/usr/java/jdk1.8.0_251/
}
vi yarn-env.sh
{
export JAVA_HOME=/usr/java/jdk1.8.0_251/
}
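Since the same JAVA_HOME line goes into all three env files, a sed loop can do it in one pass. A sketch on temp copies (the real files live in etc/hadoop/ and ship with an `export JAVA_HOME=${JAVA_HOME}` default line):

```shell
# Set JAVA_HOME in all three *-env.sh files in one pass (demonstrated on temp copies).
dir=$(mktemp -d)
for f in hadoop-env.sh mapred-env.sh yarn-env.sh; do
    printf 'export JAVA_HOME=${JAVA_HOME}\n' > "$dir/$f"    # stand-in for the shipped default
    sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/java/jdk1.8.0_251/|' "$dir/$f"
done
grep -h 'JAVA_HOME' "$dir"/*-env.sh    # all three now carry the hard-coded path
```

Hard-coding the path here is exactly what the comment above explains: these scripts may run before /etc/profile, so ${JAVA_HOME} cannot be trusted to expand.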
vi core-site.xml
{
<configuration>
    <property>
        <name>fs.defaultFS</name>          # decides where the NameNode starts (the NameNode is the entry point of the file system)
        <value>hdfs://node01:9000</value>  # which machine and port the NameNode starts on; sick of seeing localhost, so use our own name: node01
    </property>
</configuration>
}
vi hdfs-site.xml
{
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>    # 1 replica: this is pseudo-distributed with a single node, and replicas cannot land on the same node
    </property>
</configuration>
}
# So far this only configures the NameNode: which node it lives on and where it starts.
# Configure the DataNode
vi slaves
{
node01    # where the DataNode starts (localhost by default; a real cluster lists several nodes here)
}
vi core-site.xml
{
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node01:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/var/sxt/hadoop/local/</value>    # the NameNode's persistence directory, i.e. where persisted metadata files are stored (fine if it does not exist yet — it gets created)
    </property>
</configuration>
}
vi hdfs-site.xml
{
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>    # where the SecondaryNameNode starts
        <value>node01:50090</value>
    </property>
</configuration>
}
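Instead of editing with vi, the same core-site.xml can be written with a heredoc — handy when repeating the setup on several machines. A sketch writing into a temp directory (the real file belongs in etc/hadoop/):

```shell
# Write the core-site.xml above via a heredoc (into a temp dir for this sketch).
dir=$(mktemp -d)
cat > "$dir/core-site.xml" <<'EOF'
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node01:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/var/sxt/hadoop/local/</value>
    </property>
</configuration>
EOF
grep -c '<property>' "$dir/core-site.xml"    # quick structural check: two properties
```

The quoted 'EOF' delimiter stops the shell from expanding anything inside the document, so the XML lands verbatim.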
hdfs namenode -format
# jps shows no change before and after; this creates /var/sxt/hadoop/local/;
# note that it prints a wall of output whether or not it errored — look for:
# Storage directory /var/sxt/hadoop/local/dfs/name has been successfully formatted.
cd /var/sxt/hadoop/local/dfs/name/
cd current/
ll
{
-rw-r--r-- 1 root root 351 Jun 10 05:18 fsimage_0000000000000000000
-rw-r--r-- 1 root root  62 Jun 10 05:18 fsimage_0000000000000000000.md5
-rw-r--r-- 1 root root   2 Jun 10 05:18 seen_txid
-rw-r--r-- 1 root root 205 Jun 10 05:18 VERSION
}
cat VERSION
{
#Fri Jun 10 05:18:49 PDT 2022
namespaceID=1178112766
clusterID=CID-3ba8cea9-4994-4ad6-aff6-b159d0f716d1
cTime=0
storageType=NAME_NODE
blockpoolID=BP-2116590704-192.168.9.11-1654863529163    # block pool
layoutVersion=-57
}
# the NameNode now holds this data
start-dfs.sh
{
Java HotSpot(TM) Client VM warning: You have loaded library /usr/hadoop/hadoop-2.5.2/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
22/06/10 05:28:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [node01]
node01: starting namenode, logging to /usr/hadoop/hadoop-2.5.2/logs/hadoop-root-namenode-node01.out
node01: starting datanode, logging to /usr/hadoop/hadoop-2.5.2/logs/hadoop-root-datanode-node01.out
Starting secondary namenodes [node01]
node01: starting secondarynamenode, logging to /usr/hadoop/hadoop-2.5.2/logs/hadoop-root-secondarynamenode-node01.out
# (the same stack-guard / native-library warnings repeat for each daemon; they are harmless here)
}
jps    # each role runs as its own process (and, as shown next, has its own directory)
{
4829 DataNode
4974 SecondaryNameNode
4718 NameNode
5087 Jps
}
cd /var/sxt/hadoop/local/dfs/
ll    # in a fully distributed setup, the first machine would only show name, the second only data
{
total 12
drwx------ 3 root root 4096 Jun 10 05:28 data
drwxr-xr-x 3 root root 4096 Jun 10 05:28 name    # produced by the format
drwxr-xr-x 3 root root 4096 Jun 10 05:28 namesecondary
}
cd /var/sxt/hadoop/local/dfs/name/current/
cat VERSION
{
#Fri Jun 10 05:18:49 PDT 2022
namespaceID=1178112766
clusterID=CID-3ba8cea9-4994-4ad6-aff6-b159d0f716d1
# when the cluster starts, the DataNode follows the NameNode (i.e. the two share the same clusterID);
# formatting only reformats the NameNode, the DataNode is untouched — so after a second format,
# the DataNode can no longer find its NameNode and its process exits ("no master, so it kills itself")
cTime=0
# if the DataNode disappears after startup, the first thing to suspect is a clusterID mismatch
storageType=NAME_NODE
# when is the VERSION file on the DataNode created? After the NameNode is formatted, on the
# DataNode's first startup and handshake with the NameNode — the NameNode grants it the ID
blockpoolID=BP-2116590704-192.168.9.11-1654863529163
layoutVersion=-57
}
cd /var/sxt/hadoop/local/dfs/data/current/
cat VERSION
{
#Fri Jun 10 05:28:54 PDT 2022
storageID=DS-6f5b9506-8a9c-4daa-99b9-5acdb21cf00d
clusterID=CID-3ba8cea9-4994-4ad6-aff6-b159d0f716d1    # same clusterID as the NameNode — the DataNode follows it
cTime=0
datanodeUuid=fa96bb92-0d4a-488c-87a9-649a1481f49d
storageType=DATA_NODE
layoutVersion=-55
}
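The "DataNode disappears after a reformat" failure described above comes down to the two VERSION files disagreeing on clusterID, so it can be diagnosed by comparing them. A sketch on temp stand-ins for .../dfs/name/current/VERSION and .../dfs/data/current/VERSION:

```shell
# Diagnose a clusterID mismatch between the NameNode's and DataNode's VERSION files.
dir=$(mktemp -d)
printf 'clusterID=CID-3ba8cea9\nstorageType=NAME_NODE\n' > "$dir/name_VERSION"
printf 'clusterID=CID-3ba8cea9\nstorageType=DATA_NODE\n' > "$dir/data_VERSION"
nn=$(grep '^clusterID=' "$dir/name_VERSION")
dn=$(grep '^clusterID=' "$dir/data_VERSION")
if [ "$nn" = "$dn" ]; then
    echo "clusterIDs match"
else
    echo "mismatch: the DataNode will exit on startup"
fi
```

On a real machine, point the two grep commands at /var/sxt/hadoop/local/dfs/name/current/VERSION and /var/sxt/hadoop/local/dfs/data/current/VERSION.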
http://node01:50070/    # in a browser; port 9000 is for RPC between the daemons (heartbeats, data transfer), not the web port
{
Overview 'node01:9000' (active)
Live Nodes 1 (Decommissioned: 0)
Utilities -> Browse the file system -> /    # Hadoop's root directory
}
hdfs         # see what arguments it can take
hdfs dfs     # usage hints; hadoop fs == hdfs dfs
hdfs dfs -mkdir -p /user/root
# create the user directory for root; check it under Utilities -> Browse the file system
# (/user/root is the HDFS equivalent of a Linux home directory)
cd /usr/hadoop/
hdfs dfs -put ./hadoop-2.5.2.tar.gz /user/root
# upload a file; again visible under Utilities -> Browse the file system
# columns: Permission  Owner  Group  Size (actual size)  Replication  Block Size  Name
# click the file to see that it was split into two blocks
for i in `seq 100000`; do echo "hello world $i" >> test.txt; done
ll -h ./
hdfs dfs -D dfs.blocksize=1048576 -put ./test.txt /user/root    # 1 MB block size
cd /var/sxt/hadoop/local/dfs/data/current/BP-2116590704-192.168.9.11-1654863529163/current/finalized
ll
{
-rw-r--r-- 1 root root 134217728 Jun 10 06:00 blk_1073741825             # tarball, data
-rw-r--r-- 1 root root   1048583 Jun 10 06:00 blk_1073741825_1001.meta   # tarball, metadata
-rw-r--r-- 1 root root  12979764 Jun 10 06:00 blk_1073741826             # tarball, data
-rw-r--r-- 1 root root    101415 Jun 10 06:00 blk_1073741826_1002.meta   # tarball, metadata
-rw-r--r-- 1 root root   1048576 Jun 10 06:15 blk_1073741827             # test file, data
-rw-r--r-- 1 root root      8199 Jun 10 06:15 blk_1073741827_1003.meta   # test file, metadata
-rw-r--r-- 1 root root    740319 Jun 10 06:15 blk_1073741828             # test file, data
-rw-r--r-- 1 root root      5791 Jun 10 06:15 blk_1073741828_1004.meta   # test file, metadata
}
stop-dfs.sh    # shut down
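The split of test.txt matches simple arithmetic: the 100,000 "hello world $i" lines come to 1,788,895 bytes, and at a 1,048,576-byte block size that is one full block plus a 740,319-byte tail — exactly the sizes of blk_1073741827 and blk_1073741828 above. A sketch that reproduces the numbers without HDFS:

```shell
# Reproduce test.txt's size and its split at a 1 MiB block size (pure arithmetic).
size=$(for i in $(seq 100000); do echo "hello world $i"; done | wc -c)
bs=1048576
full=$((size / bs))    # number of full blocks
tail=$((size % bs))    # size of the last, partial block
echo "size=$size full_blocks=$full tail=$tail"
```

HDFS blocks are cut by byte offset, so a line can straddle two blocks; it is MapReduce's input splits, not the block boundaries, that later restore line integrity.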