After our hadoop cluster has been running for a while, the existing data nodes may no longer have enough capacity for our storage needs, and new data nodes have to be added to the cluster. At that point we need to scale the hdfs cluster dynamically (node commissioning).
An hdfs cluster has both a blacklist and a whitelist.
Blacklist: the file contains the list of hosts that are not allowed to connect to the namenode. The full path of the file must be given. If the value is empty, no host is excluded.
Whitelist: the file contains the list of hosts that are allowed to connect to the namenode. The full path of the file must be given. If the value is empty, all hosts are allowed.
Blacklist and whitelist configuration
vim hdfs-site.xml
<!-- Configure the blacklist; hosts on this list can be decommissioned from the cluster -->
<property>
<name>dfs.hosts.exclude</name>
<value>/opt/bigdata/hadoop-3.3.4/etc/hadoop/blacklist.hosts</value>
</property>
<!-- Configure the whitelist; only hosts on this list may connect to the namenode -->
<property>
<name>dfs.hosts</name>
<value>/opt/bigdata/hadoop-3.3.4/etc/hadoop/whitelist.hosts</value>
</property>
Note: the first time the blacklist/whitelist is configured, the cluster has to be restarted for it to take effect; after that, whenever the files are modified, running the hdfs dfsadmin -refreshNodes command is enough.
Now let's add a new node, hadoop04, to the cluster. First set the new machine's hostname and register all cluster hosts in /etc/hosts:
[root@appbasic ~]# vim /etc/hostname
[root@appbasic ~]# cat /etc/hostname
hadoop04
[root@appbasic ~]# vim /etc/hosts
[root@appbasic ~]# cat /etc/hosts
192.168.121.140 hadoop01
192.168.121.141 hadoop02
192.168.121.142 hadoop03
192.168.121.143 hadoop04
[root@appbasic ~]#
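The edit to /etc/hostname only takes effect after a reboot; as an alternative, hostnamectl can apply the new hostname right away and updates /etc/hostname at the same time:
[root@appbasic ~]# hostnamectl set-hostname hadoop04
# log out and back in so the shell prompt picks up the new name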
The clocks of all machines in the hadoop cluster should be kept in sync:
[root@hadoop04 ~]# ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
[root@hadoop04 ~]# yum install ntp
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
base | 3.6 kB 00:00:00
extras | 2.9 kB 00:00:00
updates | 2.9 kB 00:00:00
(1/2): extras/7/aarch64/primary_db | 252 kB 00:00:00
(2/2): updates/7/aarch64/primary_db | 3.5 MB 00:00:03
Package ntp-4.2.6p5-29.el7.centos.2.aarch64 already installed and latest version
Nothing to do
[root@hadoop04 ~]# systemctl enable ntpd
[root@hadoop04 ~]# service ntpd restart
Redirecting to /bin/systemctl restart ntpd.service
[root@hadoop04 ~]# ntpdate asia.pool.ntp.org
29 Mar 21:42:52 ntpdate[1697]: the NTP socket is in use, exiting
[root@hadoop04 ~]# /sbin/hwclock --systohc
[root@hadoop04 ~]# timedatectl
Local time: Wed 2023-03-29 21:43:03 CST
Universal time: Wed 2023-03-29 13:43:03 UTC
RTC time: Wed 2023-03-29 13:43:03
Time zone: Asia/Shanghai (CST, +0800)
NTP enabled: yes
NTP synchronized: no
RTC in local TZ: no
DST active: n/a
[root@hadoop04 ~]# timedatectl set-ntp true
[root@hadoop04 ~]#
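It can take a few minutes for ntpd to pick a server and synchronize; the "NTP synchronized" field above then flips from no to yes. The peers ntpd is talking to can be checked with:
[root@hadoop04 ~]# ntpq -p       # lists the NTP servers being polled and their offsets
[root@hadoop04 ~]# timedatectl   # shows "NTP synchronized: yes" once the clock is in sync
The firewall on the new node also has to be stopped and disabled: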
[root@hadoop04 ~]# systemctl stop firewalld
[root@hadoop04 ~]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@hadoop04 ~]#
[root@hadoop04 ~]# useradd hadoopdeploy
[root@hadoop04 ~]# passwd hadoopdeploy
Changing password for user hadoopdeploy.
New password:
BAD PASSWORD: The password contains the user name in some form
Retype new password:
passwd: all authentication tokens updated successfully.
[root@hadoop04 ~]# vim /etc/sudoers
[root@hadoop04 ~]# cat /etc/sudoers | grep hadoopdeploy -C 3
## Same thing without a password
# %wheel ALL=(ALL) NOPASSWD: ALL
hadoopdeploy ALL=(ALL) NOPASSWD: ALL
## Allows members of the users group to mount and unmount the
## cdrom as root
[root@hadoop04 ~]#
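An optional quick check that the sudoers entry works is to run a command through sudo as the new user; with NOPASSWD it should succeed without asking for a password:
[root@hadoop04 ~]# su - hadoopdeploy -c 'sudo whoami'
# should print: root
Next, distribute the updated /etc/hosts to the existing nodes: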
[root@hadoop04 ~]# scp /etc/hosts root@hadoop01:/etc/hosts
[root@hadoop04 ~]# scp /etc/hosts root@hadoop02:/etc/hosts
[root@hadoop04 ~]# scp /etc/hosts root@hadoop03:/etc/hosts
Here we set up passwordless SSH login from the namenode (hadoop01) to hadoop04.
[hadoopdeploy@hadoop01 ~]$ ssh-copy-id hadoop04
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/hadoopdeploy/.ssh/id_rsa.pub"
The authenticity of host 'hadoop04 (192.168.121.143)' can't be established.
ECDSA key fingerprint is SHA256:4GL0zHVCdSl3czA0wqcuLT60lUljyEq3DqwPFxNwYsE.
ECDSA key fingerprint is MD5:3e:42:a6:50:0d:fb:f0:41:a8:0d:fb:cc:fd:20:2c:c8.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
hadoopdeploy@hadoop04's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'hadoop04'"
and check to make sure that only the key(s) you wanted were added.
[hadoopdeploy@hadoop01 ~]$
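A quick way to verify the passwordless login is to run a remote command from hadoop01; it should return without prompting for a password:
[hadoopdeploy@hadoop01 ~]$ ssh hadoop04 hostname
# should print hadoop04 without asking for a password
Next, create the installation directory on hadoop04 and copy the hadoop distribution over from hadoop01: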
[root@hadoop04 ~]# sudo mkdir /opt/bigdata
mkdir: cannot create directory ‘/opt/bigdata’: No such file or directory
[root@hadoop04 ~]# sudo mkdir -p /opt/bigdata
[root@hadoop04 ~]# sudo chown -R hadoopdeploy:hadoopdeploy /opt/bigdata/
[root@hadoop04 ~]# su - hadoopdeploy
Last login: Wed Mar 29 22:19:54 CST 2023 on pts/0
[hadoopdeploy@hadoop04 ~]$ scp -r hadoopdeploy@hadoop01:/opt/bigdata/hadoop-3.3.4/ /opt/bigdata/
[hadoopdeploy@hadoop04 hadoop]$ rm -rvf /opt/bigdata/hadoop-3.3.4/data/* /opt/bigdata/hadoop-3.3.4/logs/*
Note: pay attention to which user created the directories and which user runs the scp command.
Note: if the copied hadoop-3.3.4 directory still contains the data directory we configured earlier, it must be deleted, otherwise there will be problems when this node starts. Delete the log directory as well.
The workers file is configured so that the whole cluster can be started with a single command. Add the new node to it:
[hadoopdeploy@hadoop04 hadoop]$ vim /opt/bigdata/hadoop-3.3.4/etc/hadoop/workers
[hadoopdeploy@hadoop04 hadoop]$ cat /opt/bigdata/hadoop-3.3.4/etc/hadoop/workers
hadoop01
hadoop02
hadoop03
hadoop04
[hadoopdeploy@hadoop04 hadoop]$
Note: distribute this workers file to every machine in the cluster.
[hadoopdeploy@hadoop04 hadoop]$ scp /opt/bigdata/hadoop-3.3.4/etc/hadoop/workers hadoopdeploy@hadoop01:/opt/bigdata/hadoop-3.3.4/etc/hadoop/workers
[hadoopdeploy@hadoop04 hadoop]$ scp /opt/bigdata/hadoop-3.3.4/etc/hadoop/workers hadoopdeploy@hadoop02:/opt/bigdata/hadoop-3.3.4/etc/hadoop/workers
[hadoopdeploy@hadoop04 hadoop]$ scp /opt/bigdata/hadoop-3.3.4/etc/hadoop/workers hadoopdeploy@hadoop03:/opt/bigdata/hadoop-3.3.4/etc/hadoop/workers
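With hadoop04 listed in workers, the one-click scripts run on the namenode will also manage the new DataNode. A minimal sketch, assuming HADOOP_HOME points at /opt/bigdata/hadoop-3.3.4 and passwordless ssh is configured from hadoop01 to every worker:
[hadoopdeploy@hadoop01 ~]$ $HADOOP_HOME/sbin/stop-dfs.sh    # stops the NameNode and all DataNodes listed in workers
[hadoopdeploy@hadoop01 ~]$ $HADOOP_HOME/sbin/start-dfs.sh   # starts them again, now including hadoop04
Here we do not restart the whole cluster; we only start the DataNode daemon on the new node: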
[hadoopdeploy@hadoop04 hadoop]$ source /etc/profile
[hadoopdeploy@hadoop04 logs]$ hdfs --daemon start datanode
[hadoopdeploy@hadoop04 logs]$ jps
2278 DataNode
2349 Jps
[hadoopdeploy@hadoop04 logs]$
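To confirm that the new DataNode has registered with the NameNode, hdfs dfsadmin -report can be run from any node; hadoop04 should show up among the live datanodes:
[hadoopdeploy@hadoop04 logs]$ hdfs dfsadmin -report -live | grep -E 'Live datanodes|Hostname'
# hadoop04 should appear in the list of live datanodes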
Note: if we do not want just any machine to be able to join our cluster, we control that with the whitelist.
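A minimal sketch of that, assuming dfs.hosts points at whitelist.hosts as configured above: list every allowed host in the file and refresh the NameNode (if the whitelist is being configured for the first time, restart the cluster instead, as noted earlier). DataNodes that are not in the whitelist will no longer be allowed to connect to the namenode.
[hadoopdeploy@hadoop01 hadoop]$ cat /opt/bigdata/hadoop-3.3.4/etc/hadoop/whitelist.hosts
hadoop01
hadoop02
hadoop03
hadoop04
[hadoopdeploy@hadoop01 hadoop]$ hdfs dfsadmin -refreshNodes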
The newly added node has plenty of free disk space, so we can balance data from the other nodes onto it.
# Set the data transfer bandwidth used by the balancer
[hadoopdeploy@hadoop04 logs]$ hdfs dfsadmin -setBalancerBandwidth 10485760
Balancer bandwidth is set to 10485760
# Run the balancer
[hadoopdeploy@hadoop04 logs]$ hdfs balancer -policy datanode -threshold 5
Note: this only needs to be executed on the NameNode or the ResourceManager.
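The -threshold 5 above means the balancer keeps each DataNode's disk usage within 5 percentage points of the average usage of the cluster; for example, if the cluster as a whole is 60% full, every DataNode should end up between roughly 55% and 65% used. Per-node usage before and after balancing can be checked with a sketch like:
[hadoopdeploy@hadoop01 hadoop]$ hdfs dfsadmin -report | grep -E 'Hostname|DFS Used%'
Once the data is balanced, the reverse operation, taking a node out of the cluster (decommissioning), is also handled with the blacklist in hdfs-site.xml: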
vim hdfs-site.xml
<!-- Configure the blacklist; hosts on this list can be decommissioned from the cluster -->
<property>
<name>dfs.hosts.exclude</name>
<value>/opt/bigdata/hadoop-3.3.4/etc/hadoop/blacklist.hosts</value>
</property>
Note: if this configuration has never been set before, the cluster must be restarted for it to take effect; if it was configured earlier, just run the hdfs dfsadmin -refreshNodes command on the NameNode or ResourceManager node.
Edit the blacklist.hosts file and add the node that needs to be taken offline (do this on the NameNode or ResourceManager):
[hadoopdeploy@hadoop01 ~]$ cd /opt/bigdata/hadoop-3.3.4/etc/hadoop/
[hadoopdeploy@hadoop01 hadoop]$ vim blacklist.hosts
[hadoopdeploy@hadoop01 hadoop]$ cat blacklist.hosts
hadoop04
[hadoopdeploy@hadoop01 hadoop]$ hdfs dfsadmin -refreshNodes
Refresh nodes successful
[hadoopdeploy@hadoop01 hadoop]$
In the hdfs cluster we can now see that hadoop04 has been taken offline (decommissioned). After the node is removed, consider balancing the data in the cluster again.
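The decommission status can also be checked from the command line; once the node's blocks have been re-replicated elsewhere it is reported as Decommissioned, and a new balancer run spreads the data over the remaining nodes. A sketch:
[hadoopdeploy@hadoop01 hadoop]$ hdfs dfsadmin -report | grep -E 'Hostname|Decommission Status'
[hadoopdeploy@hadoop01 hadoop]$ hdfs balancer -policy datanode -threshold 5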
Note: our cluster now has 4 machines; if the cluster's replication factor is set to 4, the node cannot be decommissioned, and the replication factor must first be lowered to less than 4.
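The default replication factor for new files comes from dfs.replication in hdfs-site.xml; files that already exist keep their old factor, so it can be lowered explicitly with setrep. A sketch, assuming we drop everything under / to 3 replicas:
[hadoopdeploy@hadoop01 hadoop]$ hdfs dfs -setrep -w 3 /
# -w waits until the replication change has completed; on a large cluster this can take a while
Finally, the DataNode process on the decommissioned node can be stopped: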
[hadoopdeploy@hadoop04 logs]$ hdfs --daemon stop datanode
[hadoopdeploy@hadoop04 logs]$
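Optionally, once the DataNode is stopped, hadoop04 can also be removed from the workers file and from the whitelist so it is not started or admitted again, followed by another refresh:
[hadoopdeploy@hadoop01 hadoop]$ vim /opt/bigdata/hadoop-3.3.4/etc/hadoop/workers           # delete the hadoop04 line
[hadoopdeploy@hadoop01 hadoop]$ vim /opt/bigdata/hadoop-3.3.4/etc/hadoop/whitelist.hosts   # delete the hadoop04 line
[hadoopdeploy@hadoop01 hadoop]$ hdfs dfsadmin -refreshNodes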
This article is from cnblogs, author: huan1993. When reposting, please cite the original link: https://www.cnblogs.com/huan1993/p/17286012.html