Basic syntax

hadoop fs <command>   or   hdfs dfs <command>

When the target filesystem is HDFS, the two forms behave identically; `hadoop fs` is the generic client that works with any filesystem Hadoop supports, while `hdfs dfs` is specific to HDFS.
Command list

Running `hadoop fs` with no arguments prints the full list of supported commands.
Hands-on with common commands
- Start the Hadoop cluster (run from the Hadoop installation directory, e.g. hadoop-2.7.2)
$ sbin/start-dfs.sh
$ sbin/start-yarn.sh

- -help: print usage information for a command
$ hadoop fs -help rm

- -ls: list directory contents
$ hadoop fs -ls /

- -mkdir: create a directory on HDFS
$ hadoop fs -mkdir -p /redhat/hadoop

- -moveFromLocal: cut a file from the local filesystem and paste it into HDFS
$ touch hadoop.txt
$ hadoop fs -moveFromLocal ./hadoop.txt /redhat/hadoop

- -cat: display a file's contents
$ hadoop fs -cat /redhat/hadoop/hadoop.txt

- -appendToFile: append a local file to the end of an existing HDFS file
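Before the HDFS example below, the append semantics can be sketched with a plain local-filesystem analogy: like shell `>>` redirection, `-appendToFile` adds bytes to the end of the target file without rewriting it. (The `/tmp` path here is a throwaway illustration, not part of the original example.)

```shell
# Local analogy only -- no HDFS involved; /tmp/append_demo.txt is a throwaway name.
printf 'hadoop\n' > /tmp/append_demo.txt            # create the target file
printf 'zhou zhi xiong\n' >> /tmp/append_demo.txt   # '>>' appends, like -appendToFile
cat /tmp/append_demo.txt                            # prints "hadoop" then "zhou zhi xiong"
```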
$ touch zhixiong.txt
$ vim zhixiong.txt
(enter: zhou zhi xiong)
$ hadoop fs -appendToFile zhixiong.txt /redhat/hadoop/hadoop.txt

- -chgrp / -chmod / -chown: change a file's group, permissions, or owner; same usage as in Linux
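Because the permission model mirrors Linux, the octal modes mean the same thing as in `chmod`. A quick local sketch (the temp-file path is illustrative, and `stat -c` assumes GNU coreutils):

```shell
touch /tmp/chmod_demo.txt             # throwaway local file standing in for the HDFS path
chmod 666 /tmp/chmod_demo.txt         # 666 = rw-rw-rw-: read/write for owner, group, others
stat -c '%a' /tmp/chmod_demo.txt      # prints 666 (GNU coreutils stat)
```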
$ hadoop fs -chmod 666 /redhat/hadoop/hadoop.txt
$ hadoop fs -chown ubuntu:ubuntu /redhat/hadoop/hadoop.txt

- -copyFromLocal: copy a file from the local filesystem to an HDFS path
$ hadoop fs -copyFromLocal README.txt /redhat/hadoop/

- -copyToLocal: copy a file from HDFS to the local filesystem
$ hadoop fs -copyToLocal /redhat/hadoop/hadoop.txt ./

- -cp: copy from one HDFS path to another
$ hadoop fs -cp /redhat/hadoop/hadoop.txt /redhat/linux/linux.txt

- -mv: move a file within HDFS
$ hadoop fs -mv /redhat/hadoop/hadoop.txt /redhat/linux

- -get: equivalent to -copyToLocal
$ hadoop fs -get /redhat/hadoop/hadoop.txt ./

- -getmerge: download and merge multiple files; for example, the HDFS directory /usr/redhat/test contains files test.1, test.2, test.3, …
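The merge behavior can be sketched with a local-filesystem analogy before running the HDFS command: the matched files are concatenated in name order into a single local file, much like `cat`. (The `/tmp` directory and file contents below are invented for illustration.)

```shell
# Local analogy of -getmerge -- illustrative paths only, no HDFS involved.
mkdir -p /tmp/getmerge_demo
printf 'one\n'   > /tmp/getmerge_demo/test.1
printf 'two\n'   > /tmp/getmerge_demo/test.2
printf 'three\n' > /tmp/getmerge_demo/test.3
# Like -getmerge, concatenate the parts (the glob expands in name order) into one file.
cat /tmp/getmerge_demo/test.* > /tmp/getmerge_demo/test.txt
cat /tmp/getmerge_demo/test.txt    # prints: one, two, three (one per line)
```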
$ hadoop fs -getmerge /usr/redhat/test/* ./test.txt

- -put: equivalent to -copyFromLocal
$ hadoop fs -put ./test.txt /usr/redhat/test/

- -tail: display the last kilobyte of a file
$ hadoop fs -tail /usr/redhat/test.txt

- -rm: delete a file or directory
$ hadoop fs -rm /usr/redhat/test/test.txt
$ hadoop fs -rm -r -skipTrash /redhat/linux/

- -rmdir: delete an empty directory
$ hadoop fs -rmdir /test

- -du: report the size of a directory
$ hadoop fs -du -s -h /usr/redhat/test
2.7 K  /usr/redhat/test

- -setrep: set the replication factor of a file in HDFS
$ hadoop fs -setrep 10 /usr/redhat/test.txt

The replication factor set here is only recorded in the NameNode's metadata; whether that many replicas actually exist depends on the number of DataNodes. With only 3 machines there can be at most 3 replicas; the count only reaches 10 once the cluster grows to 10 nodes.
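The cap described above amounts to simple arithmetic (variable names and values are illustrative): the number of replicas that can actually materialize is the smaller of the requested factor and the number of live DataNodes.

```shell
requested=10    # replication factor recorded in NameNode metadata by -setrep 10
datanodes=3     # DataNodes currently in the cluster
# Effective replica count is capped by the number of DataNodes.
effective=$(( requested < datanodes ? requested : datanodes ))
echo "$effective"    # prints 3: only 3 replicas can exist until more nodes join
```

If needed, the recorded factor for a file can be checked with the `-stat` format specifier for replication, e.g. `hadoop fs -stat %r /usr/redhat/test.txt`.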