Install VMware Workstation Pro and three Linux virtual machines

I. No network access after a CentOS 7 minimal install:

First, make sure the VM's network adapter is set to NAT.

After selecting it, run service network restart

cd /etc/sysconfig/network-scripts
vi /etc/sysconfig/network-scripts/ifcfg-ens33  # only vi is available at this point, not vim
BOOTPROTO=static
ONBOOT=yes
DNS1=192.168.11.2
IPADDR=192.168.11.135
NETMASK=255.255.255.0 # subnet mask
GATEWAY=192.168.11.2 # default gateway

II. Configuring a static IP on CentOS 7

1. Enter the network-scripts directory and find the ifcfg-xx file in it

cd /etc/sysconfig/network-scripts # xx is the interface suffix of your own ifcfg-xx file

vi /etc/sysconfig/network-scripts/ifcfg-xx

2. Modify the following settings

BOOTPROTO=static # change dhcp to static
ONBOOT=yes # enable this interface at boot
IPADDR=192.168.13.131 # static IP
GATEWAY=192.168.13.2 # default gateway
NETMASK=255.255.255.0 # subnet mask
DNS1=192.168.13.2 # DNS server

3. After the edits, ifcfg-ens33 looks like this

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=9056d538-a4b5-4f76-b69e-cce7b5cf442b
DEVICE=ens33
ONBOOT=yes
DNS1=192.168.11.2
IPADDR=192.168.11.132
NETMASK=255.255.255.0
GATEWAY=192.168.11.2

4. Restart the network service

service network restart

5. Check the result with ifconfig or ip addr
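The manual edit above can also be scripted. A minimal sketch that writes the same fragment to a scratch file for review (the real target is /etc/sysconfig/network-scripts/ifcfg-ens33; the addresses are the example values from step 2):

```shell
# Write the static-IP fragment from step 2 to a scratch file first,
# so it can be reviewed before overwriting the real ifcfg-ens33.
out=$(mktemp)
cat > "$out" <<'EOF'
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.13.131
GATEWAY=192.168.13.2
NETMASK=255.255.255.0
DNS1=192.168.13.2
EOF
# Sanity-check the fragment before copying it into place.
grep -q '^BOOTPROTO=static' "$out" && grep -q '^ONBOOT=yes' "$out" \
  && echo "fragment ok: $out"
```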

III. Permanently disabling the firewall on CentOS 7 (basic firewall usage)  # opens the communication ports between the nodes

https://blog.csdn.net/ViJayThresh/article/details/81284007

IV. Deployment roles of the components:
master acts as the client
slave1 runs the Hive server
slave2 runs the MySQL server

Building the distributed big-data base environment
V. Actual commands and configuration files:

systemctl stop firewalld.service
vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=master
vi /etc/hostname

reboot
Do not run the two commands at the same time; that easily causes errors.
vi /etc/hosts
192.168.11.135 master master.root
192.168.11.136 slave1 slave1.root
192.168.11.137 slave2 slave2.root
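Those host entries can be appended idempotently, so re-running the setup does not duplicate them. A sketch against a scratch file (IPs and names are the document's; point HOSTS at /etc/hosts on a real node):

```shell
# Append the cluster host entries only if they are not already present
# (run against a scratch copy here; set HOSTS=/etc/hosts for real use).
HOSTS=$(mktemp)
for entry in \
  "192.168.11.135 master master.root" \
  "192.168.11.136 slave1 slave1.root" \
  "192.168.11.137 slave2 slave2.root"
do
  grep -qF "$entry" "$HOSTS" || echo "$entry" >> "$HOSTS"
done
# Still exactly three lines even if this loop runs again.
wc -l < "$HOSTS"
```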

systemctl stop firewalld
#systemctl status firewalld


tzselect  # answer 9, 1, 1 at the interactive prompts to set the time zone
master:
vi /etc/ntp.conf
server 127.127.1.0 # local clock
fudge 127.127.1.0 stratum 10

/bin/systemctl restart ntpd.service

#slave1/2: ntpdate master
ntpdate 192.168.11.135
#date -s 10:00
After this configuration, only master can log in to slave1 without a password;
slave1 logging in to master still requires one.

Each node generates its own public/private key pair:
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa   (on all three machines)
-P is uppercase
cd ~/.ssh/

On master only (do not run the two lines at the same time):
cat id_dsa.pub >> authorized_keys
scp id_dsa.pub master_dsa.pub


On the two slave nodes:
scp master:~/.ssh/id_dsa.pub ./master_dsa.pub
cat master_dsa.pub >> authorized_keys
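The key-exchange steps above can be rehearsed locally in a scratch directory before touching ~/.ssh. A sketch using RSA, since recent OpenSSH releases reject new DSA keys (the guide's cluster used -t dsa):

```shell
# Rehearse the passwordless-login setup in a scratch directory:
# generate a key pair, then append the public key to authorized_keys,
# exactly as done on master and pulled to the slaves with scp.
dir=$(mktemp -d)
ssh-keygen -t rsa -P '' -f "$dir/id_rsa" -q   # the guide uses -t dsa
cat "$dir/id_rsa.pub" >> "$dir/authorized_keys"
chmod 600 "$dir/authorized_keys"   # sshd ignores overly permissive files
ls "$dir"
```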
mkdir -p /usr/java
tar -zxvf /opt/soft/jdk-8u171-linux-x64.tar.gz -C /usr/java/
vim /etc/profile
# It's NOT a good idea to change this file unless you know what you
# are doing. It's much better to create a custom.sh shell script in
# /etc/profile.d/ to make custom changes to your environment, as this
# will prevent the need for merging in future updates.


#java

export JAVA_HOME=/usr/java/jdk1.8.0_171

export CLASSPATH=$JAVA_HOME/lib/

export PATH=$PATH:$JAVA_HOME/bin

export PATH JAVA_HOME CLASSPATH





#set zookeeper environment

export ZOOKEEPER_HOME=/usr/zookeeper/zookeeper-3.4.10

PATH=$PATH:$ZOOKEEPER_HOME/bin





#hadoop

export HADOOP_HOME=/usr/hadoop/hadoop-2.7.3

export CLASSPATH=$CLASSPATH:$HADOOP_HOME/lib

export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin

#set hbase environment

export HBASE_HOME=/usr/hbase/hbase-1.2.4

export PATH=$PATH:$HBASE_HOME/bin



#set hive

export HIVE_HOME=/usr/hive/apache-hive-2.1.1-bin

export PATH=$PATH:$HIVE_HOME/bin






source /etc/profile
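A quick way to confirm the exports took effect after `source /etc/profile`, sketched here against a scratch fragment holding only the Java lines (the same checks apply after sourcing the real file):

```shell
# Source a scratch copy of the Java section and verify PATH picked it up.
frag=$(mktemp)
cat > "$frag" <<'EOF'
export JAVA_HOME=/usr/java/jdk1.8.0_171
export CLASSPATH=$JAVA_HOME/lib/
export PATH=$PATH:$JAVA_HOME/bin
EOF
. "$frag"
case ":$PATH:" in
  *":$JAVA_HOME/bin:"*) echo "JAVA_HOME is on PATH" ;;
  *) echo "PATH not updated" ;;
esac
```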









mkdir -p /usr/hadoop
tar -zxvf /opt/soft/hadoop-2.7.3.tar.gz -C /usr/hadoop/  # tarball name inferred from the install path
vim /usr/hadoop/hadoop-2.7.3/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_171



vi /usr/hadoop/hadoop-2.7.3/etc/hadoop/core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>

                <name>fs.default.name</name>

                <value>hdfs://master:9000</value>

</property>

<property>

                <name>hadoop.tmp.dir</name>

                <value>/usr/hadoop/hadoop-2.7.3/hdfs/tmp</value>

<description>A base for other temporary directories.</description>

</property>

<property>

                <name>io.file.buffer.size</name>

                <value>131072</value>

</property>

<property>

                <name>fs.checkpoint.period</name>

                <value>60</value>

</property>

<property>

                <name>fs.checkpoint.size</name>

                <value>67108864</value>
</property>


</configuration>






vim /usr/hadoop/hadoop-2.7.3/etc/hadoop/yarn-site.xml
<configuration>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:18040</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:18030</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:18088</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:18025</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:18141</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.auxservices.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<!-- Site specific YARN configuration properties -->
</configuration>




vi /usr/hadoop/hadoop-2.7.3/etc/hadoop/hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/hadoop/hadoop-2.7.3/hdfs/name</value>
<final>true</final>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/hadoop/hadoop-2.7.3/hdfs/data</value>
<final>true</final>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:9001</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>




cd /usr/hadoop/hadoop-2.7.3/etc/hadoop
cp mapred-site.xml.template mapred-site.xml
vi /usr/hadoop/hadoop-2.7.3/etc/hadoop/mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>

Edit the slaves and master files

scp -r /usr/hadoop [email protected]:/usr/
scp -r /usr/hadoop [email protected]:/usr/

master: hadoop namenode -format
master: start-all.sh
slaves: jps
Visit the master's IP at port 50070 (50070 is the HDFS web management page)
#hadoop fs -ls / 
hadoop fs -mkdir /





mkdir -p /usr/zookeeper
tar -zxvf /opt/soft/zookeeper-3.4.10.tar.gz -C /usr/zookeeper/
cd  /usr/zookeeper/zookeeper-3.4.10/conf

vim zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
dataDir=/usr/zookeeper/zookeeper-3.4.10/zkdata
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.

# the port at which the clients will connect
clientPort=2181
dataLogDir=/usr/zookeeper/zookeeper-3.4.10/zkdatalog
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888
# the maximum number of client connections.
# increase this if you need to handle more clients
(This did not succeed at first; the reason was unclear.)

cd /usr/zookeeper/zookeeper-3.4.10/
mkdir zkdata
mkdir zkdatalog
cd zkdata
vim myid
scp -r /usr/zookeeper [email protected]:/usr/
scp -r /usr/zookeeper [email protected]:/usr/
Change myid on each node to match its server.N number in zoo.cfg (master=1, slave1=2, slave2=3)
zkServer.sh start
zkServer.sh status
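The per-node myid edit is easy to get wrong after the scp. A sketch that derives the id from the hostname, matching the server.N lines in zoo.cfg (written to a scratch directory here; the real path is the zkdata directory above):

```shell
# Each ZooKeeper node needs a myid file whose number matches its
# server.N line in zoo.cfg (server.1=master, server.2=slave1, server.3=slave2).
zkdata=$(mktemp -d)   # stands in for /usr/zookeeper/zookeeper-3.4.10/zkdata
host=slave1           # on a real node: host=$(hostname)
case "$host" in
  master) echo 1 > "$zkdata/myid" ;;
  slave1) echo 2 > "$zkdata/myid" ;;
  slave2) echo 3 > "$zkdata/myid" ;;
esac
cat "$zkdata/myid"    # prints 2 for slave1
```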





mkdir -p /usr/hbase
tar -zxvf /opt/soft/hbase-1.2.4-bin.tar.gz -C /usr/hbase
vi /usr/hbase/hbase-1.2.4/conf/hbase-env.sh
export HBASE_MANAGES_ZK=false
export JAVA_HOME=/usr/java/jdk1.8.0_171
export HBASE_CLASSPATH=/usr/hadoop/hadoop-2.7.3/etc/hadoop

vi /usr/hbase/hbase-1.2.4/conf/hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://master:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.master</name>
<value>hdfs://master:6000</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>master,slave1,slave2</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/usr/zookeeper/zookeeper-3.4.10</value>
</property>
</configuration>

cd /usr/hbase/hbase-1.2.4/conf
vim regionservers
slave1
slave2
Copy Hadoop's hdfs-site.xml and core-site.xml into HBase's conf folder:
cp /usr/hadoop/hadoop-2.7.3/etc/hadoop/hdfs-site.xml /usr/hbase/hbase-1.2.4/conf
cp /usr/hadoop/hadoop-2.7.3/etc/hadoop/core-site.xml /usr/hbase/hbase-1.2.4/conf

scp -r /usr/hbase [email protected]:/usr/
scp -r /usr/hbase [email protected]:/usr/
slave2:
systemctl daemon-reload
# start the service: systemctl start mysqld
Enable at boot: systemctl enable mysqld
grep 'temporary password' /var/log/mysqld.log
mysql -uroot -p'<temporary password>'
set global validate_password_policy=0;
Set the minimum password length: set global validate_password_length=4;


Change the local root password: alter user 'root'@'localhost' identified by '123456';
Quit: \q
mysql -uroot -p123456
create user 'root'@'%' identified by '123456';
grant all privileges on *.* to 'root'@'%' with grant option;
flush privileges;


/*
On master:
cd /opt/soft
mkdir -p /usr/hive
tar -zxvf /opt/soft/apache-hive-2.1.1-bin.tar.gz -C /usr/hive/
On slave1: mkdir /usr/hive, then copy the installation from master to slave1:
scp -r /usr/hive/apache-hive-2.1.1-bin [email protected]:/usr/hive/
*/
On slave2:
scp /usr/lib/mysql-connector-java-5.1.5-bin.jar [email protected]:/usr/hive/apache-hive-2.1.1-bin/lib



master:
vi /usr/hive/apache-hive-2.1.1-bin/conf/hive-site.xml
<configuration>
<!-- Location of the Hive metadata warehouse -->
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive_remote/warehouse</value>
</property>
<!-- Whether to use a local metastore service (defaults to true) -->
<property>
<name>hive.metastore.local</name>
<value>false</value>
</property>
<!-- URI of the remote metastore server -->
<property>
<name>hive.metastore.uris</name>
<value>thrift://slave1:9083</value>
</property>
</configuration>



slave1: vi /usr/hive/apache-hive-2.1.1-bin/conf/hive-site.xml
<configuration>
<!-- Location of the Hive metadata warehouse -->
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive_remote/warehouse</value>
</property>
<!-- JDBC connection URL of the metastore database -->
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://slave2:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<!-- JDBC driver class: the MySQL driver -->
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<!-- MySQL database user name-->
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
</property>
<!-- database password-->
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123456</value>
</property>
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
</property>
<property>
<name>datanucleus.schema.autoCreateAll</name>
<value>true</value>
</property>
</configuration>
slave1:
vi /usr/hive/apache-hive-2.1.1-bin/conf/hive-env.sh
HADOOP_HOME=/usr/hadoop/hadoop-2.7.3

master:
cp /usr/hive/apache-hive-2.1.1-bin/lib/jline-2.12.jar /usr/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/
Start the Hive metastore service on slave1:
hive --service metastore
master:
hive
show databases;
source /etc/profile
start-hbase.sh
hbase shell
