PyCharm (Linux) + Hadoop + Spark (Environment Setup)
2021-05-03 by pt
PyCharm download: JetBrains official website.
Switch the package source to the Aliyun mirror.
Open a terminal from the desktop:
sudo apt-get update
sudo apt-get install vim ## install the vim editor
sudo apt-get install openssh-server ## install the SSH server for remote login (client/server)
Edit the hostname and the IP-to-hostname mapping:
sudo vim /etc/hostname
sudo vim /etc/hosts
Enable remote login by editing the SSH daemon configuration, then restart the service:
sudo vim /etc/ssh/sshd_config
sudo service ssh restart
Passwordless login:
ssh-keygen ## press Enter through all prompts
[root@master root]# cd ~/.ssh ## (as root) /root/.ssh
[root@master .ssh]ssh-copy-id -i root@master
yes
hadoop ## the account password, entered when prompted
[root@master .ssh]# ssh master ## success if no password prompt appears
#cd ~/.ssh/ # if this directory does not exist, run ssh localhost once first
#ssh-keygen -t rsa # press Enter at every prompt
#cat ./id_rsa.pub >> ./authorized_keys # add the key to the authorized list
If the Xshell client runs into a connection problem at this point (the original screenshot is not available), a reboot resolves it.
Create an apps directory for the applications:
cd /usr/local
mkdir apps
sudo chown -R hadoop:hadoop /usr/local/apps/
Java installation and environment configuration:
- Install Java:
java -version ## check the Java currently present on the system; remove the preinstalled OpenJDK if needed
cd /usr/local/apps/
tar -zvxf /opt/jdk-8u45-linux-x64.tar.gz -C ./
mv jdk1.8.0_45/ java
- Java environment configuration:
vim ~/.bashrc
export JAVA_HOME=/usr/local/apps/java
export PATH=$JAVA_HOME/bin:$PATH
source ~/.bashrc
Hadoop pseudo-distributed setup:
- Hadoop installation:
cd /usr/local/apps
tar -zvxf /opt/hadoop-2.7.1.tar.gz -C ./
mv hadoop-2.7.1 hadoop
- Hadoop environment configuration:
vim ~/.bashrc
# set hadoop environment
export HADOOP_HOME=/usr/local/apps/hadoop
export PATH=${PATH}:${HADOOP_HOME}/bin
export PATH=${PATH}:${HADOOP_HOME}/sbin ## so the DFS cluster can be started from any directory
source ~/.bashrc
- Hadoop pseudo-distributed configuration files:
Configuration file 1: hadoop-env.sh
cd /usr/local/apps/hadoop
cd etc/hadoop/
vim hadoop-env.sh
# around line 26:
export JAVA_HOME=/usr/local/apps/java
Configuration file 2: core-site.xml
vim core-site.xml
<!-- Specify the address of the HDFS master (NameNode) -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
</property>
<!-- Specify the storage directories for files generated at runtime -->
<!-- (alternatively a single hadoop.tmp.dir such as /data/hadoop/tmp can be used) -->
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/apps/hadoop/tmp/dfs/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/apps/hadoop/tmp/dfs/data</value>
</property>
### Create the runtime storage directories
cd /usr/local/apps/
mkdir -p hadoop/tmp/dfs # create the directory tree
# If mkdir reports "File exists" (e.g. mkdir: cannot create directory '/data/hadoop/tmp': File exists),
# clear the old contents first: rm -rf /data/hadoop/tmp/*
cd hadoop/tmp/dfs
mkdir data
mkdir name
Configuration file 3: hdfs-site.xml
<!-- Specify the number of HDFS replicas -->
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
Configuration file 4: slaves
vim slaves
# replace localhost with:
master
hadoop version ## check the version
Format the NameNode:
cd /usr/local/apps/hadoop
# hadoop version ## check the version, e.g.:
# HS_12@master:/usr/local/apps/hadoop/bin$ hadoop version
# Hadoop 2.7.1
# Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a
# Compiled by jenkins on 2015-06-29T06:04Z
# Compiled with protoc 2.5.0
# From source with checksum fc0a1a23fc1868e4d5ee7fa2b28a58a
./bin/hdfs namenode -format
Start the pseudo-distributed cluster:
cd /usr/local/apps/hadoop
./sbin/start-dfs.sh
Create the HDFS user directory:
cd /usr/local/apps/hadoop
./bin/hdfs dfs -mkdir -p /user/hadoop
./bin/hdfs dfs -ls /user/hadoop
Spark installation:
- Install Spark:
cd /usr/local/apps
tar -zxvf /opt/spark-2.1.0-bin-without-hadoop.tgz -C ./
mv spark-2.1.0-bin-without-hadoop/ spark
- Spark environment configuration:
vim ~/.bashrc
# set spark environment
export HADOOP_HOME=/usr/local/apps/hadoop
export SPARK_HOME=/usr/local/apps/spark
##export PYTHONPATH=$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.10.4-src.zip:$PYTHONPATH
export PYSPARK_PYTHON=python3
export PATH=$HADOOP_HOME/bin:$SPARK_HOME/bin:$PATH
source ~/.bashrc
- Spark configuration files:
[root@master conf]# pwd
/usr/local/apps/spark/conf
[root@master conf]# cp spark-env.sh.template spark-env.sh
[root@master conf]# vim spark-env.sh
export SPARK_DIST_CLASSPATH=$(/usr/local/apps/hadoop/bin/hadoop classpath)
#### With the line above, Spark can store data in and read data from the Hadoop distributed file system HDFS; without it, Spark can only read and write local data. ####
export JAVA_HOME=/usr/local/apps/java
export HADOOP_HOME=/usr/local/apps/hadoop
export HADOOP_CONF_DIR=/usr/local/apps/hadoop/etc/hadoop
#export SCALA_HOME=/usr/local/apps/scala
export SPARK_MASTER_IP=master
export SPARK_WORKER_MEMORY=512M
Edit the worker list; this is a single-machine deployment, so use this host's own name:
cd /usr/local/apps/spark/conf
[root@master conf]# cp slaves.template slaves
[root@master conf]# vi slaves
# delete localhost and add:
master
[root@master sbin]# pwd
/usr/local/apps/spark/sbin
[root@master sbin]# ./start-all.sh
Run jps to check whether Spark has started:
[root@master sbin]# jps
87571 DataNode
98067 Master
98243 Jps
94578 QuorumPeerMain
95554 HRegionServer
87765 SecondaryNameNode
87940 ResourceManager
87415 NameNode
98172 Worker
88063 NodeManager
95407 HMaster
# success
#######
cd /usr/local/apps/spark
bin/run-example SparkPi 2>&1 | grep "Pi is"
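As a further check that the SPARK_DIST_CLASSPATH setting above really lets Spark reach HDFS, a short PySpark script can be run with spark-submit. This is only a sketch: it assumes the HDFS daemons are running and that a small text file has already been uploaded, e.g. with ./bin/hdfs dfs -put ~/wordtest.txt /user/hadoop/ (file name and path are placeholders).
# hdfs_check.py -- minimal sketch; run with: spark-submit hdfs_check.py
from pyspark import SparkConf, SparkContext
conf = SparkConf().setAppName("HdfsCheck").setMaster("local")
sc = SparkContext(conf=conf)
# Read through the hdfs:// address configured in core-site.xml.
rdd = sc.textFile("hdfs://master:9000/user/hadoop/wordtest.txt")
print("lines in file:", rdd.count())  # a line count (instead of an error) means HDFS access works
sc.stop()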
#################
PyCharm (Linux) installation and startup:
PyCharm environment configuration:
### set pycharm
export PyCharm_HOME=/usr/local/apps/pycharm
export PATH=${PyCharm_HOME}/bin:$PATH
alias python="/usr/bin/python3.5"
#export PATH=$PYTHONPATH:$SPARK_HOME/python/:$SPARK_HOME/python/lib/py4j-0.10.4-src.zip:$PYTHONSPARK
###################################################################################
#export PATH=$HADOOP_HOME/bin:$SPARK_HOME/bin:$PATH
Change the default Python version in the VM to Python 3.5.
# Switching between Python 3 and Python 2 on Ubuntu 16.04
# Make Python 3 the default version (higher priority wins):
sudo update-alternatives --install /usr/bin/python python /usr/bin/python2 100
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 150
# To make Python 2 the default version again, choose it interactively:
sudo update-alternatives --config python
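To confirm which interpreter the plain python command now resolves to, a one-line check can be run (a trivial sketch, nothing specific to this setup):
# check_default_python.py -- run with: python check_default_python.py
import sys
print(sys.executable)  # path of the interpreter "python" now points to
print(sys.version)     # should report a 3.5.x version after the switch above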
RDD WordCount programming test
1. Check the text file used for the word count:
pwd # current path
ls
2. Go to PyCharm's bin directory:
pwd # check the current path
cd /usr/local/apps/pycharm/bin/
./pycharm.sh
3. Run the command above to start PyCharm.
4. Configure the Spark environment in PyCharm:
Step 1:
Click "Add Configuration" in the top-right corner of PyCharm (or open it via the "Run" menu). In the window that pops up, click the "+" in the top-left corner, choose "Python", name the configuration Spark, and tick the "Shared" option on the right.
Then click the button at the right end of the "Environment variables" field to configure environment variables.
Step 2: Configure the Spark and PySpark environment variables, named SPARK_HOME and SPARKPYTHON, whose values are the Spark installation path and the pyspark path respectively.
Click OK to finish the environment variable configuration.
Step 3: Import the required library (the pyspark module).
In the menu bar choose "File" -> "Settings" -> "Project Structure" and click "Add Content Root" in the top-right corner.
Go to the python directory under the Spark installation and add the two archives (pyspark.zip and py4j-0.10.4-src.zip under python/lib).
Click OK to finish the configuration.
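As an alternative to (or a safeguard alongside) the Content Root step, the script itself can put Spark's Python bindings on sys.path before importing pyspark. This is a minimal sketch, assuming the paths used in this guide (/usr/local/apps/spark and py4j-0.10.4):
# Put Spark's Python bindings on sys.path directly (paths as used in this guide).
import os
import sys
SPARK_HOME = "/usr/local/apps/spark"
os.environ.setdefault("SPARK_HOME", SPARK_HOME)
sys.path.insert(0, os.path.join(SPARK_HOME, "python"))
sys.path.insert(0, os.path.join(SPARK_HOME, "python", "lib", "py4j-0.10.4-src.zip"))
from pyspark import SparkContext  # should now import without errors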
5. Run a PySpark program in PyCharm:
Create a wordcount.py file and enter the following code:
# -*- coding: utf-8 -*-
import os
os.environ["JAVA_HOME"] = "/usr/local/apps/java"
from pyspark import SparkConf, SparkContext
conf = SparkConf().setAppName("WordCount").setMaster("local")
sc = SparkContext(conf=conf)
#inputFile = "hdfs://master:9000/user/hadoop/input/wordtest.txt" ## read the file from HDFS
inputFile = "file:///root/wordtest.txt" # read a local file
textFile = sc.textFile(inputFile)
wordCount = textFile.flatMap(lambda line : line.split(" ")).map(lambda word : (word, 1)).reduceByKey(lambda a, b : a + b)
wordCount.foreach(print)
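If a sorted listing or a saved result is wanted rather than just printing each pair, the wordCount RDD above can be extended; a small optional sketch (the output path is a placeholder):
# Optional follow-up: print the ten most frequent words, then save the full result.
topTen = wordCount.sortBy(lambda kv: kv[1], ascending=False).take(10)
for word, count in topTen:
    print(word, count)
wordCount.saveAsTextFile("file:///root/wordcount_output")  # placeholder path; the directory must not already exist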
6. Spark computation result:
[Note]: If PyCharm hits this problem when running:
Python in worker has different version 2.7 than that in driver 3.5,
PySpark cannot run with different minor versions.
Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set
add the runtime environment setting in the script (before the SparkContext is created):
import os
##########
os.environ["PYSPARK_PYTHON"] = "/usr/bin/python3"