Comments (11)
This is the error reported when submitting a job through Spark 3.2.4:
23/05/18 17:43:00 INFO SparkEntries: Created Spark session.
23/05/18 17:43:00 INFO BlockManagerMasterEndpoint: Registering block manager localhost:33381 with 912.3 MiB RAM, BlockManagerId(2, localhost, 33381, None)
23/05/18 17:43:25 ERROR PythonInterpreter: Process has died with 1
23/05/18 17:43:25 ERROR PythonInterpreter: Traceback (most recent call last):
  File "/tmp/8001765088071541442", line 722, in <module>
    sys.exit(main())
  File "/tmp/8001765088071541442", line 570, in main
    exec('from pyspark.sql import HiveContext', global_dict)
  File "<string>", line 1, in <module>
  File "/home/cocdkl/soft/spark-3.2.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/__init__.py", line 53, in <module>
  File "/home/cocdkl/soft/spark-3.2.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/rdd.py", line 34, in <module>
  File "/home/cocdkl/soft/spark-3.2.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/java_gateway.py", line 31, in <module>
  File "/home/cocdkl/soft/spark-3.2.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/find_spark_home.py", line 68
    print("Could not find valid SPARK_HOME while searching {0}".format(paths), file=sys.stderr)
                                                                                   ^
SyntaxError: invalid syntax
The strange thing is that the Spark cluster on YARN starts up fine, but as soon as I submit an actual code job, this error is thrown.
I hope to find a solution; thank you.
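The SyntaxError itself points at the root cause: `print(..., file=sys.stderr)` is print-function syntax, which Python 2 rejects at parse time unless `from __future__ import print_function` is in effect. A minimal sketch of the same failure mode (the message is taken from the traceback above):

```python
# The failing line in find_spark_home.py uses the Python-3 print
# function with a `file=` keyword argument. A Python 2 interpreter
# rejects this while *parsing* the file, before any code runs,
# which is why the error is a SyntaxError rather than a runtime error.
snippet = 'import sys\nprint("Could not find valid SPARK_HOME", file=sys.stderr)\n'

try:
    compile(snippet, "find_spark_home.py", "exec")
    print("parsed OK -> running under Python 3")
except SyntaxError:
    print("SyntaxError -> running under Python 2")
```

So the Python process Livy launched here is almost certainly a Python 2 interpreter.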
from incubator-livy.
Thank you for taking the time to answer my question.
I configured the following in livy-env.sh:
SPARK_HOME=/home/cocdkl/soft/spark-3.2.4-bin-hadoop2.7
SPARK_CONF_DIR=/home/cocdkl/soft/spark-3.2.4-bin-hadoop2.7/conf
`python -V` reports version 2.7.16.
Running the pyspark command directly from Spark works fine.
I also tried editing the /pyspark.zip/pyspark/find_spark_home.py script, hard-coding its return value to my local SPARK_HOME directory, but then other errors appear.
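That `python -V` reports 2.7.16 is the key detail: PySpark in Spark 3.2.x requires Python 3.6+ (Python 2 support was dropped in Spark 3.0), so whichever interpreter Livy launches must be Python 3. A sketch of the usual interpreter selection, under the assumption that with neither variable set the launcher falls back to plain `python` on PATH (which is what appears to resolve to 2.7.16 here):

```python
import os
import sys

# PySpark honors these environment variables when choosing which
# Python to launch; the final "python" fallback is an assumption
# about this particular setup, where PATH resolves to Python 2.7.16.
python_cmd = (
    os.environ.get("PYSPARK_DRIVER_PYTHON")  # driver-side override
    or os.environ.get("PYSPARK_PYTHON")      # worker/driver default
    or "python"                              # assumed fallback: PATH lookup
)
print("interpreter PySpark would launch:", python_cmd)

# Sanity check for the interpreter running this snippet:
assert sys.version_info >= (3, 6), "Spark 3.2 needs Python 3.6+"
print("this interpreter is fine:", sys.version.split()[0])
```

The practical fix is therefore likely exporting PYSPARK_PYTHON (and PYSPARK_DRIVER_PYTHON, if needed) to a Python 3 interpreter in livy-env.sh or spark-env.sh, rather than patching find_spark_home.py.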
Here is what I changed in my pom file:
<scala.binary.version>2.12</scala.binary.version>
<scala.version>2.12.15</scala.version>
<spark.version>3.2.4</spark.version>
<groupId>net.alchim31.maven</groupId>
<artifactId>scala-maven-plugin</artifactId>
<version>4.8.1</version>
Only the corresponding version values were changed.
Thanks again for answering my question.
Here is my environment variable configuration:
export PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/sbin:/usr/sbin
export JAVA_HOME=/home/cocdkl/soft/jdk1.8.0_371
export HADOOP_HOME=/home/cocdkl/soft/hadoop-3.3.0
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HIVE_HOME=/home/cocdkl/soft/apache-hive-3.1.2-bin
export HIVE_CONF_DIR=/home/cocdkl/soft/apache-hive-3.1.2-bin/conf
#export HIVE_HOME=/home/cocdkl/soft/apache-hive-3.1.3-bin
#export HIVE_CONF_DIR=/home/cocdkl/soft/apache-hive-3.1.3-bin/conf
#export SPARK_HOME=/home/cocdkl/soft/spark-3.0.3-bin-hadoop3.2
#export SPARK_HOME=/home/cocdkl/soft/spark-3.3.0-bin-hadoop3
#export SPARK_HOME=/home/cocdkl/soft/spark-2.3.0-bin-hadoop2.7
export SPARK_HOME=/home/cocdkl/soft/spark-3.2.4-bin-hadoop2.7
#export SPARK_HOME=/home/cocdkl/soft/spark-2.4.8-bin-without-hadoop
export SCALA_HOME=/home/cocdkl/soft/scala-2.13.8
export HBASE_HOME=/home/cocdkl/soft/hbase-2.4.15
export ZOOKEEPER_HOME=/home/cocdkl/soft/apache-zookeeper-3.7.1-bin
export PATH=$JAVA_HOME/bin:$PATH
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export PATH=$HIVE_HOME/bin:$PATH
export PATH=$SPARK_HOME/bin:$PATH
export PATH=$SCALA_HOME/bin:$PATH
export PATH=$HBASE_HOME/bin:$PATH
export PATH=$ZOOKEEPER_HOME/bin:$PATH
export CLASSPATH=''
export HADOOP_CLASSPATH=$(hadoop classpath)
#export HADOOP_CLASSPATH=/home/cocdkl/soft/HADOOP_CLASSPATH/*
SPARK_HOME is already set, yet find_spark_home.py still gets executed — that is the part that puzzles me.
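Setting SPARK_HOME does not keep find_spark_home.py out of the picture: as the traceback shows, pyspark's java_gateway module imports it unconditionally, and Python must parse the whole file at import time. Under Python 2 the parse itself fails on the Python-3-only line, before the SPARK_HOME check ever runs. A simplified sketch (the file body below is an illustrative stand-in, not the real find_spark_home.py):

```python
import os

# Even though the function would return os.environ["SPARK_HOME"]
# immediately, Python parses the *entire* file at import time.
# A Python-3-only line anywhere in it raises SyntaxError under
# Python 2 before SPARK_HOME is ever read.
source = '''
import os, sys

def _find_spark_home():
    if "SPARK_HOME" in os.environ:
        return os.environ["SPARK_HOME"]
    print("Could not find valid SPARK_HOME", file=sys.stderr)
'''

os.environ["SPARK_HOME"] = "/home/cocdkl/soft/spark-3.2.4-bin-hadoop2.7"
code = compile(source, "find_spark_home.py", "exec")  # the parse happens here
print("parsed fine under Python 3; Python 2 fails on the compile() step")
```

This is also why hard-coding the script's return value did not help: the SyntaxError occurs before any return value is computed.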
Alternatively, do you have a prebuilt Livy that supports Spark 3.2? Could you share it, or update the download package on the official site? The site still only offers 0.7.1.
First, thank you for your answer.
Next, should I dig through the source code, or is there somewhere to configure a debug option?
Finally, you said you did not modify the code — did you modify your pom? The default pom is configured for Spark 2.4.5 and Scala 2.11. Could you share your pom file that supports Spark 3.2.0?
Sorry, I still have not solved it. Could you provide a pom file that supports Spark 3.2.0?