Spark provides interactive shells

The shell is mainly used for debugging, so a brief look at its usage is enough

 

Shells are provided for multiple languages

including a Scala shell, a Python shell, an R shell, a SQL shell, and so on

spark-shell operates Spark from a Scala shell

pyspark operates Spark from a Python shell

spark-sql runs SQL in Spark SQL mode; Spark SQL is covered later

 

The shell runs in 3 modes

local mode, standalone mode, and YARN mode

The mode is chosen by specifying the master
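For example, the following commands start the Python shell in each mode; the standalone master URL spark://hadoop10:7077 is only an assumed address for this setup, and the thread count is illustrative:

[root@hadoop10 spark]# bin/pyspark --master local[2]                  # local mode with 2 threads
[root@hadoop10 spark]# bin/pyspark --master spark://hadoop10:7077     # standalone mode, pointing at the cluster master
[root@hadoop10 spark]# bin/pyspark --master yarn                      # YARN mode (Spark must be deployed on YARN)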

 

The shell command for Python mode

The --master option specifies the run mode

[root@hadoop10 spark]# bin/pyspark --help
Usage: ./bin/pyspark [options]

Options:
  --master MASTER_URL         spark://host:port, mesos://host:port, yarn,               # sets the master, i.e. where Spark runs;
                                                                                        # mesos://host:port is rarely used; yarn requires Spark to be deployed on YARN
                              k8s://https://host:port, or local (Default: local[*]).    # local means local mode: local is a single thread, local[N] uses N threads,
                                                                                        # local[*] uses as many threads as the server has CPU cores
  --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or
                              on one of the worker machines inside the cluster ("cluster")
                              (Default: client).
  --class CLASS_NAME          Your application's main class (for Java / Scala apps).    # the main class to run
  --name NAME                 A name of your application.
  --jars JARS                 Comma-separated list of jars to include on the driver
                              and executor classpaths.
  --packages                  Comma-separated list of maven coordinates of jars to include  # comma-separated Maven coordinates; adds dependencies to the current session
                              on the driver and executor classpaths. Will search the local
                              maven repo, then maven central and any additional remote
                              repositories given by --repositories. The format for the
                              coordinates should be groupId:artifactId:version.
  --exclude-packages          Comma-separated list of groupId:artifactId, to exclude while
                              resolving the dependencies provided in --packages to avoid
                              dependency conflicts.
  --repositories              Comma-separated list of additional remote repositories to
                              search for the maven coordinates given with --packages.
  --py-files PY_FILES         Comma-separated list of .zip, .egg, or .py files to place  # comma-separated list of .zip/.egg/.py files added to the PYTHONPATH;
                              on the PYTHONPATH for Python apps.    # without this (or a manually set PYTHONPATH), modules inside those files cannot be imported
  --files FILES               Comma-separated list of files to be placed in the working
                              directory of each executor. File paths of these files
                              in executors can be accessed via SparkFiles.get(fileName).

  --conf PROP=VALUE           Arbitrary Spark configuration property.
  --properties-file FILE      Path to a file from which to load extra properties. If not
                              specified, this will look for conf/spark-defaults.conf.

  --driver-memory MEM         Memory for driver (e.g. 1000M, 2G) (Default: 1024M).
  --driver-java-options       Extra Java options to pass to the driver.
  --driver-library-path       Extra library path entries to pass to the driver.
  --driver-class-path         Extra class path entries to pass to the driver. Note that
                              jars added with --jars are automatically included in the
                              classpath.

  --executor-memory MEM       Memory per executor (e.g. 1000M, 2G) (Default: 1G).

  --proxy-user NAME           User to impersonate when submitting the application.
                              This argument does not work with --principal / --keytab.

  --help, -h                  Show this help message and exit.
  --verbose, -v               Print additional debug output.
  --version,                  Print the version of current Spark.

 Cluster deploy mode only:
  --driver-cores NUM          Number of cores used by the driver, only in cluster mode
                              (Default: 1).

 Spark standalone or Mesos with cluster deploy mode only:
  --supervise                 If given, restarts the driver on failure.
  --kill SUBMISSION_ID        If given, kills the driver specified.
  --status SUBMISSION_ID      If given, requests the status of the driver specified.

 Spark standalone and Mesos only:
  --total-executor-cores NUM  Total cores for all executors.

 Spark standalone and YARN only:
  --executor-cores NUM        Number of cores per executor. (Default: 1 in YARN mode,
                              or all available cores on the worker in standalone mode)

 YARN-only:
  --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
  --num-executors NUM         Number of executors to launch (Default: 2).
                              If dynamic allocation is enabled, the initial number of
                              executors will be at least NUM.
  --archives ARCHIVES         Comma separated list of archives to be extracted into the
                              working directory of each executor.
  --principal PRINCIPAL       Principal to be used to login to KDC, while running on
                              secure HDFS.
  --keytab KEYTAB             The full path to the file that contains the keytab for the
                              principal specified above. This keytab will be copied to
                              the node running the Application Master via the Secure
                              Distributed Cache, for renewing the login tickets and the
                              delegation tokens periodically.
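As an illustration, several of these options can be combined in a single launch. The values below are only examples, and deps.zip is a hypothetical archive of Python dependencies:

[root@hadoop10 spark]# bin/pyspark --master yarn \
    --num-executors 2 --executor-memory 1G --executor-cores 1 \
    --py-files deps.zip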

 

Entering the Python shell

[root@hadoop10 spark]# bin/pyspark 
Python 2.7.12 (default, Oct  2 2019, 19:43:15) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
19/10/09 18:10:53 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.4.4
      /_/

Using Python version 2.7.12 (default, Oct  2 2019 19:43:15)
SparkSession available as 'spark'.  # a SparkSession is created automatically
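Because the shell pre-creates a SparkSession (spark) and a SparkContext (sc), they can be used immediately; a quick check:

>>> sc.setLogLevel('WARN')      # adjust the log level, as suggested in the startup banner
>>> spark.range(5).count()      # the pre-created SparkSession works without any setup
5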

 

While the shell is running, jobs can be viewed at http://192.168.10.10:4040

 

Shell syntax is the same as in a script, for example:

>>> distFile = sc.textFile('README.md')
>>> distFile.map(lambda x: len(x)).reduce(lambda a, b: a + b)
3847                                                                            
>>> distFile.count()
105
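A slightly longer sketch in the same shell: a simple word count over the same file. The results depend on the contents of README.md, so output is not shown.

>>> words = distFile.flatMap(lambda line: line.split())
>>> counts = words.map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b)
>>> counts.takeOrdered(3, key=lambda kv: -kv[1])    # the 3 most frequent words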

 

The spark-submit command

spark-submit submits Spark jobs, i.e. runs script files; it will be explained later using Python as the example.
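As a brief preview, a submission might look like the following, where wordcount.py is a hypothetical script and the master and resource values are only examples:

[root@hadoop10 spark]# bin/spark-submit --master yarn --deploy-mode client \
    --num-executors 2 --executor-memory 1G \
    wordcount.py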