Quickly Setting Up a Hadoop KMS Development and Integration Environment

Overview

Hadoop KMS is a cryptographic key management server built on Hadoop's KeyProvider API. The client is a KeyProvider implementation that talks to the KMS over the KMS HTTP REST API.
KMS and its client have built-in security: they support HTTP SPNEGO Kerberos authentication and HTTPS secure transport.
KMS is a Java web application that runs on a pre-configured web server bundled with the Hadoop distribution (Tomcat in Hadoop 2.x; an embedded Jetty server since Hadoop 3.x).
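For example, once the KMS described later in this post is running, the standard KeyProvider client can be exercised through the hadoop key CLI (a hedged sketch; the provider URI below is the address used later in this post):

# list keys (with metadata) through the KeyProvider client, which talks to the KMS REST API
hadoop key list -provider kms://http@172.19.0.10:9600/kms -metadata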

Quick Setup

Since KMS ships with Hadoop, the fastest approach is to use a containerized Hadoop environment:

https://hub.docker.com/r/gradiant/hdfs

https://github.com/Gradiant/dockerized-hadoop

See https://github.com/Gradiant/dockerized-hadoop/blob/master/docker-compose.yml

Based on that docker-compose file and the KMS documentation (https://hadoop.apache.org/docs/current/hadoop-kms/index.html), the container environment is set up as follows.

 

Generating the Keystore

keytool -genkey -alias 'kmskey' -keystore ./kms.jks -dname "CN=localhost, OU=localhost, O=localhost, L=SH, ST=SH, C=CN" -keypass demokms -storepass demokms -validity 36500
echo "demokms" > kms.keystore.password
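To sanity-check the generated keystore (optional), its contents can be listed with the same store password:

keytool -list -keystore ./kms.jks -storepass demokms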

kms-site.xml Configuration

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

  <!-- KMS Backend KeyProvider -->
  <property>
    <name>hadoop.kms.key.provider.uri</name>
    <value>jceks://file@/opt/hadoop/key/kms.jks</value>
    <description>
      URI of the backing KeyProvider for the KMS.
    </description>
  </property>

  <property>
    <name>hadoop.security.keystore.java-keystore-provider.password-file</name>
    <value>kms.keystore.password</value>
    <description>
      If using the JavaKeyStoreProvider, the password for the keystore file.
    </description>
  </property>

  <property>
    <name>dfs.encryption.key.provider.uri</name>
    <value>kms://http@172.19.0.10:9600/kms</value>
  </property>

  <property>
    <name>hadoop.kms.authentication.type</name>
    <value>simple</value>
    <description>
      Authentication type for the KMS. Can be either "simple" or "kerberos".
    </description>
  </property>

</configuration>
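The dfs.encryption.key.provider.uri entry above is what points HDFS at this KMS. As a hedged sketch only (it assumes the NameNode and clients are also configured with the same provider URI, e.g. hadoop.security.key.provider.path in core-site.xml, which is not part of this post's setup), HDFS transparent encryption could then be exercised like this:

# run inside the namenode container as the hdfs user, after creating 'testkey' (see the REST section below)
hdfs dfs -mkdir /secure
hdfs crypto -createZone -keyName testkey -path /secure
hdfs crypto -listZones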

docker-compose Configuration and Startup

# https://github.com/Gradiant/dockerized-hadoop
# http://localhost:50070 for hadoop 2.x
# http://localhost:9870 for hadoop 3.x
# CORE_CONF_fs_defaultFS hdfs://`hostname -f`:8020
#

version: "3"
services:
  namenode:
    image: gradiant/hdfs:3.2.1
    container_name: hdfs-namenode
    environment:
      - HDFS_CONF_dfs_replication=1
    volumes:
      - name:/hadoop/dfs
      - ./sources.list:/etc/apt/sources.list
      - ./kms-site.xml:/opt/hadoop-3.2.1/etc/hadoop/kms-site.xml
      - ./kms.sh:/opt/hadoop/kms.sh
      - ./kms.keystore.password:/opt/hadoop-3.2.1/etc/hadoop/kms.keystore.password
    command:
      - namenode
    ports:
      - 8020:8020
      - 50070:50070
      - 9870:9870
      - 9600:9600
    networks:
      hdfs-networks:
        ipv4_address: 172.19.0.10

  datanode-0:
    image: gradiant/hdfs:3.2.1
    container_name: hdfs-datanode1
    environment:
      - CORE_CONF_fs_defaultFS=hdfs://namenode:8020
      - HDFS_CONF_dfs_replication=1
    volumes:
      - data-0:/hadoop/dfs
      - ./sources.list:/etc/apt/sources.list
    command:
      - datanode
    networks:
      hdfs-networks:
        ipv4_address: 172.19.0.11

volumes:
  data-0:
  name:

networks:
  hdfs-networks:
    ipam:
      driver: default
      config:
        - subnet: 172.19.0.0/16

 

Start the cluster with: docker-compose up -d
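To confirm the containers came up, check their status and the NameNode logs (the Hadoop 3.x web UI is on port 9870, as noted in the compose file comments):

docker-compose ps
docker logs hdfs-namenode | tail -n 50
# NameNode web UI: http://localhost:9870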

 

Debian apt mirror used by the containers: sources.list

 

deb http://mirrors.aliyun.com/debian/ buster main non-free contrib

deb http://mirrors.aliyun.com/debian-security buster/updates main

deb http://mirrors.aliyun.com/debian/ buster-updates main non-free contrib

deb http://mirrors.aliyun.com/debian/ buster-backports main non-free contrib

 

Starting KMS

# The Hadoop services in the container run as the hdfs user, so the permissions on kms.jks must match what that user expects inside the container; otherwise key generation will fail with a permission error.

docker exec -it hdfs-namenode bash -c "mkdir -p /opt/hadoop/key"

docker cp kms.jks hdfs-namenode:/opt/hadoop/key/
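# Optional: if kms.jks ends up owned by root after docker cp, hand it back to the hdfs user.
# (The exact group name depends on the gradiant/hdfs image, so only the owner is changed here.)
docker exec -u root hdfs-namenode chown -R hdfs /opt/hadoop/key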

 

docker exec -itd hdfs-namenode /opt/hadoop/kms.sh

kms.sh contains:

#!/bin/bash

nohup hadoop --daemon start kms
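Per the same KMS documentation, the daemon can later be stopped with the matching command:

hadoop --daemon stop kms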

Tip

docker exec -u root -it hdfs-namenode bash lets you enter the container as root to install tools for diagnostics and inspection. The image is based on Debian 10 and ships with very few packages; after apt-get update (pulling from the Aliyun mirror above), install what you need, e.g. netstat via apt-get install net-tools.
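For example, once net-tools is installed this way, you can verify that KMS is actually listening on port 9600:

docker exec -u root -it hdfs-namenode bash -c "netstat -lntp | grep 9600"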

 

If nothing goes wrong, the service is now ready to use.

REST Access

See the official documentation: https://hadoop.apache.org/docs/current/hadoop-kms/index.html

 

# Without ?user.name=hdfs the request is rejected with 401 Unauthorized

# curl -X GET http://172.19.0.10:9600/kms/v1/keys/names

curl -X GET http://172.19.0.10:9600/kms/v1/keys/names?user.name=hdfs

# curl -i --header "Accept:application/json" -H "Content-Type:application/json" -X GET http://172.19.0.10:9600/kms/v1/keys/names?user.name=hdfs

 

#https://hadoop.apache.org/docs/current/hadoop-kms/index.html

#Create a Key

curl -X POST http://172.19.0.10:9600/kms/v1/keys?user.name=hdfs -H 'Content-Type: application/json' -d'

{

  "name"        : "testkey",

  "cipher"      : "AES_128_CBC",

  "length"      : 128,

  "material"    : "1234567812345678123456",

  "description" : "demo"

}

'

#Get Key Metadata

curl -X GET http://172.19.0.10:9600/kms/v1/key/testkey/_metadata?user.name=hdfs

#Get Current Key

curl -X GET http://172.19.0.10:9600/kms/v1/key/testkey/_currentversion?user.name=hdfs

 

curl -X GET http://172.19.0.10:9600/kms/v1/keys/names?user.name=hdfs

 

#Generate Encrypted Key for Current KeyVersion

curl -X GET "http://172.19.0.10:9600/kms/v1/key/testkey/_eek?eek_op=generate&num_keys=3&user.name=hdfs" | tee -a /tmp/k.json

 

# Decrypt Encrypted Key

# take the first returned key

IV=`jq ".[0].iv" /tmp/k.json`

MAT=`jq ".[0].encryptedKeyVersion.material" /tmp/k.json`

NAME=`jq ".[0].encryptedKeyVersion.name" /tmp/k.json`

 

curl -X POST "http://172.19.0.10:9600/kms/v1/keyversion/testkey@0/_eek?eek_op=decrypt&user.name=hdfs"  -H 'Content-Type: application/json' -d'

{

  "name"        : '${NAME}',

  "iv"          : '${IV}',

  "material"    : '${MAT}'

}

'

 

#Delete Key

curl -X DELETE http://172.19.0.10:9600/kms/v1/key/testkey?user.name=hdfs

Summary

Using a single container, Hadoop KMS can be deployed very quickly, and other applications can then integrate with and use it right away.

The REST access-control details touched on above are not covered here, but this setup is enough to complete integration testing.

When username/password or Kerberos authentication is required, only minor adjustments to the REST integration are needed.
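For instance, against a Kerberos-enabled KMS the same REST calls are made with SPNEGO instead of the user.name query parameter (a hedged sketch; the principal and realm are environment-specific placeholders):

kinit your-principal@YOUR.REALM   # obtain a Kerberos ticket first (hypothetical principal)
curl --negotiate -u : "http://172.19.0.10:9600/kms/v1/keys/names"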

 
