AWS Study Notes Summary
The following study notes are for reference only (deploying an enterprise application to the cloud).
Setup Steps
1. VPC (two AZs, NAT enabled). Keep all services on private subnets and spread them across AZs wherever possible.
2. Create security groups (mind the naming; inbound rules can allow 0.0.0.0/0 for now: Memcached 11211, Redis 6379, EFS 2049, HTTPS 443, MySQL 3306, PostgreSQL 5432, web 80). Create the RDS/cache/EC2/ELB/EFS security groups first, and remember to tag them. Five security groups in total.
3. RDS (create the database in a private subnet group; enable encryption, automatic backups, and log exports; remember tags).
4. Memcached/Redis (private subnets, two AZs, remember tags; Redis with one primary and one replica; enable logs). Note: with in-transit encryption enabled, the plain client cannot connect.
5. EC2 (mind the IAM role; EFS needs the mount helper/agent).
6. EFS (create it); mount the log directory via /etc/rc.local.
7. Target group (lower the healthy and unhealthy threshold counts; make the check interval slightly longer than the response timeout).
8. Load balancer (enable deletion protection; use its own security group).
9. CloudFront (forward cookies and query strings; Origin Shield).
10. Launch configuration (enable monitoring; do not assign a public IP; use the EC2 security group).
11. ASG (use private subnets; enable ELB metrics; shorten the health check grace period; do not use warm pools; termination policy oldest or newest instance; termination cooldown 30 s).
12. Target group (set deregistration delay to 0 seconds; load balancing algorithm: least outstanding requests).
13. Dynamic scaling policy (ELB step scaling on RequestCount: >60 → 2, >120 → 3, >180 → 4, >240 → 5, >300 → 6, >360 → 7, >420 → 8, >480 → 9, >540 → 10).
14. Security group updates (ALB SG: 0.0.0.0/0:7777; EC2 SG: alb_sg:7777; RDS SG: ec2:port; cache SG: ec2:port; EFS SG: ec2:port).
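The step-scaling thresholds in step 13 can be sketched as a pure function (a hypothetical helper for reasoning about the policy, not an AWS API call) that maps a RequestCount alarm value to the capacity the policy would target:

```python
# Step-scaling table from step 13: RequestCount > 60 -> 2 instances, > 120 -> 3, ..., > 540 -> 10.
STEPS = [(60, 2), (120, 3), (180, 4), (240, 5), (300, 6), (360, 7), (420, 8), (480, 9), (540, 10)]

def desired_capacity(request_count, baseline=1):
    """Return the capacity the step policy would target for a given RequestCount."""
    capacity = baseline
    for threshold, target in STEPS:
        if request_count > threshold:
            capacity = target  # the highest matched step wins
    return capacity
```

This mirrors how step scaling picks the step whose bound the metric exceeds; below the first threshold the group stays at its baseline.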
Reference Code
1. findEgg.sh (log processing)
#!/bin/bash
while true; do
    echo "************************************************************"
    date
    # ==============================================================
    userID=USER_ID
    gameID=GAME_ID
    REFUND_ID=$(grep "Refund" server.log | awk -F ':' 'END{sub(/^[ ]/,"");sub(/[ ]$/,"");print $4}' | sed 's/ //g')
    if ((${#REFUND_ID} >= 1)); then
        echo "************************************************************"
        echo "$REFUND_ID"
        body="{\"user_id\":$userID,\"game_id\":$gameID,\"refund_id\":\"$REFUND_ID\"}"
        # quote $body so the JSON is passed to curl as a single argument
        curl -i -H "Accept: application/json" -X POST -d "$body" http://<log-processing-url>
        echo "" > server.log
    fi
    sleep 3
done
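The awk pipeline above takes the 4th colon-separated field of the last line containing "Refund" and strips spaces. The same extraction as a Python sketch (assuming the same colon-delimited log format):

```python
def extract_refund_id(log_text):
    """Return the 4th ':'-field of the last line containing 'Refund', spaces removed, or None."""
    refund_id = None
    for line in log_text.splitlines():
        if "Refund" in line:
            fields = line.split(":")
            if len(fields) >= 4:
                refund_id = fields[3].replace(" ", "")  # awk's $4 with spaces stripped
    return refund_id
```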
2. Lambda, Python 3.8 (log processing)
import os
import boto3
import urllib.request
import json

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    bucket_name = event['Records'][0]['s3']['bucket']['name']
    file_key = event['Records'][0]['s3']['object']['key']
    temp_file = '/tmp/uuid.txt'
    s3_client.download_file(bucket_name, file_key, temp_file)
    with open(temp_file) as file:
        request_ids = file.read().strip().split('\n')
    for request_id in request_ids:
        print(request_id)
        url = 'http://101.43.14.68:8888/niche/put'
        headers = {'Accept': 'application/json'}
        data = '{"UserId":1,"GameId":55,"NicheId":"' + request_id + '"}'
        req = urllib.request.Request(url, data.encode(), headers=headers)
        response = urllib.request.urlopen(req)
        print(response.read().decode())
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
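The Lambda builds its JSON body by string concatenation, which produces invalid JSON if a request ID ever contains a quote or backslash. A safer sketch builds the payload with json.dumps, which escapes such characters:

```python
import json

def build_payload(request_id, user_id=1, game_id=55):
    """Serialize the request body; json.dumps escapes any special characters in request_id."""
    return json.dumps({"UserId": user_id, "GameId": game_id, "NicheId": request_id})
```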
3. runallEgg.sh (container deployment; the name must match the file the Dockerfile copies and runs)
./findEgg.sh &
./server_demo
4. Dockerfile (container deployment)
FROM centos
WORKDIR /root
COPY server_demo server_demo
COPY findEgg.sh findEgg.sh
COPY runallEgg.sh runallEgg.sh
COPY conf.toml conf.toml
EXPOSE 7777
CMD ["/bin/bash", "runallEgg.sh"]
5. Python: insert into and read from DynamoDB
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('aaaa')

# insert (update_item upserts)
response = table.update_item(
    Key={
        'mykey': mykey,
    },
    UpdateExpression='SET keylambda = :val1',
    ExpressionAttributeValues={
        ':val1': msg
    }
)
# read
res = table.get_item(
Key={
'mykey': mykey,
}
)
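get_item returns a response without an 'Item' key when no item matches the key, so read the result defensively. A small helper sketch (hypothetical, not part of boto3):

```python
def read_attr(response, attr, default=None):
    """Pull one attribute out of a DynamoDB get_item response, tolerating a missing item."""
    item = response.get('Item')
    if item is None:
        return default
    return item.get(attr, default)
```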
6. Python: insert into MySQL (requires the pymysql library)
import pymysql

def insert(sql):
    conn = get_sql()
    cur = conn.cursor()
    cur.execute(sql)
    conn.commit()
    print("insert succeeded")
    cur.close()
    conn.close()
    return "success"

def get_sql():
    conn = pymysql.connect(
        user="username",
        password="password",
        host="database-host",
        port=3306,
        charset="utf8mb4",
        database="database-name",
    )
    return conn
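Passing a fully formatted SQL string into insert() invites SQL injection. pymysql's cursor.execute accepts %s placeholders plus a parameter tuple; the same pattern is demonstrated below with the stdlib sqlite3 driver (which uses ? placeholders) so it runs without a server:

```python
import sqlite3

# in-memory database for demonstration only
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (name TEXT)")
# the driver escapes the value, so the embedded quote cannot break out of the literal
cur.execute("INSERT INTO users (name) VALUES (?)", ("O'Brien",))
conn.commit()
cur.execute("SELECT name FROM users")
rows = cur.fetchall()
```

With pymysql the equivalent call is cur.execute("INSERT INTO t (name) VALUES (%s)", (value,)).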
7. Python: insert into PostgreSQL (requires the psycopg2 library)
import psycopg2

def get_conn():
    conn = psycopg2.connect(
        host="your-host",
        port="your-port",
        database="your-database",
        user="your-username",
        password="your-password"
    )
    return conn

def execute(sql):
    conn = get_conn()
    cur = conn.cursor()
    # query example
    cur.execute(sql)
    rows = cur.fetchall()
    for row in rows:
        print(row)
    # insert example
    # cur.execute("INSERT INTO your_table (column1, column2) VALUES (%s, %s)", ('value1', 'value2'))
    # commit changes
    conn.commit()
    # close the cursor and connection
    cur.close()
    conn.close()
8. Python: process an image and store/read it in S3 (requires the PIL/Pillow library)
from PIL import Image
import json
import boto3

def lambda_handler(event, context):
    s3 = boto3.resource('s3')
    bucketname = event['Records'][0]['messageAttributes']['bucketname']['stringValue']
    picname = event['Records'][0]['messageAttributes']['picname']['stringValue']
    print(bucketname)
    print(picname)
    s3.Object(bucketname, picname).download_file('/tmp/' + picname)
    img = Image.open('/tmp/' + picname, 'r')
    img1 = img.convert('L')  # convert to grayscale
    img1.save('/tmp/modified-name')
    s3.Object(bucketname, 'name-after-upload').upload_file('/tmp/modified-name')
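Pillow's convert('L') maps RGB to luminance using the ITU-R 601-2 transform L = R * 299/1000 + G * 587/1000 + B * 114/1000. One pixel of that conversion, as a pure-Python sketch:

```python
def to_grayscale(r, g, b):
    """ITU-R 601-2 luma transform used by Pillow's convert('L'), truncated to an int."""
    return (r * 299 + g * 587 + b * 114) // 1000
```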
9. EC2 user data
#!/bin/bash
sudo -i
cd /root
yum -y install httpd memcached
systemctl start httpd
systemctl start memcached
systemctl enable httpd
systemctl enable memcached
wget http://101.43.14.68/k_server -O /root/server_demo
wget http://101.43.14.68/conf.toml -O /root/conf.toml
chmod a+x /root/server_demo
# use | as the sed delimiter because the values contain /
sed -i 's|log_path = "/root"|log_path = "/root/log"|g' conf.toml
# adjust memcache_host here if it should point at an ElastiCache endpoint (as written, this line is a no-op)
sed -i 's|memcache_host = "127.0.0.1"|memcache_host = "127.0.0.1"|g' conf.toml
echo "sudo -i" >> /etc/rc.local
echo "cd /root" >> /etc/rc.local
echo "nohup ./server_demo &" >> /etc/rc.local
chmod a+x /etc/rc.local
nohup ./server_demo &
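The sed edits above rewrite keys in conf.toml; because the values contain "/", the sed delimiter must be something else. The same edit in Python (a sketch assuming simple key = "value" lines, with a hypothetical helper name) avoids delimiter problems entirely:

```python
def set_toml_key(text, key, value):
    """Replace the value of a top-level `key = "..."` line, leaving other lines untouched."""
    out = []
    for line in text.splitlines():
        if line.split("=")[0].strip() == key:
            out.append(f'{key} = "{value}"')  # rewrite the whole line for this key
        else:
            out.append(line)
    return "\n".join(out)
```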
10. CloudFormation
{
"AWSTemplateFormatVersion": "2010-09-09",
"Parameters": {
"VpcName": {
"Type": "String",
"Default": "demo-vpc",
"Description": "Name of the VPC"
},
"Region": {
"Type": "String",
"Default": "cn-northwest-1",
"Description": "AWS Region"
},
"DemoSecurityGroupName": {
"Type": "String",
"Default": "server_sg",
"Description": "Name of the Demo Security Group"
},
"CacheSecurityGroupName": {
"Type": "String",
"Default": "cache_sg",
"Description": "Name of the Cache Security Group"
},
"RDSSecurityGroupName": {
"Type": "String",
"Default": "db_sg",
"Description": "Name of the RDS Security Group"
},
"ALBSecurityGroupName": {
"Type": "String",
"Default": "alb_sg",
"Description": "Name of the ALB Security Group"
},
"EFSSecurityGroupName": {
"Type": "String",
"Default": "efs_sg",
"Description": "Name of the EFS Security Group"
},
"DBSubnetGroupName": {
"Type": "String",
"Default": "Db-Subnet-Group",
"Description": "Name of the DB Subnet Group"
},
"CacheSubnetGroupName": {
"Type": "String",
"Default": "Cache-Subnet-Group",
"Description": "Name of the Cache Subnet Group"
}
},
"Resources": {
"VPC": {
"Type": "AWS::EC2::VPC",
"Properties": {
"CidrBlock": "10.0.0.0/16",
"EnableDnsHostnames": true,
"EnableDnsSupport": true,
"Tags": [
{
"Key": "Name",
"Value": {
"Ref": "VpcName"
}
}
]
}
},
"PublicSubnet1": {
"Type": "AWS::EC2::Subnet",
"Properties": {
"VpcId": {
"Ref": "VPC"
},
"CidrBlock": "10.0.1.0/24",
"AvailabilityZone": {
"Fn::Select": [
0,
{
"Fn::GetAZs": {
"Ref": "Region"
}
}
]
},
"Tags": [
{
"Key": "Name",
"Value": {
"Fn::Sub": "${VpcName}-PublicSubnet1"
}
}
]
}
},
"PublicSubnet2": {
"Type": "AWS::EC2::Subnet",
"Properties": {
"VpcId": {
"Ref": "VPC"
},
"CidrBlock": "10.0.2.0/24",
"AvailabilityZone": {
"Fn::Select": [
1,
{
"Fn::GetAZs": {
"Ref": "Region"
}
}
]
},
"Tags": [
{
"Key": "Name",
"Value": {
"Fn::Sub": "${VpcName}-PublicSubnet2"
}
}
]
}
},
"PrivateSubnet1": {
"Type": "AWS::EC2::Subnet",
"Properties": {
"VpcId": {
"Ref": "VPC"
},
"CidrBlock": "10.0.3.0/24",
"AvailabilityZone": {
"Fn::Select": [
0,
{
"Fn::GetAZs": {
"Ref": "Region"
}
}
]
},
"Tags": [
{
"Key": "Name",
"Value": {
"Fn::Sub": "${VpcName}-PrivateSubnet1"
}
}
]
}
},
"PrivateSubnet2": {
"Type": "AWS::EC2::Subnet",
"Properties": {
"VpcId": {
"Ref": "VPC"
},
"CidrBlock": "10.0.4.0/24",
"AvailabilityZone": {
"Fn::Select": [
1,
{
"Fn::GetAZs": {
"Ref": "Region"
}
}
]
},
"Tags": [
{
"Key": "Name",
"Value": {
"Fn::Sub": "${VpcName}-PrivateSubnet2"
}
}
]
}
},
"InternetGateway": {
"Type": "AWS::EC2::InternetGateway",
"Properties": {
"Tags": [
{
"Key": "Name",
"Value": {
"Fn::Sub": "${VpcName}-InternetGateway"
}
}
]
}
},
"VPCGatewayAttachment": {
"Type": "AWS::EC2::VPCGatewayAttachment",
"Properties": {
"VpcId": {
"Ref": "VPC"
},
"InternetGatewayId": {
"Ref": "InternetGateway"
}
}
},
"PublicRouteTable": {
"Type": "AWS::EC2::RouteTable",
"Properties": {
"VpcId": {
"Ref": "VPC"
},
"Tags": [
{
"Key": "Name",
"Value": {
"Fn::Sub": "${VpcName}-PublicRouteTable"
}
}
]
}
},
"PublicRoute": {
"Type": "AWS::EC2::Route",
"Properties": {
"RouteTableId": {
"Ref": "PublicRouteTable"
},
"DestinationCidrBlock": "0.0.0.0/0",
"GatewayId": {
"Ref": "InternetGateway"
}
}
},
"PrivateRouteTable1": {
"Type": "AWS::EC2::RouteTable",
"Properties": {
"VpcId": {
"Ref": "VPC"
},
"Tags": [
{
"Key": "Name",
"Value": {
"Fn::Sub": "${VpcName}-PrivateRouteTable1"
}
}
]
}
},
"PrivateRoute1": {
"Type": "AWS::EC2::Route",
"Properties": {
"RouteTableId": {
"Ref": "PrivateRouteTable1"
},
"DestinationCidrBlock": "0.0.0.0/0",
"NatGatewayId": {
"Ref": "NATGateway1"
}
}
},
"PrivateRouteTable2": {
"Type": "AWS::EC2::RouteTable",
"Properties": {
"VpcId": {
"Ref": "VPC"
},
"Tags": [
{
"Key": "Name",
"Value": {
"Fn::Sub": "${VpcName}-PrivateRouteTable2"
}
}
]
}
},
"PrivateRoute2": {
"Type": "AWS::EC2::Route",
"Properties": {
"RouteTableId": {
"Ref": "PrivateRouteTable2"
},
"DestinationCidrBlock": "0.0.0.0/0",
"NatGatewayId": {
"Ref": "NATGateway2"
}
}
},
"PublicSubnetRouteTableAssociation1":{
"Type":"AWS::EC2::SubnetRouteTableAssociation",
"Properties":{
"SubnetId":{
"Ref":"PublicSubnet1"
},
"RouteTableId":{
"Ref":"PublicRouteTable"
}
}
},
"PublicSubnetRouteTableAssociation2":{
"Type":"AWS::EC2::SubnetRouteTableAssociation",
"Properties":{
"SubnetId":{
"Ref":"PublicSubnet2"
},
"RouteTableId":{
"Ref":"PublicRouteTable"
}
}
},
"PrivateSubnetRouteTableAssociation1":{
"Type":"AWS::EC2::SubnetRouteTableAssociation",
"Properties":{
"SubnetId":{
"Ref":"PrivateSubnet1"
},
"RouteTableId":{
"Ref":"PrivateRouteTable1"
}
}
},
"PrivateSubnetRouteTableAssociation2":{
"Type":"AWS::EC2::SubnetRouteTableAssociation",
"Properties":{
"SubnetId":{
"Ref":"PrivateSubnet2"
},
"RouteTableId":{
"Ref":"PrivateRouteTable2"
}
}
},
"DemoSecurityGroup": {
"Type": "AWS::EC2::SecurityGroup",
"Properties": {
"GroupDescription": "Demo security group",
"GroupName": {
"Fn::Sub": "${DemoSecurityGroupName}"
},
"VpcId": {
"Ref": "VPC"
},
"SecurityGroupIngress": [
{
"IpProtocol": "tcp",
"FromPort": 22,
"ToPort": 22,
"CidrIp": "0.0.0.0/0"
},
{
"IpProtocol": "tcp",
"FromPort": 7777,
"ToPort": 7777,
"CidrIp": "0.0.0.0/0"
}
],
"SecurityGroupEgress": [
{
"IpProtocol": "-1",
"CidrIp": "0.0.0.0/0"
}
],
"Tags": [
{
"Key": "Name",
"Value": {
"Fn::Sub": "${DemoSecurityGroupName}"
}
}
]
}
},
"CacheSecurityGroup": {
"Type": "AWS::EC2::SecurityGroup",
"Properties": {
"GroupDescription": "Cache security group",
"GroupName": {
"Fn::Sub": "${CacheSecurityGroupName}"
},
"VpcId": {
"Ref": "VPC"
},
"SecurityGroupIngress": [
{
"IpProtocol": "tcp",
"FromPort": 11211,
"ToPort": 11211,
"SourceSecurityGroupId": {
"Fn::GetAtt": [
"DemoSecurityGroup",
"GroupId"
]
}
}
],
"SecurityGroupEgress": [
{
"IpProtocol": "-1",
"CidrIp": "0.0.0.0/0"
}
],
"Tags": [
{
"Key": "Name",
"Value": {
"Fn::Sub": "${CacheSecurityGroupName}"
}
}
]
}
},
"RDSSecurityGroup": {
"Type": "AWS::EC2::SecurityGroup",
"Properties": {
"GroupDescription": "RDS security group",
"GroupName": {
"Fn::Sub": "${RDSSecurityGroupName}"
},
"VpcId": {
"Ref": "VPC"
},
"SecurityGroupIngress": [
{
"IpProtocol": "tcp",
"FromPort": 3306,
"ToPort": 3306,
"SourceSecurityGroupId": {
"Fn::GetAtt": [
"DemoSecurityGroup",
"GroupId"
]
}
},
{
"IpProtocol": "tcp",
"FromPort": 5432,
"ToPort": 5432,
"SourceSecurityGroupId": {
"Fn::GetAtt": [
"DemoSecurityGroup",
"GroupId"
]
}
}
],
"SecurityGroupEgress": [
{
"IpProtocol": "-1",
"CidrIp": "0.0.0.0/0"
}
],
"Tags": [
{
"Key": "Name",
"Value": {
"Fn::Sub": "${RDSSecurityGroupName}"
}
}
]
}
},
"ALBSecurityGroup": {
"Type": "AWS::EC2::SecurityGroup",
"Properties": {
"GroupDescription": "ALB security group",
"GroupName": {
"Fn::Sub": "${ALBSecurityGroupName}"
},
"VpcId": {
"Ref": "VPC"
},
"SecurityGroupIngress": [
{
"IpProtocol": "tcp",
"FromPort": 7777,
"ToPort": 7777,
"CidrIp": "0.0.0.0/0"
}
],
"SecurityGroupEgress": [
{
"IpProtocol": "tcp",
"FromPort": 7777,
"ToPort": 7777,
"DestinationSecurityGroupId": {
"Fn::GetAtt": [
"DemoSecurityGroup",
"GroupId"
]
}
}
],
"Tags": [
{
"Key": "Name",
"Value": {
"Fn::Sub": "${ALBSecurityGroupName}"
}
}
]
}
},
"EFSSecurityGroup": {
"Type": "AWS::EC2::SecurityGroup",
"Properties": {
"GroupDescription": "EFS security group",
"GroupName": {
"Fn::Sub": "${EFSSecurityGroupName}"
},
"VpcId": {
"Ref": "VPC"
},
"SecurityGroupIngress": [
{
"IpProtocol": "tcp",
"FromPort": 2049,
"ToPort": 2049,
"SourceSecurityGroupId": {
"Fn::GetAtt": [
"DemoSecurityGroup",
"GroupId"
]
}
}
],
"SecurityGroupEgress": [
{
"IpProtocol": "-1",
"CidrIp": "0.0.0.0/0"
}
],
"Tags": [
{
"Key": "Name",
"Value": {
"Fn::Sub": "${EFSSecurityGroupName}"
}
}
]
}
},
"CacheSubnetGroup": {
"Type": "AWS::ElastiCache::SubnetGroup",
"Properties": {
"Description": "Cache Subnet Group",
"CacheSubnetGroupName": {
"Ref": "CacheSubnetGroupName"
},
"SubnetIds": [
{
"Ref": "PrivateSubnet1"
},
{
"Ref": "PrivateSubnet2"
}
],
"Tags": [
{
"Key": "Name",
"Value": "CacheSubnetGroup"
}
]
}
},
"DBSubnetGroup": {
"Type": "AWS::RDS::DBSubnetGroup",
"Properties": {
"DBSubnetGroupDescription": "Database subnet group",
"DBSubnetGroupName": {
"Ref": "DBSubnetGroupName"
},
"SubnetIds": [
{
"Ref": "PrivateSubnet1"
},
{
"Ref": "PrivateSubnet2"
}
],
"Tags": [
{
"Key": "Name",
"Value": "DBSubnetGroup"
}
]
}
},
"NATGateway1": {
"Type": "AWS::EC2::NatGateway",
"Properties": {
"AllocationId": {
"Fn::GetAtt": [
"EIP1",
"AllocationId"
]
},
"SubnetId": {
"Ref": "PublicSubnet1"
},
"Tags": [
{
"Key": "Name",
"Value": "NATGateway1"
}
]
}
},
"NATGateway2": {
"Type": "AWS::EC2::NatGateway",
"Properties": {
"AllocationId": {
"Fn::GetAtt": [
"EIP2",
"AllocationId"
]
},
"SubnetId": {
"Ref": "PublicSubnet2"
},
"Tags": [
{
"Key": "Name",
"Value": "NATGateway2"
}
]
}
},
"EIP1": {
"Type": "AWS::EC2::EIP",
"Properties": {
"Domain": "vpc"
}
},
"EIP2": {
"Type": "AWS::EC2::EIP",
"Properties": {
"Domain": "vpc"
}
}
},
"Outputs": {
"VPCId": {
"Value": {
"Ref": "VPC"
},
"Description": "VPC ID"
},
"PublicSubnet1Id": {
"Value": {
"Ref": "PublicSubnet1"
},
"Description": "Public Subnet 1 ID"
},
"PublicSubnet2Id": {
"Value": {
"Ref": "PublicSubnet2"
},
"Description": "Public Subnet 2 ID"
},
"PrivateSubnet1Id": {
"Value": {
"Ref": "PrivateSubnet1"
},
"Description": "Private Subnet 1 ID"
},
"PrivateSubnet2Id": {
"Value": {
"Ref": "PrivateSubnet2"
},
"Description": "Private Subnet 2 ID"
},
"DemoSecurityGroupId": {
"Value": {
"Fn::GetAtt": [
"DemoSecurityGroup",
"GroupId"
]
},
"Description": "Demo Security Group ID"
},
"CacheSecurityGroupId": {
"Value": {
"Fn::GetAtt": [
"CacheSecurityGroup",
"GroupId"
]
},
"Description": "Cache Security Group ID"
},
"RDSSecurityGroupId": {
"Value": {
"Fn::GetAtt": [
"RDSSecurityGroup",
"GroupId"
]
},
"Description": "RDS Security Group ID"
},
"ALBSecurityGroupId": {
"Value": {
"Fn::GetAtt": [
"ALBSecurityGroup",
"GroupId"
]
},
"Description": "ALB Security Group ID"
},
"EFSSecurityGroupId": {
"Value": {
"Fn::GetAtt": [
"EFSSecurityGroup",
"GroupId"
]
},
"Description": "EFS Security Group ID"
}
}
}
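The template carves four /24 subnets (10.0.1.0 through 10.0.4.0) out of the 10.0.0.0/16 VPC. The stdlib ipaddress module can sanity-check that such a plan fits inside the VPC CIDR and that the subnets do not overlap before you deploy:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
# the four subnet CIDRs used by the template
subnets = [ipaddress.ip_network(c) for c in
           ("10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24", "10.0.4.0/24")]

# every subnet must fall inside the VPC range
all_inside = all(s.subnet_of(vpc) for s in subnets)
# no two subnets may overlap
no_overlap = not any(a.overlaps(b) for i, a in enumerate(subnets) for b in subnets[i + 1:])
```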
Additional Notes
1. /etc/rc.local
Workaround for the deprecated script
On newer Amazon Linux 2 AMIs and other systemd-based Linux distributions, the /etc/rc.local script is deprecated and is no longer the way to run scripts automatically at boot, so adding the EFS mount command to /etc/rc.local may have no effect.
Instead, you can create a systemd service unit that mounts the EFS file system automatically at startup. An example service unit configuration:
- Create the unit file: as root, create a new service unit file, e.g. /etc/systemd/system/efs-mount.service:
sudo vi /etc/systemd/system/efs-mount.service
- Add the following to the unit file:
[Unit]
Description=EFS Mount

[Service]
Type=oneshot
ExecStart=/usr/bin/mount -t efs fs-12345678:/ /mnt/efs

[Install]
WantedBy=multi-user.target
- In the ExecStart line, replace fs-12345678 with your EFS file system ID and /mnt/efs with the path you want to mount at. (mount -t efs requires the amazon-efs-utils package.)
- Save and close the file.
- Enable the service by running:
sudo systemctl enable efs-mount
After the system reboots, the systemd service runs automatically and attempts to mount the specified EFS file system at the target path; check whether the EFS file system is mounted at /mnt/efs. Note that the EC2 instance's security group and network settings must allow communication with EFS.
The example above assumes your OS uses systemd as the init system; on another init system (such as SysV), adjust the service configuration accordingly.
2. Database user and permission management
- MySQL
Create a new user and grant read/write access to a given database:
CREATE USER 'username'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON database_name.* TO 'username'@'%';
FLUSH PRIVILEGES;
Replace username with the user you want to create, password with that user's password, and database_name with the database the user should be allowed to access.
Create a plain index:
CREATE INDEX index_name ON table_name (column_name);
- PostgreSQL
Create a new user and grant read/write access to a given database:
CREATE USER username WITH PASSWORD 'password';
GRANT CONNECT ON DATABASE <database_name> TO <username>;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO <username>;
Create a plain index:
CREATE INDEX index_name ON table_name (column_name);
List databases: \l
Switch to a database: \c database_name
Show the currently connected database: \c
List tables: \dt
Show a table's structure: \d table_name
Run a SQL query: SELECT * FROM table_name;
Quit the PostgreSQL client: \q
3. Install kubectl
4. Install eksctl
# Open a terminal and run one of the commands below to download the eksctl binary.
# Inside mainland China the GitHub URL usually fails to download, so use the mirror on the second line:
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
curl http://101.43.14.68/eksctl_Linux_amd64.tar.gz | tar xz -C /tmp
# copy the eksctl binary to a suitable directory (mind the PATH environment variable)
sudo mv /tmp/eksctl /usr/bin
eksctl version
EKS operations
---
AWS CLI download:
Download the installer and run it (click Next all the way through): https://awscli.amazonaws.com/AWSCLIV2.msi
Guide: https://docs.aws.amazon.com/zh_cn/cli/latest/userguide/getting-started-install.html

AWS CLI client configuration:
The config directory is C:\Users\<your-username>\.aws, e.g. C:\Users\Administrator\.aws. If the .aws folder does not exist, create it yourself, then create two text files (note: no .txt suffix; the first line of both files is [default], which is easy to forget): config and credentials.

config file contents:
[default]
region = cn-northwest-1
output = json (or text)

credentials file contents (must contain all three keys: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN; copy them from the lab website and strip the leading "export "). Example:
[default]
AWS_ACCESS_KEY_ID=ASIAVOLKKUE2ZPAA27UV
AWS_SECRET_ACCESS_KEY=eCytoFSkv6i7SvTvJe8lAIU0aGkGHb2yNtd52LuN
AWS_SESSION_TOKEN=IQoJb3JpZ2luX2VjEEsaDmNuLW5vcnRod2VzdC0xIkgwRgIhAM7jv6TRjBhqRmObwVWhyURCRLEiZPruUDqBGt/KhMRZAiEA+fR2dz/CfWGE/BKSQ/+Y63gnk+BXBfaq2yJQfBgXRPYqtgIIeBACGgwzNzQ0MjIwMjA0MDUiDEtSvmsq6ZykXcVyUSqTAiOzdQU5kflUuIoGBN84/2nAeXKjAbRxx3XpXb2r7G0KbN4fJUzI7Bw2Gp0ti/SaWZQAvwfhGrwHEZplEIXuyjNu2uK436ScH4zLxqPCB4XBLAmtkWo6V0cv2Jx7USkjj+1yibeB4aPOOREGmRiYVRjvNuebCsq/gQGiSdEuWzEAa7i2rqd4GzXOly/J8Wo0dswBFxLTcsTTn3c3julQJdYF9KHloVPa3CCro/exexzAZZShRW4p3ujtdxcIf2mnv8ar82b6v3+ICgeA3D2I+cUjraYJcXmSn0lksszLgmhm+yURfIqWSuIbbZvI97YLCDDglMori5w7QrJ7OMwVhWBdLOU2IAXCIXelkAicPHckkc5bMI7Mt5oGOtUBPNC7Y+cZz4L+HvxBwZuhABWeWt8fcfRqiT3npZSqlnHJtxR8bcnPeJMfo1k9NBaE9Wv6mxdI4XUelfdatdv0teFUaLLmazBmcLjv4+RrvlnpxUwMjh691L1N+bG85hsIDqIsftrjRj+d8Grfk6PoharXPC43akNPNJDDFtkqm4oSZWKP2yPZAPvQS9ClUw39jCuM0jz2qpQT9WcF5KynL/ySPbDB4ItVslSadjlNYEo2nD4nX1imSS3eVtoxWJfZjhZsO4t2ZGTw+6ZO3LdYcd+Am7Hf
---
Verify that the AWS CLI configuration works:
aws s3 ls
No error means it is configured.
aws sts get-caller-identity
Output (you can see the AWS account's user name wsc_cstor):
{
    "UserId": "374422020405:wsc_cstor",
    "Account": "374422020405",
    "Arn": "arn:aws-cn:sts::374422020405:federated-user/wsc_cstor"
}
---
Kubernetes (k8s)
---EKS end to end---by-wld---
Open the EKS page: https://cn-northwest-1.console.amazonaws.cn/eks/home?region=cn-northwest-1
URL note: cn-northwest-1 in the URL is the region code for China (Ningxia).

EKS cluster creation:
1. Create the cluster (requires a cluster role). Create the cluster role eksClusterRole-wld:
https://docs.amazonaws.cn/eks/latest/userguide/service_IAM_role.html#create-service-role
Select the cluster role you created, then click Next all the way through; nothing else needs to be set. The cluster stays in "Creating" for 10-15 minutes.
2. Add a node group (requires a node role). Create a managed node group:
https://docs.amazonaws.cn/eks/latest/userguide/create-managed-node-group.html
https://docs.amazonaws.cn/eks/latest/userguide/create-node-role.html#create-worker-node-role
Create the node role EKSNodeRole-wld. It needs three permission policies; missing any one of them makes node group creation fail:
AmazonEKS_CNI_Policy
AmazonEKSWorkerNodePolicy (the worker EC2 nodes do the actual work of running containers)
AmazonEC2ContainerRegistryReadOnly
https://console.amazonaws.cn/iamv2/home?region=cn-northwest-1#/roles
End role names with "-<your initials>" (e.g. eksClusterRole-wld, EKSNodeRole-wld) so that searching for "wld" quickly finds the roles you created.
Instance type: choose t3.micro (t3.medium also works). A t3.micro worker node has too few resources to run more than 2 pods; a t3.medium worker node can run 3.
AMI (Amazon Machine Image) type: the default, Amazon Linux 2 (AL2_x86_64).
Subnets: no need to change, keep the defaults.
Create a key pair; choose the .pem private key format (.ppk can only be used with PuTTY).
Allow SSH remote access from: choose "All".
The node group stays in "Creating" for about 10 minutes.
---
Associate your k8s (EKS) cluster:
aws eks update-kubeconfig --region <region-code> --name <cluster-name>
aws eks update-kubeconfig --region cn-northwest-1 --name j1
---
Related docs: Creating a kubeconfig for Amazon EKS
https://docs.aws.amazon.com/zh_cn/eks/latest/userguide/create-kubeconfig.html
Create a kubeconfig with the AWS CLI:
aws eks update-kubeconfig --region region-code --name my-cluster
Test the configuration:
kubectl get svc
---
Verify that the cluster association succeeded:
PS C:\Users\Administrator> kubectl get pod -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-f2shq             1/1     Running   0          33m
kube-system   aws-node-g674q             1/1     Running   0          33m
kube-system   aws-node-kwmbp             1/1     Running   0          33m
kube-system   coredns-694f75f5bb-qmc44   1/1     Running   0          46m
kube-system   coredns-694f75f5bb-rc99c   1/1     Running   0          46m
kube-system   kube-proxy-9d9pc           1/1     Running   0          33m
kube-system   kube-proxy-jxcjx           1/1     Running   0          33m
kube-system   kube-proxy-sg7r4           1/1     Running   0          33m

Start pods through a Deployment (the manifest below requests 9 replicas); the image is a Flask website:
kubectl.exe apply -f .\d1.yaml
--------------------d1.yaml--------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: f1              # name of the Deployment
  labels:
    app: f1-deploy      # label applied to the Deployment
spec:
  replicas: 9           # number of pods to create
  selector:
    matchLabels:
      app: f1           # label of the pods to manage
  template:
    metadata:
      labels:
        app: f1         # label applied to each pod
    spec:
      containers:       # containers to start in each pod
      - name: c1        # container name
        image: 147469501191.dkr.ecr.cn-northwest-1.amazonaws.com.cn/test:latest  # container image address
        ports:
        - containerPort: 7777  # port exposed by the container
--------------------load balancer alb.yaml--------------------
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  ports:
  - port: 7777
    targetPort: 7777    # forward traffic to this container port
    protocol: TCP
    name: alb
  selector:
    app: f1
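The notes above state that a t3.micro worker fits 2 pods and a t3.medium fits 3; with replicas: 9 in d1.yaml you can estimate the worker nodes needed. This is a back-of-envelope sketch; real pod capacity on EKS is ENI/IP-limited and varies by instance type:

```python
import math

PODS_PER_NODE = {"t3.micro": 2, "t3.medium": 3}  # figures from the notes above

def nodes_needed(replicas, instance_type):
    """Minimum worker nodes to schedule `replicas` pods at the assumed per-node capacity."""
    return math.ceil(replicas / PODS_PER_NODE[instance_type])
```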