Last updated: 2022/03/12
View the logged-in accounts and which one is active:
gcloud auth list
View the current project:
gcloud config list project
And the full configuration:
gcloud config list --all
What are components? Think of them as plugins, or as drivers for the features gcloud supports:
gcloud components list
View and set the current default zone and region:
gcloud config get-value compute/zone
gcloud config set compute/zone ZONE
gcloud config get-value compute/region
gcloud config set compute/region REGION
SSH into a VM. The first time you use ssh in Cloud Shell, it automatically generates an ssh key for you:
gcloud compute ssh vm_name --zone us-central1-f
For a Windows VM, check the serial-port output to see whether the VM is ready (i.e. RDP is usable):
gcloud compute instances get-serial-port-output vm_name
RDP cannot use ssh keys, so before logging in to Windows over RDP you must reset the password first:
gcloud compute reset-windows-password vm_name --zone xxxx --user admin
View project info:
gcloud compute project-info describe --project project_name
In Cloud Shell, first set the project to the one you intend to use; otherwise the default is Cloud Shell's own project (yes, Cloud Shell itself is a VM in a freshly created project). After that, create a VM with:
gcloud compute instances create gcelab2 --machine-type n1-standard-2 --zone $ZONE
Note: ZONE is an environment variable and must be set first: export ZONE=us-central1-a
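Keeping such settings in environment variables lets the same command line work across labs. A minimal sketch; the values here are examples, not lab requirements:

```shell
# Example values only; use the zone/machine type your lab asks for.
export ZONE=us-central1-a
export MACHINE_TYPE=n1-standard-2
# The create command would then read:
#   gcloud compute instances create gcelab2 --machine-type $MACHINE_TYPE --zone $ZONE
echo "would create gcelab2 ($MACHINE_TYPE) in $ZONE"
```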
You can always get parameter help with --help or -h:
gcloud compute instances --help
gcloud help compute
gcloud -h
If the component for a feature is missing, that feature will not work; gcloud can install it automatically, or you can install it by hand up front. The automatic install itself requires manually installing the SDK package first:
sudo apt-get install google-cloud-sdk
Then enter interactive mode; this is still a beta feature:
gcloud beta interactive
Create a Kubernetes cluster. This can take 5-10 minutes, so it is fairly slow:
gcloud container clusters create my-cluster
Note: Cloud Shell's project is not the lab's project, so if the command above needs project resources it will fail; set the project first:
gcloud config set project xxxx
After the cluster is created you still need to fetch credentials. The reason: creating the cluster does not configure your local kubectl, so get-credentials writes the cluster's endpoint and auth info into your kubeconfig:
gcloud container clusters get-credentials xxxx
Install a deployment into it, i.e. the example app:
kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
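Before exposing it, a couple of read-only kubectl status checks (standard commands; `kubectl create deployment` labels its pods `app=<name>`) can confirm the deployment actually came up:

```shell
# Both commands are read-only status checks against the cluster.
kubectl get deployment hello-server      # READY/AVAILABLE counts
kubectl get pods -l app=hello-server     # the pods the deployment created
```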
Expose the port:
kubectl expose deployment hello-server --type=LoadBalancer --port 8080
Check the status:
kubectl get service
Using the external IP and the exposed port from the output, you can now reach the running hello-server instance.
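A sketch of pulling the LoadBalancer IP straight out of the Service object instead of reading the table by eye (jsonpath is a standard kubectl output format, and the field path follows the v1 Service schema; the IP only appears once the cloud load balancer is provisioned):

```shell
# Read the external IP of the hello-server Service, then hit it.
EXTERNAL_IP=$(kubectl get service hello-server \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl "http://$EXTERNAL_IP:8080"
```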
Delete the cluster:
gcloud container clusters delete xxxx
Below are two load-balancer labs.
Set the default zone and region:
gcloud config set compute/zone us-central1-a
gcloud config set compute/region us-central1
Create three VMs. Their web index pages differ, and note the tag, which lets (network) rules be applied to them as a group later:
gcloud compute instances create www1 \
  --image-family debian-9 \
  --image-project debian-cloud \
  --zone us-central1-a \
  --tags network-lb-tag \
  --metadata startup-script="#! /bin/bash
    sudo apt-get update
    sudo apt-get install apache2 -y
    sudo service apache2 restart
    echo '<!doctype html>
www1
' | tee /var/www/html/index.html"
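What the startup script leaves on the VM can be sketched locally: the same echo | tee pipeline, pointed at a temp directory instead of /var/www/html:

```shell
# Offline sketch of the startup script's last step (www1 as the example).
outdir=$(mktemp -d)
echo '<!doctype html>
www1
' | tee "$outdir/index.html"
```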
gcloud compute instances create www2 \
  --image-family debian-9 \
  --image-project debian-cloud \
  --zone us-central1-a \
  --tags network-lb-tag \
  --metadata startup-script="#! /bin/bash
    sudo apt-get update
    sudo apt-get install apache2 -y
    sudo service apache2 restart
    echo '<!doctype html>
www2
' | tee /var/www/html/index.html"
gcloud compute instances create www3 \
  --image-family debian-9 \
  --image-project debian-cloud \
  --zone us-central1-a \
  --tags network-lb-tag \
  --metadata startup-script="#! /bin/bash
    sudo apt-get update
    sudo apt-get install apache2 -y
    sudo service apache2 restart
    echo '<!doctype html>
www3
' | tee /var/www/html/index.html"
Open the firewall. This uses the tag, so one command applies to all three VMs:
gcloud compute firewall-rules create www-firewall-network-lb \
  --target-tags network-lb-tag --allow tcp:80
Check the external IPs that were assigned:
gcloud compute instances list
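Instead of copying an IP out of the table by hand, --format can extract it directly. A sketch for www1 (the field path follows the compute instances resource schema):

```shell
# Capture www1's external IP into a variable, then hit its web server.
IP_ADDRESS=$(gcloud compute instances describe www1 \
  --format='get(networkInterfaces[0].accessConfigs[0].natIP)')
curl "http://$IP_ADDRESS"
```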
And check that the web servers are up:
curl http://[IP_ADDRESS]
To add load balancing, first reserve a dedicated external IP:
gcloud compute addresses create network-lb-ip-1 \
  --region us-central1
Define an HTTP health-check rule:
gcloud compute http-health-checks create basic-check
Define a target pool (the pool of web servers that serves external traffic, with basic-check applied):
gcloud compute target-pools create www-pool \
  --region us-central1 --http-health-check basic-check
Add the web servers to it:
gcloud compute target-pools add-instances www-pool \
  --instances www1,www2,www3
Then define how a backend is selected. For now there is only the health check, which merely guarantees that only live backends serve traffic, and selection is plain round-robin; the other lab is more elaborate and can route by URL:
gcloud compute forwarding-rules create www-rule \
  --region us-central1 \
  --ports 80 \
  --address network-lb-ip-1 \
  --target-pool www-pool
Check the current rule and its IP:
gcloud compute forwarding-rules describe www-rule --region us-central1
Use the IP you found to test that everything above is working:
while true; do curl -m1 xxx.xxx.xxx.xxx; done
At this point you should see the three web VMs taking turns serving the requests.
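The round-robin pattern can be sanity-checked offline too. In this no-network simulation, a canned response list stands in for the output of the curl loop above; with round-robin, every backend should appear the same number of times:

```shell
# Canned stand-in for `curl -m1 $LB_IP` responses; count hits per backend.
responses="www1 www2 www3 www1 www2 www3"
for r in $responses; do echo "$r"; done | sort | uniq -c
```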
To add balancing rules, first create a VM template:
gcloud compute instance-templates create lb-backend-template \
  --region=us-central1 \
  --network=default \
  --subnet=default \
  --tags=allow-health-check \
  --image-family=debian-9 \
  --image-project=debian-cloud \
  --metadata=startup-script='#! /bin/bash
    apt-get update
    apt-get install apache2 -y
    a2ensite default-ssl
    a2enmod ssl
    vm_hostname="$(curl -H "Metadata-Flavor: Google" \
      http://169.254.169.254/computeMetadata/v1/instance/name)"
    echo "Page served from: $vm_hostname" |
      tee /var/www/html/index.html
    systemctl restart apache2'
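The template's page generation can also be sketched offline. On the real VM, vm_hostname comes from the metadata server at 169.254.169.254; here a placeholder hostname stands in for that lookup:

```shell
# Placeholder value for the metadata-server lookup done on the real VM.
vm_hostname="lb-backend-group-example"
echo "Page served from: $vm_hostname"
```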
Create an instance group from this template. Since balancing is automatic, you no longer create each backend instance by hand; a shared template is required, and size gives the number of backend instances:
gcloud compute instance-groups managed create lb-backend-group \
  --template=lb-backend-template --size=2 --zone=us-central1-a
Create a firewall rule for the health checks. Google's health-check probe machines come from 130.211.0.0/22 and 35.191.0.0/16 and must be allowed through the firewall to the instances tagged allow-health-check. (Note: the firewall and the load balancer are managed together.)
gcloud compute firewall-rules create fw-allow-health-check \
  --network=default \
  --action=allow \
  --direction=ingress \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --target-tags=allow-health-check \
  --rules=tcp:80
Reserve an IP for serving external traffic (a new balancer rule gets a new IP):
gcloud compute addresses create lb-ipv4-1 \
  --ip-version=IPV4 \
  --global
Check what the IP is first:
gcloud compute addresses describe lb-ipv4-1 \
  --format="get(address)" \
  --global
Create an HTTP health-check rule:
gcloud compute health-checks create http http-basic-check \
  --port 80
Create a backend service, with the health check attached:
gcloud compute backend-services create web-backend-service \
  --protocol=HTTP \
  --port-name=http \
  --health-checks=http-basic-check \
  --global
Attach the instance group to this service:
gcloud compute backend-services add-backend web-backend-service \
  --instance-group=lb-backend-group \
  --instance-group-zone=us-central1-a \
  --global
Create a URL map (i.e. the balancer will pick a backend VM based on the URL):
gcloud compute url-maps create web-map-http \
  --default-service web-backend-service
The balancing itself is done by a proxy:
gcloud compute target-http-proxies create http-lb-proxy \
  --url-map web-map-http
Configure the proxy's forwarding rule:
gcloud compute forwarding-rules create http-content-rule \
  --address=lb-ipv4-1 \
  --global \
  --target-http-proxy=http-lb-proxy \
  --ports=80
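Once the forwarding rule is programmed (this can take a few minutes), the whole chain can be verified end to end against the reserved IP, reusing the describe command from above:

```shell
# Fetch the reserved global IP, then hit the load balancer.
LB_IP=$(gcloud compute addresses describe lb-ipv4-1 \
  --format="get(address)" --global)
curl "http://$LB_IP"
# With this lab's template, the page should read "Page served from: <backend name>".
```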