CentOS 7: Offline, Automated Installation of CDH 5.16 with ansible-playbook (offline installation package link and automation scripts included)

2023-11-16

Deploying CDH 5.16 automatically with Ansible in an offline CentOS 7 environment

Preface

This article describes how to deploy a CDH cluster offline using the automation scripts I developed. You only need to set a few parameters, such as the yum repository and the IP addresses of the CDH cluster nodes, to deploy the cluster with a single command, skipping the tedious work of configuring MySQL, the NTP service, host settings, and distributing the CDH files. In my own tests, installing CDH on a three-node cluster takes less than 15 minutes.

Note: configuring logical volumes on the hosts is not part of the automation. If you have already set up logical volumes, or do not need them, read on. (I recommend setting up logical volumes first so storage can be expanded later; when time permits I will add that step to the scripts. The disks I used are provided by Ceph block storage and can be expanded freely.)

Overview

The entire CDH cluster deployment is driven by an ansible-playbook; MySQL is deployed in Docker, and the NTP service is provided by chrony.

Download the installation package

Download link: https://pan.baidu.com/s/1yosjmPLZHngL1QFbxV095g
Extraction code: w4uf
File size: 3.74 GB
The package contains: CDH 5.16, Ansible 2.9.21, Docker 20.10.7, chrony, MySQL 5.7, vim and other basic tool packages, plus the automation scripts written by the author.

Install and configure Ansible

Copy the package to the /root directory of the target host (the scm-server node) and extract it:

[root@cdh-auto-deploy-test-1 ~]# tar -xvf cdh.5.16.tar
[root@cdh-auto-deploy-test-1 ~]# ll cdh5.16
total 2984412
-rw-r--r--. 1 root root 2127506677 Jun  9 10:14 CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel
-rw-r--r--. 1 root root         41 Jun  9 10:13 CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel.sha
-rw-r--r--. 1 root root  841524318 Jun  9 10:14 cloudera-manager-centos7-cm5.16.1_x86_64.tar.gz
drwxr-xr-x. 4 root root        153 Jun 25 15:17 deployfiles
-rw-r--r--. 1 root root       5670 Jun  9 10:14 KAFKA-1.2.0.jar
-rw-r--r--. 1 root root   85897902 Jun  9 10:14 KAFKA-4.0.0-1.4.0.0.p0.1-el7.parcel
-rw-r--r--. 1 root root         41 Jun  9 10:14 KAFKA-4.0.0-1.4.0.0.p0.1-el7.parcel.sha
-rw-r--r--. 1 root root      66538 Jun  9 10:14 manifest.json
-rw-r--r--. 1 root root       5356 Jun  9 10:14 manifestkafka.json
-rw-r--r--. 1 root root    1007502 Jun  9 10:14 mysql-connector-java-5.1.47.jar

Point the yum repositories at the directories extracted from the package:

[root@cdh-auto-deploy-test-1 ~]# mkdir /etc/yum.repos.d/back
[root@cdh-auto-deploy-test-1 ~]# mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/back/
[root@cdh-auto-deploy-test-1 ~]# cp /root/cdh5.16/deployfiles/yumPackages/software.repo /etc/yum.repos.d/
[root@cdh-auto-deploy-test-1 ~]# vi /etc/yum.repos.d/software.repo

[software]
name=software
## Change the value of `baseurl` so that it points to the rpm packages.
baseurl=file:///root/cdh5.16/deployfiles/yumPackages/rpmPackages/
enabled=1
gpgcheck=0
[vim]
name=vim
## Change the value of `baseurl` so that it points to the extracted files.
baseurl=file:///root/cdh5.16/deployfiles/yumPackages/vim/
enabled=1
gpgcheck=0
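
After saving the repo file, you can quickly verify that yum sees the local repositories (an optional sanity check, not part of the original scripts):

[root@cdh-auto-deploy-test-1 ~]# yum clean all
[root@cdh-auto-deploy-test-1 ~]# yum repolist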

Install Ansible with yum:

[root@cdh-auto-deploy-test-1 ~]# yum install -y ansible vim perl 
[root@cdh-auto-deploy-test-1 ~]# ansible --version
ansible 2.9.21
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Apr  9 2019, 14:30:50) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
[root@cdh-auto-deploy-test-1 ~]#

Edit ansible.cfg and turn off SSH host key checking.

[root@cdh-auto-deploy-test-1 ~]# vim /etc/ansible/ansible.cfg

 host_key_checking = False
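
If you prefer not to open the file in an editor, the same change can be made with a one-line sed (a sketch that assumes the stock ansible.cfg, where the option is present but commented out):

[root@cdh-auto-deploy-test-1 ~]# sed -i 's/^#\?host_key_checking.*/host_key_checking = False/' /etc/ansible/ansible.cfg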

Edit the Ansible hosts file to define the inventory; replace the host IPs and passwords with your own.

[root@cdh-auto-deploy-test-1 ~]# vim /etc/ansible/hosts

[scm_server]
10.0.5.77 ansible_host=10.0.5.77 hostname=cdh1 ansible_user=root ansible_ssh_pass=12345 ansible_connection=local

[scm_agent]
10.0.5.74 ansible_host=10.0.5.74 hostname=cdh2 ansible_user=root ansible_ssh_pass=12345

[cdh:children]
scm_server
scm_agent

[db:children]
scm_server
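
To confirm that the inventory parses and the groups contain the expected hosts, you can, for example, list the members of the cdh group:

[root@cdh-auto-deploy-test-1 ~]# ansible cdh --list-hosts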

Test connectivity by pinging all the nodes:

[root@cdh-auto-deploy-test-1 ~]# ansible all -m ping
cdh1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
cdh2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
[root@cdhdeploytest-2 yumPackages]#

Install CDH 5.16

Edit /root/cdh5.16/deployfiles/vars.yaml and set the parameters:

## Variables file
#

# vm_type is the virtual machine type of the Ansible-managed nodes.
# In the current version the only valid value is "centos7".
vm_type: "centos7"

# If an NTP server is available, put its IP here; otherwise leave it empty.
# When empty, the other nodes synchronize their clocks against the node running the scm-server.
ntp_server:

# Directory containing the (extracted) installation package.
cdh_packages_dir: "/root"

# Data disk directory. Docker volumes, the CDH SCM data, etc. are stored under this directory.
cdh_data_dir: "/opt"
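
Before the real run it can be worth letting Ansible validate the playbook without executing anything (an optional pre-flight check):

[root@cdh-auto-deploy-test-1 ~]# ansible-playbook /root/cdh5.16/deployfiles/deploy-cdh.yaml --syntax-check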

Install CDH 5.16 automatically with ansible-playbook:

[root@cdh-auto-deploy-test-1 ~]# ansible-playbook /root/cdh5.16/deployfiles/deploy-cdh.yaml

Note:
  /root/cdh5.16/deployfiles/deploy-cdh.yaml is the Ansible playbook that automates the CDH deployment.
  You can modify this playbook to customize your own CDH deployment.
  The MySQL user password and the Docker volume directories are also changed in this file.
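
Individual values from vars.yaml can also be overridden on the command line without editing the file, since extra vars take precedence in Ansible; for example, to point the cluster at an external NTP server (the address below is purely illustrative):

[root@cdh-auto-deploy-test-1 ~]# ansible-playbook /root/cdh5.16/deployfiles/deploy-cdh.yaml -e ntp_server=10.0.5.10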

When the playbook finishes, run tail -200f /opt/cloudera-manager/cm-5.16.1/log/cloudera-scm-server/cloudera-scm-server.log to follow the scm-server log. After a few minutes, output like the following indicates that the server has finished starting:

Started SelectChannelConnector@0.0.0.0:7180
Started Jetty server.
ScmActive completed successfully.
Discovered parcel on CM server: CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel
Created torrent file: /opt/cloudera/parcel-repo/CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel.torrent
 Creating single-file torrent for CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel...
 Hashing data from CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel with 4 threads (4058 pieces)...
   ... 10% complete
   ... 20% complete
   ... 30% complete
   ... 40% complete
   ... 50% complete
   ... 60% complete
   ... 70% complete
   ... 80% complete
   ... 90% complete
 Hashed 1 file(s) (2127506677 bytes) in 4058 pieces (4058 expected) in 6605.6ms.
 Single-file torrent information:
   Torrent name: CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel
   Announced at: Seems to be trackerless
   Created on..: Fri Jun 25 16:49:59 CST 2021
   Created by..: cm-server
   Pieces......: 4058 piece(s) (524288 byte(s)/piece)
   Total size..: 2,127,506,677 byte(s)
calParcelManagerImpl: Discovered parcel on CM server: KAFKA-4.0.0-1.4.0.0.p0.1-el7.parcel
calParcelManagerImpl: Created torrent file: /opt/cloudera/parcel-repo/KAFKA-4.0.0-1.4.0.0.p0.1-el7.parcel.torrent
 Creating single-file torrent for KAFKA-4.0.0-1.4.0.0.p0.1-el7.parcel...
 Hashing data from KAFKA-4.0.0-1.4.0.0.p0.1-el7.parcel with 4 threads (164 pieces)...
   ... 10% complete
   ... 20% complete
   ... 30% complete
   ... 40% complete
   ... 50% complete
   ... 60% complete
   ... 70% complete
   ... 80% complete
   ... 90% complete
 Hashed 1 file(s) (85897902 bytes) in 164 pieces (164 expected) in 277.9ms.
 Single-file torrent information:
   Torrent name: KAFKA-4.0.0-1.4.0.0.p0.1-el7.parcel
   Announced at: Seems to be trackerless
   Created on..: Fri Jun 25 16:50:06 CST 2021
   Created by..: cm-server
   Pieces......: 164 piece(s) (524288 byte(s)/piece)
   Total size..: 85,897,902 byte(s)

You can now log in at cdh1:7180 and deploy the cluster. Username: admin, password: admin.
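
A quick way to confirm that the console is reachable before opening a browser (assuming curl is installed and cdh1 resolves through /etc/hosts):

[root@cdh-auto-deploy-test-1 ~]# curl -sI http://cdh1:7180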

Notes

/opt/cloudera-manager/cm-5.16.1/etc/init.d/cloudera-scm-agent status    # check the agent status
/opt/cloudera-manager/cm-5.16.1/etc/init.d/cloudera-scm-server status   # check the server status
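
The same check can be run on every node at once with an Ansible ad-hoc command (this assumes cdh_data_dir was left at /opt, as in the example above):

[root@cdh-auto-deploy-test-1 ~]# ansible cdh -m shell -a "/opt/cloudera-manager/cm-5.16.1/etc/init.d/cloudera-scm-agent status"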

Appendix: the automation scripts included in the package

The scripts were written in a hurry and are not particularly elegant; suggestions for improvement are welcome in the comments!

deploy-cdh.yaml

---
- hosts: all

  vars_files:
    - ./vars.yaml

  tasks:

    - name: send centos7init to all node
      synchronize:
        src: "{{cdh_packages_dir}}/cdh5.16/deployfiles/centos7init.sh"
        dest: "{{cdh_data_dir}}/"

    - name: init all node with centos7init
      shell: bash "{{cdh_data_dir}}/centos7init.sh"
      register: initinfo
      ignore_errors: yes

    - name: initinfo
      debug:
        msg:
          - "return code is {{initinfo.rc}}"
          - "{{initinfo.stdout_lines}}"

    - name: set hostname
      shell: hostnamectl set-hostname "{{hostname|quote}}"

## This task is disabled; use the task of the same name below instead, because the blockinfile module avoids inserting duplicate entries.
#    - name: set hosts
#      shell: echo "{{item.key}} {{item.value.hostname}}" >> /etc/hosts
#      with_dict:
#        - "{{hostvars}}"

    - name: set hosts
      blockinfile:
        path: /etc/hosts
        block: |
          {% for item in hostvars %}
          {{hostvars[item]['ansible_host']}} {{hostvars[item]['hostname']}}
          {% endfor %}
        state: present

    - name: disable selinux
      shell: setenforce 0
      ignore_errors: yes

    - name: set selinux config
      shell: sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
      ignore_errors: yes

    - name: shutdown firewalld
      shell: systemctl stop firewalld
      ignore_errors: yes

    - name: disable firewall
      shell: systemctl disable firewalld
      ignore_errors: yes

    - name: send yum packages to all node
      synchronize:
        src: "{{cdh_packages_dir}}/cdh5.16/deployfiles/yumPackages"
        dest: /root/

    - name: check yum repo
      shell: ls /etc/yum.repos.d | grep software.repo
      register: repos
      ignore_errors: yes

    - name: make back repo
      shell: mkdir -p /etc/yum.repos.d/back
      ignore_errors: yes

    - name: back yum repo
      shell: mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/back/
      when: repos.rc != 0
      ignore_errors: yes

    - name: set yum repo
      synchronize:
        src: /root/yumPackages/software.repo
        dest: /etc/yum.repos.d/software.repo
      when: repos.rc != 0

    - name: install openjdk1.8, chrony, psmisc with yum
      yum:
        name:
          - perl
          - psmisc
          - chrony
          - java-1.8.0-openjdk.x86_64
        state: present

    - name: set openjdk ssl
      shell: sed -i 's/jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4/jdk.tls.disabledAlgorithms=RC4/g' /usr/lib/jvm/jre-1.8.0-openjdk/lib/security/java.security

    - name: set chrony.conf
      template:
        src: chrony.conf.j2
        dest: "/etc/chrony.conf"
      when: vm_type == "centos7"

    - name: sync time
      shell: systemctl {{item}}
      with_items:
        - "enable chronyd"
        - "restart chronyd"

- hosts: db

  vars_files:
    - ./vars.yaml

  tasks:

    - name: install docker-ce
      yum:
        name:
          - docker-ce
        state: present

    - name: enable docker
      shell: systemctl enable docker

    - name: check docker status
      shell: systemctl status docker
      register: dockerstat
      ignore_errors: yes

    - name: check mysql status
      shell: docker ps -a | grep mysql
      register: mysqlstat
      ignore_errors: yes

    - name: print mysqlstat.rc
      debug:
        msg:
          - "{{mysqlstat.rc}}"
          - "{{mysqlstat.stdout_lines}}"

    - name: start docker-ce
      shell: service docker restart
      when: dockerstat.rc != 0

    - name: send mysql.tar
      synchronize:
        src: "{{cdh_packages_dir}}/cdh5.16/deployfiles/mysql.tar"
        dest: "{{cdh_data_dir}}/mysql.tar"
      when: mysqlstat.rc != 0

    - name: load mysql image
      shell: docker load < "{{cdh_data_dir}}"/mysql.tar
      when: mysqlstat.rc != 0

    - name: make dir for mysql volume
      command:
        cmd: mkdir -p {{item}}
      with_items:
        - "{{cdh_data_dir}}/mysql/mysql-config"
        - "{{cdh_data_dir}}/mysql/mysql-data"
      when: mysqlstat.rc != 0

    - name: set my.cnf
      template:
        src: my.cnf.j2
        dest: "{{cdh_data_dir}}/mysql/mysql-config/my.cnf"

    - name: docker restart mysql
      shell: docker restart mysql
      when: mysqlstat.rc == 0

    - name: docker run mysql
      shell: docker run -it -d --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD="1qaz2wsx" -v "{{cdh_data_dir}}"/mysql/mysql-config:/etc/mysql -v "{{cdh_data_dir}}"/mysql/mysql-data:/var/lib/mysql mysql:5.7.34
      register: result
      when: mysqlstat.rc != 0

    - name: wait for mysql ready
      wait_for:
        timeout: 300
        port: 3306
        delay: 60
        state: drained

    - name: copy init-cdh-server-mysql.sh
      synchronize:
        src: "{{cdh_packages_dir}}/cdh5.16/deployfiles/init-cdh-server-mysql.sh"
        dest: "{{cdh_data_dir}}/mysql/mysql-config/"
      when: mysqlstat.rc != 0

    - name: create databases and user
      shell: docker exec mysql /bin/bash /etc/mysql/init-cdh-server-mysql.sh
#      when: result.rc == 0  mysqlstat.rc == 0
      ignore_errors: yes


- hosts: cdh

  vars_files:
    - ./vars.yaml

  tasks:
    - name: mkdir java
      shell: mkdir -p /usr/share/java

    - name: copy mysql-connector-java
      synchronize:
        src: "{{cdh_packages_dir}}/cdh5.16/mysql-connector-java-5.1.47.jar"
        dest: /usr/share/java/mysql-connector-java.jar

- hosts: scm_server

  vars_files:
    - ./vars.yaml

  tasks:
    - name: create dir
      shell: mkdir -p "{{cdh_data_dir}}/{{item}}"
      with_items:
        - cloudera-manager
        - cloudera/parcel-repo
        - cloudera/parcels

    - name: extract cloudera manager
      shell: tar -zxvf "{{cdh_packages_dir}}"/cdh5.16/cloudera-manager-centos7-cm5.16.1_x86_64.tar.gz -C "{{cdh_data_dir}}"/cloudera-manager/

    - name: cp parcel-repo
      shell: cp "{{cdh_packages_dir}}/cdh5.16/{{item}}" "{{cdh_data_dir}}/cloudera/parcel-repo/"
      with_items:
        - "CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel"
        - "CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel.sha"
        - "KAFKA-4.0.0-1.4.0.0.p0.1-el7.parcel"
        - "KAFKA-4.0.0-1.4.0.0.p0.1-el7.parcel.sha"
        - "manifest.json"
        - "manifestkafka.json"
        - "KAFKA-1.2.0.jar"

    - name: fix agent file
      shell: sed -i "s/server_host=localhost/server_host={{hostname}}/g" "{{cdh_data_dir}}"/cloudera-manager/cm-5.16.1/etc/cloudera-scm-agent/config.ini

    - name: set scm-server dbproperties
      shell: sed -i "s/^.*com.cloudera.cmf.db.{{ item.name }}=.*$/com.cloudera.cmf.db.{{ item.name }}={{ item.value }}/" "{{cdh_data_dir}}"/cloudera-manager/cm-5.16.1/etc/cloudera-scm-server/db.properties
      with_items:
        - { name: 'type', value: 'mysql' }
        - { name: 'host', value: 'cdh1' }
        - { name: 'name', value: 'cmf' }
        - { name: 'user', value: 'cmf' }
        - { name: 'password', value: '1qaz2wsx' }
        - { name: 'setupType', value: 'EXTERNAL' }
      ignore_errors: yes
       
- hosts: scm_agent

  vars_files:
    - ./vars.yaml

  tasks:
    - name: cp files
      synchronize:
        src: "{{cdh_data_dir}}/{{ item }}"
        dest: "{{cdh_data_dir}}/"
      with_items:
        - "cloudera"
        - "cloudera-manager"


- hosts: cdh

  vars_files:
    - ./vars.yaml

  tasks:
    - name: del cloudera-scm
      user:
        name: cloudera-scm
        state: absent

    - name: add user
      shell: useradd --system --home="{{cdh_data_dir}}"/cloudera-manager/cm-5.16.1/run/cloudera-scm-server/ --no-create-home --shell=/bin/false --comment "Cloudera SCM User" cloudera-scm

    - name: change owner of file
      shell: chown -R cloudera-scm:cloudera-scm "{{cdh_data_dir}}/{{item}}"
      with_items:
        - "cloudera"
        - "cloudera-manager"

- hosts: scm_server

  vars_files:
    - ./vars.yaml

  tasks:
    - name: copy scm-server
      synchronize:
        src: "{{cdh_data_dir}}/cloudera-manager/cm-5.16.1/etc/init.d/cloudera-scm-server"
        dest: "/etc/init.d/"

    - name: set scm-server enable
      shell: "chkconfig {{item}}"
      with_items:
        - "--add cloudera-scm-server"
        - "cloudera-scm-server on"

    - name: set scm-server env parameters
      shell: sed -i 's?CMF_DEFAULTS=${CMF_DEFAULTS:-/etc/default}?CMF_DEFAULTS=${CMF_DEFAULTS:-/opt/cloudera-manager/cm-5.16.1/etc/default}?g' /etc/init.d/cloudera-scm-server

    - name: start server
      shell: "{{cdh_data_dir}}/cloudera-manager/cm-5.16.1/etc/init.d/cloudera-scm-server restart"

- hosts: cdh

  vars_files:
    - ./vars.yaml

  tasks:
    - name: copy scm-agent
      synchronize:
        src: "{{cdh_data_dir}}/cloudera-manager/cm-5.16.1/etc/init.d/cloudera-scm-agent"
        dest: "/etc/init.d/"

    - name: set scm-agent enable
      shell: "chkconfig {{item}}"
      with_items:
        - "--add cloudera-scm-agent"
        - "cloudera-scm-agent on"

    - name: set scm-agent env parameters
      shell: sed -i 's?CMF_DEFAULTS=${CMF_DEFAULTS:-/etc/default}?CMF_DEFAULTS=${CMF_DEFAULTS:-/opt/cloudera-manager/cm-5.16.1/etc/default}?g' /etc/init.d/cloudera-scm-agent

    - name: wait for scm-server
      wait_for:
        timeout: 300
        port: 7182
        delay: 60
        state: drained

    - name: start agent
      shell: "{{cdh_data_dir}}/cloudera-manager/cm-5.16.1/etc/init.d/cloudera-scm-agent restart"
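
If a play fails partway through, the playbook can usually be re-run as-is (most tasks are guarded by checks), or restricted to the affected host group with --limit; for example, to repeat the run against only the agent nodes:

[root@cdh-auto-deploy-test-1 ~]# ansible-playbook /root/cdh5.16/deployfiles/deploy-cdh.yaml --limit scm_agent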

centos7init.sh

#!/bin/bash
cat << EOF
+--------------------------------------------------------------+
|             === Welcome to CentOS  System init ===           |
+--------------------------------------------------------------+
EOF

#set transparent_hugepage
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo 'echo never > /sys/kernel/mm/transparent_hugepage/defrag' >> /etc/rc.local
echo 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' >> /etc/rc.local


#set limits
cat > /etc/security/limits.conf << EOF
root soft nproc 65535
root hard nproc 65535
* soft nofile 1024000
* hard nofile 1024000
EOF

cat << EOF
+-------------------set limits success-----------------+
EOF


#set sysctl
cat > /etc/sysctl.conf << EOF
fs.file-max = 1024000
vm.swappiness = 0
kernel.sysrq = 0 
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv4.neigh.default.gc_stale_time=120
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce=2
net.ipv4.conf.all.arp_announce=2
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
EOF

/sbin/sysctl -p

cat << EOF
+-------------------set sysctl success-----------------+
EOF

init-cdh-server-mysql.sh

#!/bin/bash

create_cmf="create database cmf DEFAULT CHARACTER SET utf8 COLLATE 'utf8_general_ci';"
create_hive="create database hive DEFAULT CHARACTER SET utf8 COLLATE 'utf8_general_ci';"
create_oozie="create database oozie DEFAULT CHARACTER SET utf8 COLLATE 'utf8_general_ci';"
create_hue="create database hue DEFAULT CHARACTER SET utf8 COLLATE 'utf8_general_ci';"
create_amon="create database amon DEFAULT CHARACTER SET utf8 COLLATE 'utf8_general_ci';"
create_activity="create database activity DEFAULT CHARACTER SET utf8 COLLATE 'utf8_general_ci';"
create_reports="create database reports DEFAULT CHARACTER SET utf8 COLLATE 'utf8_general_ci';"
create_audit="create database audit DEFAULT CHARACTER SET utf8 COLLATE 'utf8_general_ci';"
create_metadata="create database metadata DEFAULT CHARACTER SET utf8 COLLATE 'utf8_general_ci';"

access_auth_cmf="grant all on cmf.* TO 'cmf'@'%' IDENTIFIED BY '1qaz2wsx';"
access_auth_hive="grant all on hive.* TO 'hive'@'%' IDENTIFIED BY '1qaz2wsx';"
access_auth_oozie="grant all on oozie.* TO 'oozie'@'%' IDENTIFIED BY '1qaz2wsx';"
access_auth_hue="grant all on hue.* TO 'hue'@'%' IDENTIFIED BY '1qaz2wsx';"
access_auth_amon="grant all on amon.* TO 'amon'@'%' IDENTIFIED BY '1qaz2wsx';"
access_auth_activity="grant all on activity.* TO 'activity'@'%' IDENTIFIED BY '1qaz2wsx';"
access_auth_reports="grant all on reports.* TO 'reports'@'%' IDENTIFIED BY '1qaz2wsx';"
access_auth_audit="grant all on audit.* TO 'audit'@'%' IDENTIFIED BY '1qaz2wsx';"
access_auth_metadata="grant all on metadata.* TO 'metadata'@'%' IDENTIFIED BY '1qaz2wsx';"
access_auth_remote="grant all PRIVILEGES on *.* to 'root'@'%' identified by '1qaz2wsx' with grant option;"
flush="flush privileges;"

sqls=("${create_cmf}" "${create_hive}" "${create_oozie}" "${create_hue}" "${create_amon}" "${access_auth_cmf}" "${access_auth_hive}" "${access_auth_oozie}" "${access_auth_hue}" "${access_auth_amon}" "${access_auth_remote}" "${flush}")

for i in "${sqls[@]}"
do
  mysql -uroot -p1qaz2wsx -e "${i}";
done
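
To confirm that the databases and grants were created, you can list them from inside the container (this uses the root password set in deploy-cdh.yaml):

[root@cdh-auto-deploy-test-1 ~]# docker exec mysql mysql -uroot -p1qaz2wsx -e "show databases;"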

Ansible template for the MySQL configuration file: my.cnf.j2

[mysqld]

skip-name-resolve
pid-file        = /var/run/mysqld/mysqld.pid
socket          = /var/run/mysqld/mysqld.sock
datadir         = /var/lib/mysql
#log-error      = /var/log/mysql/error.log
# By default we only accept connections from localhost
#bind-address   = 127.0.0.1
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

server-id=1
log-bin=mysql-bin
binlog_format = ROW
binlog_row_image = full
max_binlog_size = 1G
max_allowed_packet = 2G
log_timestamps=SYSTEM
wait_timeout=2880000
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=6442450944
max_allowed_packet =67108864

default-time_zone = '+8:00'
character-set-server=utf8
max_connections = 3000
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

#[client]
#default-character-set=utf8

[mysql]
#default-character-set=utf8

Ansible template for the chrony configuration file: chrony.conf.j2

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
{% if ntp_server is none or not ntp_server %}
server {{groups['scm_server'][0]}} iburst
{% else %}
server {{ntp_server}} iburst
{% endif %}
# Record the rate at which the system clock gains/loses time.

driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).
rtcsync

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Allow NTP client access from local network.
#allow 192.168.0.0/16
{% if ntp_server is none or not ntp_server %}
{% if ansible_host == groups['scm_server'][0] %}
{% set list1 = ansible_host.split('.') %}
allow {{list1[0]}}.{{list1[1]}}.{{list1[2]}}.0/24
local stratum 10
{% endif %}
{% endif %}

# Serve time even if not synchronized to a time source.
#local stratum 10
# Specify file containing keys for NTP authentication.
#keyfile /etc/chrony.keys

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking
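
Once chronyd has been restarted everywhere, time synchronization can be checked across the cluster, for example with an ad-hoc command:

[root@cdh-auto-deploy-test-1 ~]# ansible cdh -m shell -a "chronyc sources"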

Deploy services

After logging in to the home page, you can deploy an example cluster as follows.

Steps:

  • Accept the license agreement and click Continue.

  • Choose the free edition and click Continue.

  • Click Continue.

  • Enter the node IPs, separated by commas, and click Search.
    (screenshot: pictures/Snap2.jpg)

  • Check the hosts and click Continue.
    (screenshots: pictures/Snap3.jpg, pictures/Snap4.jpg)

  • Under the other parcels, also select Kafka, then click Continue.
    (screenshots: pictures/Snap5.jpg, pictures/Snap6.jpg)

  • When the parcels have been installed and the host inspection has finished, click Finish.
    (screenshots: pictures/Snap7.jpg, pictures/Snap8.jpg)

  • For the cluster setup, choose Custom Services and select the services as shown, then click Continue.
    (screenshots: pictures/Snap10.jpg, pictures/Snap-csum.jpg)

  • In the cluster configuration, choose the nodes each service will be assigned to; the screenshots use six nodes as an example.
    (screenshots: pictures/hadoopServers1.jpg, pictures/hadoopServers2.jpg)

  • Enter the database and user names for each service (the Navigator-related services all use the cmf database with user cmf; the databases and users for the other services are shown in the screenshot), test the connections, and click Continue.
    (screenshot: pictures/Snap14.jpg)

  • Change all data directories so that they live on the mounted data volume, here under /opt.
    (screenshots: pictures/Snap15.jpg, pictures/Snap16.jpg, pictures/Snap17.jpg)

  • Click Continue and check the deployment result.
    If the deployment stops or fails, look at the error report and fix the reported problems.

This article is the author's original work; please credit the source when reposting.

