
Ansible Jinja2 Templates, Delegation, and Roles

Table of Contents

  • 1. Ansible Delegate
    • 1.1 What Is Task Delegation
    • 1.2 Task Delegation in Practice
    • 1.3 Creating a Regular User to Manage Ansible
    • 1.4 Rolling Code Releases with Ansible + HAProxy
  • 2. Ansible Vault Encryption
    • 2.1 Introduction to ansible-vault
    • 2.2 ansible-vault Usage
  • 3. Ansible Jinja2
    • 3.1 What Is Jinja2
    • 3.2 How Ansible Uses Jinja2
    • 3.3 Basic Jinja Template Syntax
    • 3.4 Jinja Template Logic
    • 3.5 Generating an Nginx Config with Jinja2
    • 3.6 Generating an HAProxy Config with Jinja2
    • 3.7 Generating Different keepalived Configs with Jinja2
  • 4. Ansible Roles
    • 4.1 Roles Overview
    • 4.2 Roles Directory Structure
    • 4.3 Role Dependencies
    • 4.4 How to Write a Role
    • 4.5 Deploying NFS with Roles
    • 4.6 Deploying rsync with Roles

1. Ansible Delegate

1.1 What Is Task Delegation

Simply put, an operation that would normally run on the current managed host is handed off (delegated) to a different host for execution.

1.2 Task Delegation in Practice

Scenario:

1. Add a hosts record, 1.1.1.1 aaa.com, on server 172.16.1.7.

2. Write the same hosts record to node 172.16.1.8 as well.

3. No other task for 172.16.1.7 is delegated to 172.16.1.8.

[[email protected]_62 delegate]# cat delegate1.yml
- hosts: 172.16.1.7
  tasks:
    - name: add webserver dns
      shell: echo 1.1.1.1 aaa.com >> /etc/hosts

    - name: delegate to host 172.16.1.8
      shell: echo 1.1.1.1 aaa.com >> /etc/hosts
      delegate_to: 172.16.1.8

    - name: add another dns record (not delegated)
      shell: echo 2.2.2.2 bbb.com >> /etc/hosts

1.3 Creating a Regular User to Manage Ansible

On the control node:

1. Create the user;

2. Generate a public/private key pair.

On the managed nodes:

1. Create the user;

2. Install the public key sent from the control node;

3. Grant sudo privileges.

[[email protected]_62 delegate]# cat delegate_user.yml
- hosts: webservers
  vars:
    - user_admin: xiaoming
    - password: $6$f6CFBj5d4J/QLCzj$SJb.acD0wJG/tQUL.sgR6eSPQ8y6h/wUF3wIzKlemXZ32v6RIp7C1i7R.9P4uuAesz1ETvN2mpVJvx7R/MI5x.
  
  tasks:
    - name: create user
      user:
        name: "{{ user_admin}}"
        password: "{{ password }}"
        generate_ssh_key: yes
        ssh_key_bits: 2048
        ssh_key_file: .ssh/id_rsa
      register: user_message
      delegate_to: localhost # delegate to the control node itself
      run_once: true  # run the task only once, not once per host
    
    - name: output user_message
      debug:
        msg: "{{user_message.ssh_public_key}}" # 查看公钥


    - name: create remote user
      user:
        name: "{{ user_admin}}"
        password: "{{ password }}"
    
    - name: create directory
      file: 
        path: "/home/{{user_admin}}/.ssh"
        owner: "{{user_admin}}"
        group: "{{user_admin}}"
        mode: "0700"
        state: directory
        recurse: yes


    - name: write pub_key to remote
      copy:
        dest: "/home/{{user_admin}}/.ssh/authorized_keys"
        content: "{{user_message.ssh_public_key}}"
        owner: "{{user_admin}}"
        group: "{{user_admin}}"
        mode: "0600"


    - name: add sudo
      lineinfile:
        dest: /etc/sudoers
        line: "{{ user_admin }} ALL=(ALL) NOPASSWD:ALL"


If you want to create an account for a different user, just pass the desired name on the command line, as shown below.
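
For example (a hedged sketch; xiaohong is a placeholder username), the user_admin variable can be overridden with extra vars at run time:

[[email protected]_62 delegate]# ansible-playbook delegate_user.yml -e "user_admin=xiaohong"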

1.4 Rolling Code Releases with Ansible + HAProxy

Steps:

1. Build the HAProxy + web_cluster environment.
2. When a web node's code needs updating, take that node offline by delegating the disable task to the HAProxy host.
3. Push the new code to the web node.
4. Once the update succeeds, bring the node back online, again delegating the enable task to the HAProxy host.
5. Repeat node by node until every node runs the new code.

Directory structure:

[[email protected]_62 haproxy]# tree -L 1
.
├── ansible.cfg
├── haproxy.cfg.j2
├── haproxy.yml
├── hosts
├── install_haproxy.yml
├── nginx.yml
└── test.conf.j2


1. The nginx.yml file:
[[email protected]_62 haproxy]# cat nginx.yml 
- hosts: webservers
  tasks: 

  - name: web_sit_code
    copy:
      content: "App version {{ ansible_eth1.ipv4.address.split('.')[-1]}}"
      dest: /opt/index.html


  - name: configure nginx
    copy:
      src: ./test.conf.j2
      dest: /etc/nginx/conf.d/test.conf
    notify: Restart nginx server



  - name: start nginx
    systemd:
      name: nginx
      state: started


  handlers:
    - name: Restart nginx server 
      systemd:
        name: nginx
        state: restarted



test.conf.j2

[[email protected]_62 haproxy]# cat test.conf.j2 
server {
	listen 5555;
	server_name ansible.bertwu.net;
	root /opt;
	location / {
		index index.html;
	}
}

2. The install_haproxy.yml file:
[[email protected]_62 haproxy]# cat install_haproxy.yml 
- hosts: test
  tasks:
    
    - name: configure haproxy_cfg
      copy:
        src: ./haproxy.cfg.j2
        dest: /etc/haproxy/haproxy.cfg
      notify: Restart haproxy


    - name: start haproxy
      systemd:
        name: haproxy
        state: started


  handlers:
    - name: Restart haproxy
      systemd:
        name: haproxy
        state: restarted


haproxy.cfg.j2

[[email protected]_62 haproxy]# cat haproxy.cfg.j2 

global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

  
    stats socket /var/lib/haproxy/stats level admin
    nbthread 8
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    #option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000


listen haproxy-stats
	mode http
	bind *:7777
	stats enable
	stats refresh 1s 
	stats hide-version
	stats uri /haproxy?stats
	stats realm "HAProxy stats"
	stats auth admin:123456
	stats admin if TRUE


frontend web
    bind *:80
    mode http

    acl ansible_domain hdr_reg(host) -i ansible.bertwu.net
    use_backend web_cluster if ansible_domain

backend web_cluster
    balance roundrobin
    server 172.16.1.7 172.16.1.7:5555 check
    server 172.16.1.8 172.16.1.8:5555 check
    server 172.16.1.9 172.16.1.9:5555 check

3. The haproxy.yml file:

[[email protected]_62 haproxy]# cat haproxy.yml 
- hosts: webservers
  serial: 1
  tasks:

    - name: print
      debug:
        msg: "{{inventory_hostname}}"

    - name: disable {{inventory_hostname}} in haproxy
      haproxy:
        state: disabled
        host: '{{ inventory_hostname }}'
        socket: /var/lib/haproxy/stats
        backend: web_cluster
      delegate_to: 172.16.1.99

    - name: sleep
      shell: sleep 5


    - name: Update nginx code
      copy:
        content: "New version {{ansible_eth1.ipv4.address.split('.')[-1]}}"
        dest: /opt/index.html

    - name: enable {{inventory_hostname}} in haproxy
      haproxy:
        state: enabled
        host: '{{ inventory_hostname }}'
        socket: /var/lib/haproxy/stats
        backend: web_cluster
        wait: yes
      delegate_to: 172.16.1.99


4. Testing: omitted here; a quick check is sketched below.
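
A simple way to watch the rolling update (a hedged sketch; it assumes the HAProxy node 172.16.1.99 listens on port 80 and the request Host header matches the ACL in haproxy.cfg):

# in one terminal, watch the responses rotate across the web nodes
[[email protected]_62 haproxy]# while true; do curl -s -H "Host: ansible.bertwu.net" http://172.16.1.99; echo; sleep 1; done

# in another terminal, run the rolling release
[[email protected]_62 haproxy]# ansible-playbook haproxy.yml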

2. Ansible Vault Encryption

2.1 Introduction to ansible-vault

Ansible Vault encrypts sensitive data files instead of leaving them in plaintext inside playbooks. For example, if part of a playbook contains plaintext passwords, you can encrypt the file; from then on it can only be viewed, edited, or executed by supplying the vault password, and it will not run without it.
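
Besides whole files, a single value can be encrypted with the encrypt_string subcommand (a hedged sketch; the variable name my_password is illustrative), which is handy when only one variable is sensitive:

[[email protected]_62 haproxy]# ansible-vault encrypt_string '123456' --name 'my_password'
New Vault password: 
Confirm New Vault password: 
my_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          ...

The resulting block can be pasted directly into a playbook or vars file.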

2.2 ansible-vault Usage

1. Use ansible-vault-2 encrypt to encrypt the haproxy.yml file:

[[email protected]_62 haproxy]# ansible-vault
ansible-vault      ansible-vault-2    ansible-vault-2.7  
[[email protected]_62 haproxy]# ansible-vault-2 encrypt haproxy.yml 
New Vault password: 
Confirm New Vault password: 
Encryption successful


# fails without the vault password
[[email protected]_62 haproxy]# ansible-playbook haproxy.yml 
ERROR! Attempting to decrypt but no vault secrets found


2. Use ansible-vault view to view the encrypted file:

[[email protected]_62 haproxy]# ansible-vault view haproxy.yml 
Vault password: 


3. Use ansible-vault edit to edit the encrypted file:

[[email protected]_62 haproxy]# ansible-vault edit haproxy.yml 
Vault password:

4. Use ansible-vault rekey to change the password of the encrypted file:

[[email protected]_62 haproxy]# ansible-vault rekey haproxy.yml 
Vault password: 
New Vault password: 
Confirm New Vault password: 
Rekey successful

5. Run the encrypted playbook by supplying the password interactively:

[[email protected]_62 haproxy]# ansible-playbook haproxy.yml --ask-vault-pass

6. You can point to a password file to avoid typing the password repeatedly:

[[email protected]_62 haproxy]# echo "123" >> passwd.txt
[[email protected]_62 haproxy]# ansible-vault edit haproxy.yml --vault-password-file=passwd.txt

7. Run the encrypted playbook the same way:

[[email protected]_62 haproxy]# ansible-playbook haproxy.yml --vault-password-file=passwd.txt

8. Alternatively, add the vault_password_file option to ansible.cfg and point it at the password file:

[[email protected]_62 haproxy]# vim ansible.cfg
[defaults]
vault_password_file = ./passwd.txt

9. Use ansible-vault decrypt to remove the encryption:

[[email protected]_62 haproxy]# ansible-vault decrypt haproxy.yml

3. Ansible Jinja2

3.1 What Is Jinja2

  • Jinja2 is a template engine for Python.
  • Ansible uses Jinja2 templates to render configuration files for managed hosts.

For example: you need to install Nginx on 10 hosts, but every host must listen on a different port. How do you solve that? One approach is sketched below.
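
A minimal sketch (the group name and ports are illustrative): give each host its own value for the same variable in the inventory, then reference that variable from a single template:

# hosts inventory: every host carries its own web_port
[webservers]
172.16.1.7 web_port=8081
172.16.1.8 web_port=8082

# nginx.conf.j2: the same template renders a different port per host
listen {{ web_port }};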

3.2 How Ansible Uses Jinja2

Ansible renders Jinja2 templates with the template module. So what does the template module actually do?

The template module works like the copy module: both copy a file to the remote host. The difference is that template first renders the variables in the file, while copy transfers the file to the managed host unchanged. The sketch below shows the contrast.
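
A minimal comparison (a hedged sketch; the file names are illustrative), given a source file containing "listen {{ web_port }};":

# copy transfers the file verbatim; the remote file still reads "listen {{ web_port }};"
- name: push config with copy
  copy:
    src: nginx.conf.j2
    dest: /tmp/nginx.conf.raw

# template renders variables first; the remote file reads e.g. "listen 8081;"
- name: push config with template
  template:
    src: nginx.conf.j2
    dest: /tmp/nginx.conf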

3.3 Basic Jinja Template Syntax

  1. To use Jinja2 in a configuration file, the playbook task must use the template module.
  2. Inside the file, reference variables such as {{ port }} or facts variables such as {{ ansible_hostname }}.

3.4 Jinja Template Logic

1. Loop expressions, used e.g. to generate Nginx or HAProxy load-balancer pools:

{% for i in EXPR %}
    ...
{% endfor %}

2. Conditional expressions, used e.g. for keepalived configuration files:

{% if EXPR %}
   ...
{% elif EXPR %}
   ...
{% endif %}

3. Comments, as shown below.
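
Jinja2 comments use the {# ... #} delimiters and are dropped entirely from the rendered output:

{# this line never appears in the rendered file #}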

3.5 Generating an Nginx Config with Jinja2

1. The template file:

[[email protected]_62 jinjar2]# cat nginx_lb.conf.j2 
upstream webservers {
  {% for host in groups["webservers"] %}  {# each host in the webservers group #}
  server {{ host }}:{{ web_port }};
  {% endfor %}
}


server {
	listen {{ http_port }};
	server_name {{ server_name }};
	location / {
	proxy_pass http://webservers;
	proxy_set_header Host $http_host;
	 }
}

2. The nginx.yml playbook:

[[email protected]_62 jinjar2]# cat nginx.yml 
- hosts: webservers
  vars:
    - web_port: 80
    - http_port: 80
    - server_name: jinjar2.com

  tasks:
    - name: copy nginx configure
      template:
        src: ./nginx_lb.conf.j2
        dest: /tmp/nginx.conf


3. The configuration file rendered on the target is as expected:

upstream webservers {
    server 172.16.1.7:80;
    server 172.16.1.8:80;
    server 172.16.1.9:80;
}

server {
	listen 80;
	server_name jinjar2.com;
	location / {
	proxy_pass http://webservers;
	proxy_set_header Host $http_host;
	 }
}

3.6 Generating an HAProxy Config with Jinja2

1. The template file:

[[email protected]_62 jinjar2]# cat haproxy.cfg.j2 
global
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats level admin
    nbthread 8
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    #option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

listen haproxy-stats
	mode http
	bind *:7777
	stats enable
	stats refresh 1s 
	stats hide-version
	stats uri /haproxy?stats
	stats realm "HAProxy stats"
	stats auth admin:123456
	stats admin if TRUE


frontend web
    bind *:{{ http_port }}
    mode http

    acl ansible_domain hdr_reg(host) -i {{ server_domain }}
    use_backend web_cluster if ansible_domain

backend web_cluster
    balance roundrobin
{% for host in groups['webservers'] %}
    server {{ host }} {{ host }}:{{ web_cluster_port }} check
{% endfor %}

2. The haproxy.yml playbook:

[[email protected]_62 jinjar2]# cat haproxy.yml 
- hosts: lbservers
  vars:
    - http_port: 80
    - web_cluster_port: 8787
    - server_domain: ansible.bertwu.net

  tasks:
    
    - name: copy haproxy configure
      template:
        src: ./haproxy.cfg.j2 
        dest: /tmp


3. The rendered result on the target (truncated):

[[email protected] tmp]# vim haproxy.cfg.j2 

    option http-server-close
    #option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

listen haproxy-stats
        mode http
        bind *:7777
        stats enable
        stats refresh 1s
        stats hide-version
        stats uri /haproxy?stats
        stats realm "HAProxy stats"
        stats auth admin:123456
        stats admin if TRUE


frontend web
    bind *:80
    mode http

    acl ansible_domain hdr_reg(host) -i ansible.bertwu.net
    use_backend web_cluster if ansible_domain

backend web_cluster
    balance roundrobin
    server 172.16.1.7 172.16.1.7:8787 check
    server 172.16.1.8 172.16.1.8:8787 check
    server 172.16.1.9 172.16.1.9:8787 check

3.7 Generating Different keepalived Configs with Jinja2

Method 1: prepare two configuration files and push a different one to each host (proxy01 gets the MASTER config, proxy02 the BACKUP config):

1. Prepare two config files, keepalived-master.conf and keepalived-backup.conf, then push them by hostname:

- name: push master config
  copy:
    src: keepalived-master.conf
    dest: /etc/keepalived/keepalived.conf
  when: ansible_hostname is match("proxy01")

- name: push backup config
  copy:
    src: keepalived-backup.conf
    dest: /etc/keepalived/keepalived.conf
  when: ansible_hostname is match("proxy02")

Method 2: prepare one configuration file and give every host the same variable with a different value:

[lbservers]
172.16.1.5 state=MASTER
172.16.1.6 state=BACKUP

# in keepalived.conf.j2
state {{ state }}

Method 3: use a Jinja conditional, so no variables need to be set:

[[email protected] ~]# cat /etc/keepalived/keepalived.conf 
global_defs {     
    router_id {{ ansible_hostname }}    # identifier of this physical node
}

vrrp_instance VI_1 {
  {% if ansible_hostname == "proxy01" %}
    state MASTER                    # role state
    priority 200                    # priority of this node in the virtual router
  {% elif ansible_hostname == "proxy02" %}
    state BACKUP                    # role state
    priority 100                    # priority of this node in the virtual router
  {% endif %}

    interface eth0                  # physical interface bound to this virtual router
    virtual_router_id 50            # virtual router ID (VRID)
    advert_int 3                    # VRRP advertisement interval, default 1s
    authentication {
        auth_type PASS              # authentication type: simple password
        auth_pass 1111              # password, at most 8 characters
    }
    virtual_ipaddress {
        10.0.0.100 dev eth0 label eth0:0      # VIP address
    }
}
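
Pushing this template needs only one task (a hedged sketch; the lbservers group comes from the method-2 inventory, and a restart handler is omitted for brevity):

- hosts: lbservers
  tasks:
    - name: render keepalived config per host
      template:
        src: keepalived.conf.j2
        dest: /etc/keepalived/keepalived.conf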

4. Ansible Roles

4.1 Roles Overview

Roles are the best way to organize playbooks. A role relies on a known file structure from which vars, tasks, and handlers are loaded automatically, so playbooks can call them cleanly. Roles are clearer and better layered than plain playbooks, though slightly more work to set up.

For example: almost every deployment first needs a time-synchronization service. Writing that time-sync task into every playbook makes the configuration bloated and hard to maintain.

With roles, the time-sync task is written once and simply called wherever it is needed, avoiding the bloat of duplicated tasks.

4.2 Roles Directory Structure

The official role directory structure must be laid out as follows; each directory that is used must contain a main.yml file:

[[email protected]_62 roles]# mkdir web/{vars,tasks,templates,handlers,files,meta} -p
[[email protected]_62 roles]# tree
.
└── web             # role name
    ├── files       # static files
    ├── handlers    # handlers (triggered tasks)
    ├── meta        # role dependencies
    ├── tasks       # tasks
    ├── templates   # template files
    └── vars        # variables

4.3 Role Dependencies

Roles can automatically pull in other roles; the dependencies are declared in the role's meta/main.yml file.

For example, when installing a wordpress project:

1. First make sure the nginx and php-fpm roles each work on their own.

2. Then declare the dependency relationship in the wordpress role.

3. The dependencies are the nginx and php-fpm roles.

# wordpress depends on the nginx and php-fpm roles
[[email protected] playbook]# cat /root/roles/wordpress/meta/main.yml
---
dependencies:
  - { role: nginx }
  - { role: php-fpm }

The wordpress role then runs the nginx and php-fpm roles first, and wordpress itself last.

4.4 How to Write a Role

1. Create the role directory structure, by hand or with ansible-galaxy role init test.

2. Implement the role's functionality, i.e. write its tasks.

3. Finally, reference the finished role from a playbook, as in the sketch below.
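
A minimal sketch of step 3 (the role name test matches the ansible-galaxy command above; the playbook name and the webservers group are illustrative):

- hosts: webservers
  roles:
    - role: test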

4.5 Deploying NFS with Roles

1. Create the nfs-server role:

[[email protected] roles]# mkdir nfs-server/{tasks,templates,handlers,files} -p
[[email protected] roles]# mkdir group_vars
[[email protected] roles]# touch group_vars/all

# directory layout
[[email protected] roles]# tree
.
├── group_vars
│   └── all
└── nfs-server
    ├── files
    ├── handlers
    ├── tasks
    └── templates


2. Write the tasks:

[[email protected] roles]# tree
.
├── ansible.cfg
├── group_vars  # variables
│   └── all   
├── hosts
├── nfs-server  # the role
│   ├── files
│   ├── handlers
│   │   └── main.yml
│   ├── tasks
│   │   └── main.yml
│   └── templates
│       └── exports.j2
└── top.yml

The contents of each file are as follows:

# additional plays can be written here for other clusters
[[email protected] roles]# cat top.yml 
- hosts: test
  roles:
    - role: nfs-server



[[email protected] roles]# cat nfs-server/tasks/main.yml 
- name: install nfs server
  yum:
    name: nfs-utils
    state: present


- name: create group
  group:
    name: "{{nfs_group}}"
    gid: "{{nfs_gid}}"
    state: present

- name: create user
  user:
    name: "{{nfs_user}}"
    uid: "{{nfs_uid}}"
    state: present
    shell: /sbin/nologin
    create_home: no

- name: create nfs share dir
  file:
    path: "{{nfs_share_dir}}"
    owner: "{{nfs_user}}"
    group: "{{nfs_group}}"
    state: directory
    recurse: yes


- name: configure nfs server
  template:
    src: exports.j2
    dest: /etc/exports
  notify: restart nfs server



- name: start nfs server
  systemd:
    name: nfs
    state: started
    enabled: yes



[[email protected] roles]# cat nfs-server/handlers/main.yml 
- name: restart nfs server
  systemd:
    name: nfs
    state: restarted


[[email protected] roles]# cat nfs-server/templates/exports.j2 
# /ansible_data 172.16.1.0/24(rw,sync,all_squash,anonuid=6666,anongid=666)

{{nfs_share_dir}} {{nfs_allow_ip_range}}(rw,sync,all_squash,anonuid={{nfs_uid}},anongid={{nfs_gid}})



[[email protected] roles]# cat group_vars/all 
nfs_share_dir: /nfs_share_data
nfs_allow_ip_range: 172.16.1.0/24
nfs_uid: 6666
nfs_gid: 6666
nfs_user: www
nfs_group: www


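Running the role is then a single command (a hedged sketch; it assumes the hosts inventory in this directory defines the test group referenced by top.yml):

[[email protected] roles]# ansible-playbook -i hosts top.yml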

4.6 Deploying rsync with Roles

1. Create the rsync-server role:

[[email protected] roles]# mkdir rsync-server/{tasks,templates,handlers,files} -p
The directory layout is as follows:
[[email protected] roles]# tree 
.
├── ansible.cfg
├── group_vars
│   └── all
├── hosts
├── rsync-server
│   ├── files
│   ├── handlers
│   │   └── main.yml
│   ├── tasks
│   │   └── main.yml
│   └── templates
│       └── rsyncd.conf.j2
└── top.yml

The contents of each file are as follows:

[[email protected] roles]# cat top.yml 
- hosts: test
  roles:
    #- role: nfs-server
    - role: rsync-server


[[email protected] roles]# cat rsync-server/tasks/main.yml 
- name: install rsync server
  yum:
    name: rsync
    state: present

- name: create group
  group: 
    name: "{{rsync_group}}"
    state: present
    system: yes


- name: create rsync user
  user:
    name: "{{rsync_user}}"
    group: "{{rsync_group}}"
    system: yes




- name: copy virtual_user passwd file
  copy:
    content: "{{rsync_virtual_user}}:123"
    dest: "{{rsync_virtual_path}}"
    mode: "0600"

- name: create rsync_module_name dir
  file:
    path: "/{{rsync_module_name}}"
    owner: "{{rsync_user}}"
    group: "{{rsync_group}}"
    state: directory
    recurse: yes


- name: configure rsync
  template:
    src: rsyncd.conf.j2
    dest: /etc/rsyncd.conf
  notify: restart rsyncd server



- name: start rsyncd server
  systemd:
    name: rsyncd
    state: started
    enabled: yes


[[email protected] roles]# cat rsync-server/templates/rsyncd.conf.j2 
uid = {{rsync_user}}
gid = {{rsync_group}}
port = {{rsync_port}}
fake super = yes
use chroot = no
max connections = {{rsync_max_conn}}
timeout = 600
ignore errors
read only = false
list = true
log file = /var/log/rsyncd.log
auth users = {{rsync_virtual_user}}
secrets file = {{ rsync_virtual_path}}
[{{rsync_module_name}}]
path = /{{rsync_module_name}}


[[email protected] roles]# cat rsync-server/handlers/main.yml 
- name: restart rsyncd server
  systemd:
    name: rsyncd
    state: restarted



[[email protected] roles]# cat group_vars/all 

# nfs
nfs_share_dir: /nfs_share_data
nfs_allow_ip_range: 172.16.1.0/24
nfs_uid: 6666
nfs_gid: 6666
nfs_user: www
nfs_group: www


# rsync
rsync_user: rsync
rsync_group: rsync
rsync_port: 873
rsync_max_conn: 200
rsync_virtual_user: rsync_backup
rsync_virtual_path: /etc/rsync.passwd
rsync_module_name: backup