
Ansible Jinja2 Templates, Delegation, and Roles

Contents

  • 1. Ansible delegate
    • 1.1 What is task delegation
    • 1.2 Task delegation in practice
    • 1.3 Managing Ansible with a regular user
    • 1.4 Rolling code releases with Ansible + HAProxy
  • 2. Ansible Vault encryption
    • 2.1 Introduction to ansible-vault
    • 2.2 ansible-vault usage
  • 3. Ansible Jinja2
    • 3.1 What is Jinja2
    • 3.2 How Ansible uses Jinja2
    • 3.3 Basic Jinja template syntax
    • 3.4 Jinja template logic
    • 3.5 Generating an nginx configuration file with Jinja2
    • 3.6 Generating an haproxy configuration file with Jinja2
    • 3.7 Generating different keepalived configuration files with Jinja2
  • 4. Ansible Roles
    • 4.1 Roles overview
    • 4.2 Roles directory structure
    • 4.3 Roles dependencies
    • 4.4 How to write roles
    • 4.5 Deploying NFS with roles
    • 4.6 Deploying rsync with roles

1. Ansible delegate

1.1 What is task delegation

Simply put, an operation that would normally run on the current managed host is handed off to some other host to execute.

1.2 Task delegation in practice

Scenario:

1. Add a hosts record on the 172.16.1.7 server: 1.1.1.1 aaa.com

2. Also write the same hosts record to the 172.16.1.8 node.

3. Apart from this one task, none of 172.16.1.7's other tasks are delegated to 172.16.1.8.

[root@manager_62 delegate]# cat delegate1.yml 
- hosts: 172.16.1.7
  tasks:
    - name: add webserver dns
      shell: echo 1.1.1.1 aaa.com >> /etc/hosts

    - name: delegate to host 172.16.1.8
      shell: echo 1.1.1.1 aaa.com >> /etc/hosts
      delegate_to: 172.16.1.8

    - name: add webserver dns
      shell: echo 2.2.2.2 bbb.com >> /etc/hosts
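Running the play and spot-checking that the delegated record landed on 172.16.1.8 (a quick sketch; the inventory file name hosts is an assumption):

[root@manager_62 delegate]# ansible-playbook -i hosts delegate1.yml
[root@manager_62 delegate]# ansible 172.16.1.8 -i hosts -m shell -a 'tail -1 /etc/hosts'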

1.3 Managing Ansible with a regular user

On the control node:

1. Create the user;

2. Generate an SSH key pair.

On the managed nodes:

1. Create the user;

2. Receive the public key sent from the control node;

3. Grant sudo privileges.

[root@manager_62 delegate]# cat delegate_user.yml
- hosts: webservers
  vars:
    - user_admin: xiaoming
    - password: $6$f6CFBj5d4J/QLCzj$SJb.acD0wJG/tQUL.sgR6eSPQ8y6h/wUF3wIzKlemXZ32v6RIp7C1i7R.9P4uuAesz1ETvN2mpVJvx7R/MI5x.

  tasks:
    - name: create user
      user:
        name: "{{ user_admin }}"
        password: "{{ password }}"
        generate_ssh_key: yes
        ssh_key_bits: 2048
        ssh_key_file: .ssh/id_rsa
      register: user_message
      delegate_to: localhost   # delegate to the control node
      run_once: true           # run only once

    - name: output user_message
      debug:
        msg: "{{ user_message.ssh_public_key }}"   # show the generated public key

    - name: create remote user
      user:
        name: "{{ user_admin }}"
        password: "{{ password }}"

    - name: create directory
      file:
        path: "/home/{{ user_admin }}/.ssh"
        owner: "{{ user_admin }}"
        group: "{{ user_admin }}"
        mode: "0700"
        state: directory
        recurse: yes

    - name: write pub_key to remote
      copy:
        dest: "/home/{{ user_admin }}/.ssh/authorized_keys"
        content: "{{ user_message.ssh_public_key }}"
        owner: "{{ user_admin }}"
        group: "{{ user_admin }}"
        mode: "0600"

    - name: add sudo
      lineinfile:
        dest: /etc/sudoers
        line: "{{ user_admin }} ALL=(ALL) NOPASSWD:ALL"

To create an account for a different user, simply override the variable with the desired username.
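For example, the variable can be overridden with extra vars at run time (the username zhangsan here is just an illustration):

[root@manager_62 delegate]# ansible-playbook delegate_user.yml -e "user_admin=zhangsan"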

1.4 Rolling code releases with Ansible + HAProxy

Steps:

1. First build the HAProxy + web_cluster environment.
2. When a web node's code needs updating, take the node offline by delegating the "disable" task to the HAProxy host.
3. Push the new code to the web node.
4. After the code update succeeds, bring the node back online by delegating the "enable" task to the HAProxy host.
5. Repeat node by node until every node's code has been updated.

Directory structure:

[root@manager_62 haproxy]# tree -L 1
.
├── ansible.cfg
├── haproxy.cfg.j2
├── haproxy.yml
├── hosts
├── install_haproxy.yml
├── nginx.yml
└── test.conf.j2
1. The nginx.yml file:
[root@manager_62 haproxy]# cat nginx.yml 
- hosts: webservers
  tasks: 
    - name: web_sit_code
      copy:
        content: "App version {{ ansible_eth1.ipv4.address.split('.')[-1] }}"
        dest: /opt/index.html

    - name: configure nginx
      copy:
        src: ./test.conf.j2
        dest: /etc/nginx/conf.d/test.conf
      notify: Restart nginx server

    - name: start nginx
      systemd:
        name: nginx
        state: started

  handlers:
    - name: Restart nginx server 
      systemd:
        name: nginx
        state: restarted

The test.conf.j2 file:

[root@manager_62 haproxy]# cat test.conf.j2 
server {
    listen 5555;
    server_name ansible.bertwu.net;
    root /opt;

    location / {
        index index.html;
    }
}
2. The install_haproxy.yml file:
[root@manager_62 haproxy]# cat install_haproxy.yml 
- hosts: test
  tasks:
    - name: configure haproxy_cfg
      copy:
        src: ./haproxy.cfg.j2
        dest: /etc/haproxy/haproxy.cfg
      notify: Restart haproxy

    - name: start haproxy
      systemd:
        name: haproxy
        state: started

  handlers:
    - name: Restart haproxy
      systemd:
        name: haproxy
        state: restarted

The haproxy.cfg.j2 file:

[root@manager_62 haproxy]# cat haproxy.cfg.j2 
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    stats socket /var/lib/haproxy/stats level admin
    nbthread 8

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    #option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

listen haproxy-stats
    mode http
    bind *:7777
    stats enable
    stats refresh 1s 
    stats hide-version
    stats uri /haproxy?stats
    stats realm "HAProxy stats"
    stats auth admin:123456
    stats admin if TRUE

frontend web
    bind *:80
    mode http

    acl ansible_domain hdr_reg(host) -i ansible.bertwu.net
    use_backend web_cluster if ansible_domain

backend web_cluster
    balance roundrobin
    server 172.16.1.7 172.16.1.7:5555 check 
    server 172.16.1.8 172.16.1.8:5555 check 
    server 172.16.1.9 172.16.1.9:5555 check 

3. The haproxy.yml file:

[root@manager_62 haproxy]# cat haproxy.yml 
- hosts: webservers
  serial: 1    # roll through one node at a time
  tasks:
    - name: print
      debug:
        msg: "{{ inventory_hostname }}"

    - name: disable {{ inventory_hostname }} in haproxy
      haproxy:
        state: disabled
        host: '{{ inventory_hostname }}'
        socket: /var/lib/haproxy/stats
        backend: web_cluster    # must match the backend name in haproxy.cfg.j2
      delegate_to: 172.16.1.99

    - name: sleep
      shell: sleep 5

    - name: Update nginx code
      copy:
        content: "New version {{ ansible_eth1.ipv4.address.split('.')[-1] }}"
        dest: /opt/index.html

    - name: enable {{ inventory_hostname }} in haproxy
      haproxy:
        state: enabled
        host: '{{ inventory_hostname }}'
        socket: /var/lib/haproxy/stats
        backend: web_cluster
        wait: yes
      delegate_to: 172.16.1.99

4. Testing.
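One simple way to observe the rolling release (a sketch, assuming 172.16.1.99 is the HAProxy node as in the playbook above): poll the frontend in a loop while the playbook runs and watch the version string switch node by node.

[root@manager_62 haproxy]# while true; do curl -s -H "Host: ansible.bertwu.net" http://172.16.1.99; echo; sleep 1; done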

2. Ansible Vault encryption

2.1 Introduction to ansible-vault

Ansible Vault可以将敏感的資料檔案進行加密,而非存放在明文的 playbooks中;比如:部分playbook内容中有明文密碼資訊,可以對其進行加密操作;後期隻有輸入對應的密碼才可以檢視、編輯或執行該檔案,如沒有密碼則無法正常運作;

2.2 ansible-vault usage

1. Use ansible-vault encrypt to encrypt the haproxy.yml file:

[root@manager_62 haproxy]# ansible-vault
ansible-vault      ansible-vault-2    ansible-vault-2.7  
[root@manager_62 haproxy]# ansible-vault-2 encrypt haproxy.yml 
New Vault password: 
Confirm New Vault password: 
Encryption successful

# the playbook can no longer be run without the vault password
[root@manager_62 haproxy]# ansible-playbook haproxy.yml 
ERROR! Attempting to decrypt but no vault secrets found

2. Use ansible-vault view to view an encrypted file:

[root@manager_62 haproxy]# ansible-vault view haproxy.yml 
Vault password: 

3. Use ansible-vault edit to edit an encrypted file:

[root@manager_62 haproxy]# ansible-vault edit haproxy.yml 
Vault password:

4. Use ansible-vault rekey to change the password of an encrypted file:

[root@manager_62 haproxy]# ansible-vault rekey haproxy.yml 
Vault password: 
New Vault password: 
Confirm New Vault password: 
Rekey successful

5. Run an encrypted playbook by prompting for the password:

[root@manager_62 haproxy]# ansible-playbook haproxy.yml --ask-vault-pass

6. You can specify a password file to avoid typing the password repeatedly:

[root@manager_62 haproxy]# echo "123" >> passwd.txt
[root@manager_62 haproxy]# ansible-vault edit haproxy.yml --vault-password-file=passwd.txt

7. Run an encrypted playbook with the password file like this:

[root@manager_62 haproxy]# ansible-playbook haproxy.yml --vault-password-file=passwd.txt

8. Alternatively, add the vault_password_file option to ansible.cfg and point it at the password file path:

[root@manager_62 haproxy]# vim ansible.cfg
[defaults]
vault_password_file = ./passwd.txt

9. Use ansible-vault decrypt to remove the encryption:

[root@manager_62 haproxy]# ansible-vault decrypt haproxy.yml

3. Ansible Jinja2

3.1 What is Jinja2

  • Jinja2 is a template engine for Python.
  • Ansible uses Jinja2 templates to modify configuration files on managed hosts.

For example: you need to install nginx on 10 hosts, but each host must listen on a different port. How do you solve that?

3.2 How Ansible uses Jinja2

Ansible uses Jinja2 templates via the template module. So what does the template module do?

The template module works like the copy module in that both copy a file to the remote host; the difference is that template first renders the variables in the file it copies, while copy transfers the file to the managed host verbatim.

3.3 Basic Jinja template syntax

  1. To use Jinja2 in a configuration file, the playbook task must use the template module.
  2. Use variables inside the configuration file, e.g. {{ port }} or {{ facts variables }}.
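A minimal sketch tying the two points together (the file names and the port variable are illustrative, not from the original):

[root@manager_62 jinjar2]# cat demo.conf.j2
listen {{ port }};

[root@manager_62 jinjar2]# cat demo.yml
- hosts: webservers
  vars:
    - port: 8080
  tasks:
    - name: render demo config
      template:
        src: ./demo.conf.j2
        dest: /tmp/demo.conf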

3.4 Jinja template logic

1. Loop expressions, used to generate nginx and haproxy load-balancing pools, etc.:

{% for i in EXPR %}
    ...
{% endfor %}

2. Conditional expressions, e.g. for keepalived configuration files:

{% if EXPR %}
   ...
{% elif EXPR %}
   ...
{% endif %}

3. Comment expressions:
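Jinja2 comments are written {# ... #}; they are stripped at render time and never appear in the generated file:

{# this comment will not appear in the rendered configuration #}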

3.5 Generating an nginx configuration file with Jinja2

1. The template file is as follows:

[root@manager_62 jinjar2]# cat nginx_lb.conf.j2 
upstream webservers {
{# loop over the hosts in the webservers inventory group #}
{% for host in groups["webservers"] %}
  server {{ host }}:{{ web_port }};
{% endfor %}
}

server {
    listen {{ http_port }};
    server_name {{ server_name }};
    location / {
        proxy_pass http://webservers;
        proxy_set_header Host $http_host;
    }
}

2. The nginx.yml file is as follows:

[root@manager_62 jinjar2]# cat nginx.yml 
- hosts: webservers
  vars:
    - web_port: 80
    - http_port: 80
    - server_name: jinjar2.com

  tasks:
    - name: copy nginx configure
      template:
        src: ./nginx_lb.conf.j2
        dest: /tmp/nginx.conf

3. The configuration rendered on the target is as expected:

upstream webservers {
  server 172.16.1.7:80;
  server 172.16.1.8:80;
  server 172.16.1.9:80;
}

server {
    listen 80;
    server_name jinjar2.com;
    location / {
        proxy_pass http://webservers;
        proxy_set_header Host $http_host;
    }
}

3.6 Generating an haproxy configuration file with Jinja2

1. The template file is as follows:

[root@manager_62 jinjar2]# cat haproxy.cfg.j2 
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats level admin
    nbthread 8

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    #option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

listen haproxy-stats
    mode http
    bind *:7777
    stats enable
    stats refresh 1s 
    stats hide-version
    stats uri /haproxy?stats
    stats realm "HAProxy stats"
    stats auth admin:123456
    stats admin if TRUE

frontend web
    bind *:{{ http_port }}
    mode http

    acl ansible_domain hdr_reg(host) -i {{ server_domain }}
    use_backend web_cluster if ansible_domain

backend web_cluster
    balance roundrobin
{% for host in groups['webservers'] %}
    server {{ host }} {{ host }}:{{ web_cluster_port }} check
{% endfor %}

2. The haproxy.yml file is as follows:

[root@manager_62 jinjar2]# cat haproxy.yml 
- hosts: lbservers
  vars:
    - http_port: 80
    - web_cluster_port: 8787
    - server_domain: ansible.bertwu.net

  tasks:
    - name: copy haproxy configure
      template:
        src: ./haproxy.cfg.j2 
        dest: /tmp

3. The result rendered on the target (the tail of the file shown):

[root@proxy01 tmp]# vim haproxy.cfg.j2 

    option http-server-close
    #option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

listen haproxy-stats
    mode http
    bind *:7777
    stats enable
    stats refresh 1s
    stats hide-version
    stats uri /haproxy?stats
    stats realm "HAProxy stats"
    stats auth admin:123456
    stats admin if TRUE

frontend web
    bind *:80
    mode http

    acl ansible_domain hdr_reg(host) -i ansible.bertwu.net
    use_backend web_cluster if ansible_domain

backend web_cluster
    balance roundrobin
    server 172.16.1.7 172.16.1.7:8787 check
    server 172.16.1.8 172.16.1.8:8787 check
    server 172.16.1.9 172.16.1.9:8787 check

3.7 Generating different keepalived configuration files with Jinja2

Method 1: prepare two configuration files and push a different one to each host.

The idea: one keepalived.conf per role, host A as Master and host B as Slave.

1. Prepare two configuration files, keepalived-master.conf and keepalived-backup.conf, and push each under a condition:

- name: copy master config
  copy:
    src: keepalived-master.conf
    dest: /etc/keepalived/keepalived.conf
  when: ( ansible_hostname is match ("proxy01") )

- name: copy backup config
  copy:
    src: keepalived-backup.conf
    dest: /etc/keepalived/keepalived.conf
  when: ( ansible_hostname is match ("proxy02") )

Method 2: prepare one configuration file and set the same variable with a different value per host:

[lbservers]
172.16.1.5 state=MASTER 
172.16.1.6 state=BACKUP

# keepalived.conf then simply references the variable:
state {{ state }}

Method 3: decide via Jinja conditionals, without setting any variables at all:

[root@manager ~]# cat /etc/keepalived/keepalived.conf 
global_defs {     
    router_id {{ ansible_hostname }}    # identifier of this physical node
}

vrrp_instance VI_1 {
{% if ansible_hostname == "proxy01" %}
    state MASTER                    # role of this node
    priority 200                    # priority of this node in the virtual router
{% elif ansible_hostname == "proxy02" %}
    state BACKUP                    # role of this node
    priority 100                    # priority of this node in the virtual router
{% endif %}

    interface eth0                  # physical interface bound to the virtual router
    virtual_router_id 50            # virtual router ID (VRID)
    advert_int 3                    # VRRP advertisement interval, default 1s
    authentication {
        auth_type PASS              # simple password authentication
        auth_pass 1111              # password of at most 8 characters
    }
    virtual_ipaddress {
        10.0.0.100 dev eth0 label eth0:0    # the VIP
    }
}
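A minimal play to push this template (a sketch; the template file name keepalived.conf.j2 and the restart handler are assumptions, not from the original):

- hosts: lbservers
  tasks:
    - name: configure keepalived
      template:
        src: ./keepalived.conf.j2
        dest: /etc/keepalived/keepalived.conf
      notify: restart keepalived

  handlers:
    - name: restart keepalived
      systemd:
        name: keepalived
        state: restarted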

4. Ansible Roles

4.1 Roles overview

Roles are the best way to organize playbooks: a role relies on a known directory structure to automatically load its vars, tasks, and handlers so that playbooks can call them cleanly. Compared with plain playbooks, roles are clearer and more layered, though slightly more work to set up.

For example, suppose every piece of software requires the time-sync service to be installed first. Writing the time-sync tasks into every playbook makes the whole configuration bloated and hard to maintain.

With roles, we write the time-sync tasks once and simply call the role whenever it is needed, avoiding the bloat of duplicated tasks.

4.2 Roles directory structure

The official roles directory layout must be defined as follows; the directories Ansible reads YAML from (tasks, handlers, vars, meta) each contain a main.yml file:

[root@manager_62 roles]# mkdir web/{vars,tasks,templates,handlers,files,meta} -p
[root@manager_62 roles]# tree
.
└── web             # role name
    ├── files       # static files
    ├── handlers    # handler tasks
    ├── meta        # role dependencies
    ├── tasks       # tasks
    ├── templates   # template files
    └── vars        # variables

4.3 Roles dependencies

A role can automatically pull in other roles when it is used. Role dependencies are stored in the meta/main.yml file.

For example, when installing a wordpress project:

1. First make sure the nginx and php-fpm roles run correctly on their own.

2. Then declare the dependencies in the wordpress role.

3. The dependent roles are nginx and php-fpm.

# wordpress depends on the nginx and php-fpm roles
[root@manager playbook]# cat /root/roles/wordpress/meta/main.yml
---
dependencies:
  - { role: nginx }
  - { role: php-fpm }

The wordpress role will first run the nginx and php-fpm roles, and only then run wordpress itself.

4.4 How to write roles

1. Create the roles directory structure, either by hand or with ansible-galaxy role init test.

2. Implement the role's functionality, i.e. its tasks.

3. Finally, reference the finished role from a playbook. A skeleton generated by ansible-galaxy is shown below.
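For reference, ansible-galaxy generates a skeleton like the following (the exact layout and message vary a little between Ansible versions):

[root@manager_62 roles]# ansible-galaxy role init test
- Role test was created successfully
[root@manager_62 roles]# tree test
test
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── README.md
├── tasks
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── main.yml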

4.5 Deploying NFS with roles

1. Create the nfs-server role:

[root@manager roles]# mkdir nfs-server/{tasks,templates,handlers,files} -p
[root@manager roles]# mkdir group_vars
[root@manager roles]# touch group_vars/all

# the directory layout
[root@manager roles]# tree
.
├── group_vars
│   └── all
└── nfs-server
    ├── files
    ├── handlers
    ├── tasks
    └── templates

2. Write the tasks:

[root@manager roles]# tree
.
├── ansible.cfg
├── group_vars  # variables
│   └── all   
├── hosts
├── nfs-server  # the role
│   ├── files
│   ├── handlers
│   │   └── main.yml
│   ├── tasks
│   │   └── main.yml
│   └── templates
│       └── exports.j2
└── top.yml

The contents of each file are as follows:

# you can write multiple plays here, one per cluster
[root@manager roles]# cat top.yml 
- hosts: test
  roles:
    - role: nfs-server

[root@manager roles]# cat nfs-server/tasks/main.yml 
- name: install nfs server
  yum:
    name: nfs-utils
    state: present

- name: create group
  group:
    name: "{{ nfs_group }}"
    gid: "{{ nfs_gid }}"
    state: present

- name: create user
  user:
    name: "{{ nfs_user }}"
    uid: "{{ nfs_uid }}"
    state: present
    shell: /sbin/nologin
    create_home: no

- name: create nfs share dir
  file:
    path: "{{ nfs_share_dir }}"
    owner: "{{ nfs_user }}"
    group: "{{ nfs_group }}"
    state: directory
    recurse: yes

- name: configure nfs server
  template:
    src: exports.j2
    dest: /etc/exports
  notify: restart nfs server

- name: start nfs server
  systemd:
    name: nfs
    state: started
    enabled: yes

[root@manager roles]# cat nfs-server/handlers/main.yml 
- name: restart nfs server
  systemd:
    name: nfs
    state: restarted

[root@manager roles]# cat nfs-server/templates/exports.j2 
# e.g. /ansible_data 172.16.1.0/24(rw,sync,all_squash,anonuid=6666,anongid=6666)
{{ nfs_share_dir }} {{ nfs_allow_ip_range }}(rw,sync,all_squash,anonuid={{ nfs_uid }},anongid={{ nfs_gid }})

[root@manager roles]# cat group_vars/all 
nfs_share_dir: /nfs_share_data
nfs_allow_ip_range: 172.16.1.0/24
nfs_uid: 6666
nfs_gid: 6666
nfs_user: www
nfs_group: www
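To apply the role and verify the export (a sketch; the showmount check runs on the NFS server itself):

[root@manager roles]# ansible-playbook -i hosts top.yml
[root@manager roles]# ansible test -i hosts -m shell -a 'showmount -e 127.0.0.1'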

4.6 Deploying rsync with roles

1. Create the rsync-server role:

[root@manager roles]# mkdir rsync-server/{tasks,templates,handlers,files} -p

# the directory layout:
[root@manager roles]# tree 
.
├── ansible.cfg
├── group_vars
│   └── all
├── hosts
├── rsync-server
│   ├── files
│   ├── handlers
│   │   └── main.yml
│   ├── tasks
│   │   └── main.yml
│   └── templates
│       └── rsyncd.conf.j2
└── top.yml

The contents of each file are as follows:

[root@manager roles]# cat top.yml 
- hosts: test
  roles:
    #- role: nfs-server
    - role: rsync-server

[root@manager roles]# cat rsync-server/tasks/main.yml 
- name: install rsync server
  yum:
    name: rsync
    state: present

- name: create group
  group: 
    name: "{{ rsync_group }}"
    state: present
    system: yes

- name: create rsync user
  user:
    name: "{{ rsync_user }}"
    group: "{{ rsync_group }}"
    system: yes

- name: copy virtual_user passwd file
  copy:
    content: "{{ rsync_virtual_user }}:123"
    dest: "{{ rsync_virtual_path }}"
    mode: "0600"

- name: create rsync_module_name dir
  file:
    path: "/{{ rsync_module_name }}"
    owner: "{{ rsync_user }}"
    group: "{{ rsync_group }}"
    state: directory
    recurse: yes

- name: configure rsync
  template:
    src: rsyncd.conf.j2
    dest: /etc/rsyncd.conf
  notify: restart rsyncd server

- name: start rsyncd server
  systemd:
    name: rsyncd
    state: started
    enabled: yes

[root@manager roles]# cat rsync-server/templates/rsyncd.conf.j2 
uid = {{ rsync_user }}
gid = {{ rsync_group }}
port = {{ rsync_port }}
fake super = yes
use chroot = no
max connections = {{ rsync_max_conn }}
timeout = 600
ignore errors
read only = false
list = true
log file = /var/log/rsyncd.log
auth users = {{ rsync_virtual_user }}
secrets file = {{ rsync_virtual_path }}
[{{ rsync_module_name }}]
path = /{{ rsync_module_name }}

[root@manager roles]# cat rsync-server/handlers/main.yml 
- name: restart rsyncd server
  systemd:
    name: rsyncd
    state: restarted

[root@manager roles]# cat group_vars/all 
# nfs
nfs_share_dir: /nfs_share_data
nfs_allow_ip_range: 172.16.1.0/24
nfs_uid: 6666
nfs_gid: 6666
nfs_user: www
nfs_group: www

# rsync
rsync_user: rsync
rsync_group: rsync
rsync_port: 873
rsync_max_conn: 200
rsync_virtual_user: rsync_backup
rsync_virtual_path: /etc/rsync.passwd
rsync_module_name: backup
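A quick client-side check of the rsync module (a sketch; replace 172.16.1.31 with the real address of a host in the test group):

[root@manager roles]# ansible-playbook -i hosts top.yml
[root@manager roles]# echo 123 > /tmp/rsync.pass && chmod 600 /tmp/rsync.pass
[root@manager roles]# rsync -avz /etc/hosts rsync_backup@172.16.1.31::backup --password-file=/tmp/rsync.pass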