
Implementation of file locking in Gluster

This note mainly covers the server-side Locks xlator implementation. Since Gluster is a distributed system, the client presumably handles locks to some degree as well; I have not studied that part in depth yet and will expand this after further research.

1.1    Data structures

  1. Lock types: __posix_lock/__pl_inode_lock/__entry_lock. Their members are quite similar; the important data members (the last three identify where the lock came from: the client and its process) are:
  • struct list_head list;   linked into the inode's ext_list
  • short fl_type;
  • off_t fl_start;
  • off_t fl_end;
  • short blocked;     whether the lock is currently blocked
  • fd_num;            the fd object's pointer value cast to ulong
  • fd_t *fd;
  • blkd_time;         time the lock entered the blocked list
  • granted_time;      time the lock entered the active list
  • transport;         identifies the client
  • owner;
  • client_pid;        PID of the client process
  2. The inode-side structure __pl_inode (both structures are sketched in C after this list):
      • mutex;
      • dom_list;             list of locking domains
      • ext_list;             fcntl lock list
      • rw_list;              blocked read/write requests
      • reservelk_list;
      • blocked_reservelks;
      • blocked_calls;
      • mandatory;
      • refkeeper;
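
A condensed C sketch of the two structures described above. The field types here are simplified stand-ins; the real definitions in the locks xlator sources carry more members, and in recent releases the owner field is a gf_lkowner_t (see section 1.6).

#include <pthread.h>
#include <stdint.h>
#include <sys/time.h>
#include <sys/types.h>

struct list_head { struct list_head *next, *prev; };  /* kernel-style list */

typedef struct __posix_lock {
        struct list_head list;          /* linked into pl_inode->ext_list      */
        short            fl_type;       /* F_RDLCK / F_WRLCK / F_UNLCK         */
        off_t            fl_start;      /* first byte of the locked range      */
        off_t            fl_end;        /* last byte (LLONG_MAX if l_len == 0) */
        short            blocked;       /* 1 while waiting to be granted       */
        unsigned long    fd_num;        /* fd pointer value cast to ulong      */
        void            *fd;            /* fd_t * in the real code             */
        struct timeval   blkd_time;     /* when it joined the blocked list     */
        struct timeval   granted_time;  /* when it joined the active list      */
        void            *transport;     /* identifies the client connection    */
        uint64_t         owner;         /* gf_lkowner_t in later releases      */
        pid_t            client_pid;    /* PID of the client process           */
} posix_lock_t;

typedef struct __pl_inode {
        pthread_mutex_t  mutex;               /* guards all the lists below   */
        struct list_head dom_list;            /* locking domains              */
        struct list_head ext_list;            /* fcntl (posix) locks          */
        struct list_head rw_list;             /* blocked read/write requests  */
        struct list_head reservelk_list;
        struct list_head blocked_reservelks;
        struct list_head blocked_calls;
        int              mandatory;           /* mandatory-locking flag       */
        void            *refkeeper;           /* inode ref held while in use  */
} pl_inode_t;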

1.2    Acquiring a lock

pl_lk

  1. pl_inode_get: fetch the pl_inode_t object from the inode's ctx; if it does not exist, create one and store it in the inode with __inode_ctx_put.
  2. new_posix_lock creates a posix_lock_t object and fills in transport/owner etc.; if the flock argument has l_len == 0, fl_end is set to LLONG_MAX.
  3. pl_setlk (the full decision flow is sketched after this list)
  • __is_lock_grantable: walk the pl_inode's ext_list and check whether the lock ranges overlap, then whether the owners match (same_owner compares transport and owner); if the ranges overlap and the owners differ, return false.

pl_send_prelock_unlock: sends an unlock before the actual lock to prevent lock upgrade/downgrade problems, only if it is a blocking call and there are other conflicting locks (i.e. it is called when can_block && !__is_lock_grantable(pl_inode, lock)).

  • If the lock is grantable, __insert_and_merge merges or splits overlapping lock ranges and inserts the lock.
  • If the lock is not grantable and can_block is true, set lock->blocked = 1 and insert it into the inode's ext_list.
  • grant_blocked_locks (this, pl_inode);
  • do_blocked_rw (pl_inode);
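
The decision flow above, condensed into a hypothetical pl_setlk_sketch. The types are trimmed to the fields actually touched here, and the helper prototypes are simplified (the real grant_blocked_locks also takes the xlator object).

#include <pthread.h>
#include <sys/time.h>

/* Types trimmed to the fields this sketch touches (full versions in 1.1). */
typedef struct posix_lock {
        short          blocked;
        struct timeval blkd_time;
} posix_lock_t;

typedef struct pl_inode {
        pthread_mutex_t mutex;
} pl_inode_t;

/* Helpers from the real xlator; prototypes simplified so the sketch compiles. */
int  __is_lock_grantable    (pl_inode_t *pl_inode, posix_lock_t *lock);
void pl_send_prelock_unlock (pl_inode_t *pl_inode, posix_lock_t *lock);
void __insert_and_merge     (pl_inode_t *pl_inode, posix_lock_t *lock);
void __insert_lock          (pl_inode_t *pl_inode, posix_lock_t *lock);
void grant_blocked_locks    (pl_inode_t *pl_inode);
void do_blocked_rw          (pl_inode_t *pl_inode);

/* Returns 0 when granted, -1 when blocked or refused (EAGAIN). */
static int
pl_setlk_sketch (pl_inode_t *pl_inode, posix_lock_t *lock, int can_block)
{
        int ret = 0;

        pthread_mutex_lock (&pl_inode->mutex);
        {
                if (can_block && !__is_lock_grantable (pl_inode, lock))
                        /* avoid upgrade/downgrade deadlocks: drop our own
                         * conflicting locks before we start waiting */
                        pl_send_prelock_unlock (pl_inode, lock);

                if (__is_lock_grantable (pl_inode, lock)) {
                        /* merge or split overlapping ranges, then insert */
                        __insert_and_merge (pl_inode, lock);
                } else if (can_block) {
                        /* park on ext_list in blocked state; granted later
                         * by grant_blocked_locks() when conflicts go away */
                        gettimeofday (&lock->blkd_time, NULL);
                        lock->blocked = 1;
                        __insert_lock (pl_inode, lock);
                        ret = -1;       /* caller must not unwind yet */
                } else {
                        ret = -1;       /* non-blocking caller gets EAGAIN */
                }
        }
        pthread_mutex_unlock (&pl_inode->mutex);

        /* granting/merging may have freed ranges others were waiting on */
        grant_blocked_locks (pl_inode);
        do_blocked_rw (pl_inode);

        return ret;
}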

1.3    Releasing a lock

pl_flush

  1. If the call-stack argument frame->root->lk_owner.len == 0 (the client lost its connection, which covers every fd that client had open), call delete_locks_of_fd (delete the locks, then let do_blocked_rw take the pending read/write calls off pl_inode->rw_list and resume them) and return (the whole pl_flush path is sketched after this list).
  2. __delete_locks_of_owner: delete this owner's locks (matched by trans/lk_owner).
  3. grant_blocked_locks
  4. do_blocked_rw
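
A minimal sketch of the pl_flush path just described, assuming simplified signatures; delete_locks_of_fd and __delete_locks_of_owner are the real helpers, only their prototypes are trimmed here.

#include <pthread.h>

/* Trimmed types; gf_lkowner_t as defined in section 1.6. */
typedef struct gf_lkowner_ { int len; char data[1024]; } gf_lkowner_t;
typedef struct pl_inode    { pthread_mutex_t mutex; }    pl_inode_t;
typedef struct fd          fd_t;   /* opaque in this sketch */

/* Helpers from the real xlator, prototypes only. */
void delete_locks_of_fd      (pl_inode_t *pl_inode, fd_t *fd);
void __delete_locks_of_owner (pl_inode_t *pl_inode, void *transport,
                              gf_lkowner_t *owner);
void grant_blocked_locks     (pl_inode_t *pl_inode);
void do_blocked_rw           (pl_inode_t *pl_inode);

static void
pl_flush_sketch (pl_inode_t *pl_inode, fd_t *fd, void *transport,
                 gf_lkowner_t *lk_owner)
{
        if (lk_owner->len == 0) {
                /* client connection is gone: drop every lock this fd held,
                 * including blocked ones (unwound with EAGAIN) */
                delete_locks_of_fd (pl_inode, fd);
                return;
        }

        pthread_mutex_lock (&pl_inode->mutex);
        __delete_locks_of_owner (pl_inode, transport, lk_owner);
        pthread_mutex_unlock (&pl_inode->mutex);

        grant_blocked_locks (pl_inode);   /* wake now-compatible waiters */
        do_blocked_rw (pl_inode);         /* resume parked reads/writes  */
}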

delete_locks_of_fd

  1. Walk pl_inode->ext_list and delete every lock belonging to this fd, telling blocked locks apart via l->blocked; blocked locks additionally need a STACK_UNWIND with error code EAGAIN.
  2. grant_blocked_locks (sketched after this list):
  • __grant_blocked_locks walks the blocked locks on pl_inode->ext_list; first_overlap (which itself walks ext_list) checks whether the locked region still intersects another one, and locks with no remaining overlap are moved onto tmp_list.
  • Walk tmp_list; __is_lock_grantable re-checks whether each lock can now be granted, and if so __insert_and_merge inserts and merges it.

  3. do_blocked_rw: take the pending read/write calls off pl_inode->rw_list; for each call that __rw_allowable says can proceed, call call_resume to resume it.
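
The same two-pass wake-up, condensed into a hypothetical grant_blocked_locks_sketch. The kernel-style list is replaced by a plain singly linked list to keep the sketch self-contained, so the unlink logic differs from the real code.

#include <pthread.h>
#include <stddef.h>

/* Types trimmed to the fields this sketch touches. */
typedef struct posix_lock {
        struct posix_lock *next;   /* stand-in for the kernel-style list */
        short              blocked;
} posix_lock_t;

typedef struct pl_inode {
        pthread_mutex_t mutex;
        posix_lock_t   *ext_list;
} pl_inode_t;

/* Helpers from the real xlator, prototypes only. */
posix_lock_t *first_overlap       (pl_inode_t *pl_inode, posix_lock_t *l);
int           __is_lock_grantable (pl_inode_t *pl_inode, posix_lock_t *l);
void          __insert_and_merge  (pl_inode_t *pl_inode, posix_lock_t *l);
void          do_blocked_rw       (pl_inode_t *pl_inode);

static void
grant_blocked_locks_sketch (pl_inode_t *pl_inode)
{
        posix_lock_t  *l = NULL;
        posix_lock_t  *next = NULL;
        posix_lock_t  *tmp_list = NULL;   /* candidates found in pass 1 */
        posix_lock_t **pp = NULL;

        pthread_mutex_lock (&pl_inode->mutex);

        /* pass 1: unlink blocked locks whose range no longer overlaps any
         * other lock (first_overlap is assumed to skip the lock itself) */
        pp = &pl_inode->ext_list;
        while (*pp != NULL) {
                l = *pp;
                if (l->blocked && first_overlap (pl_inode, l) == NULL) {
                        *pp = l->next;        /* unlink from ext_list */
                        l->next = tmp_list;   /* push onto tmp_list   */
                        tmp_list = l;
                } else {
                        pp = &l->next;
                }
        }

        /* pass 2: re-check each candidate and grant it if still possible */
        for (l = tmp_list; l != NULL; l = next) {
                next = l->next;
                if (__is_lock_grantable (pl_inode, l)) {
                        l->blocked = 0;
                        __insert_and_merge (pl_inode, l);
                        /* the real code STACK_UNWINDs success to the client */
                }
        }

        pthread_mutex_unlock (&pl_inode->mutex);

        /* finally, retry parked reads/writes (__rw_allowable + call_resume) */
        do_blocked_rw (pl_inode);
}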

1.4    Checking locks

__rw_allowable: called from do_blocked_rw/pl_readv/pl_writev

  1. locks_overlap checks whether the read/write range conflicts with the ranges of the locks held on the inode.
  2. same_owner checks whether the lock's owner matches the owner of the current read/write.
  3. Compare the lock type (read or write) against the type of the operation (e.g. GF_FOP_READ).

If a check fails: when the operation has O_NONBLOCK set, reply EAGAIN directly; otherwise build a pl_rw_req_t object and insert it into the pl_inode's rw_list. A sketch of the check follows.
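
A self-contained sketch of the conflict check, assuming the read/write request is modelled as a lock-shaped region (which matches how pl_readv/pl_writev build their query); the real same_owner also compares the transport, collapsed into a single key here.

#include <fcntl.h>
#include <stddef.h>
#include <sys/types.h>

typedef enum { GF_FOP_READ, GF_FOP_WRITE } fop_t;

/* Types trimmed to the fields the check needs. */
typedef struct posix_lock {
        struct posix_lock *next;     /* stand-in list linkage */
        short              fl_type;  /* F_RDLCK or F_WRLCK    */
        off_t              fl_start;
        off_t              fl_end;
        unsigned long      owner;    /* simplified owner key  */
} posix_lock_t;

typedef struct pl_inode { posix_lock_t *ext_list; } pl_inode_t;

static int
locks_overlap (posix_lock_t *a, posix_lock_t *b)
{
        return a->fl_start <= b->fl_end && b->fl_start <= a->fl_end;
}

static int
same_owner (posix_lock_t *a, posix_lock_t *b)
{
        return a->owner == b->owner;  /* real code also compares transport */
}

/* 1 if the read/write may proceed, 0 if it must block or fail. */
static int
rw_allowable_sketch (pl_inode_t *pl_inode, posix_lock_t *region, fop_t op)
{
        posix_lock_t *l = NULL;

        for (l = pl_inode->ext_list; l != NULL; l = l->next) {
                if (locks_overlap (l, region) && !same_owner (l, region)) {
                        /* an overlapping lock from another owner: a read
                         * is still fine under a read lock, anything else
                         * conflicts */
                        if (l->fl_type == F_WRLCK || op == GF_FOP_WRITE)
                                return 0;
                }
        }
        return 1;
}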

1.5 Client-side lock maintenance

  1. client3_3_lk_cbk (client-rpc-fops.c) records the lock on the client side via client_add_lock_for_recovery, so that locks can be restored when the server restarts (this code is currently commented out).
  2. client_add_lock_for_recovery calls client_setlk (client-lk.c) to build the lock on the client side (again via __insert_and_merge).

1.6   Lock owner

  1. The owner data structure Gluster maintains internally:

    typedef struct gf_lkowner_ {
            int  len;
            char data[GF_MAX_LOCK_OWNER_LEN];
    } gf_lkowner_t;

  2. FUSE's owner definition, converted into Gluster's internal form by set_lk_owner_from_uint64:

struct fuse_lk_in {
        __u64  fh;
        __u64  owner;
        struct fuse_file_lock lk;
        __u32  lk_flags;
        __u32  padding;
};

static inline void
set_lk_owner_from_uint64 (gf_lkowner_t *lkowner, uint64_t data)
{
        int i = 0;
        int j = 0;

        lkowner->len = 8;
        for (i = 0, j = 0; i < lkowner->len; i++, j += 8) {
                lkowner->data[i] = (char)((data >> j) & 0xff);
        }
}

  3. set_lk_owner_from_ptr

static inline void
set_lk_owner_from_ptr (gf_lkowner_t *lkowner, void *data)
{
        int i = 0;
        int j = 0;

        lkowner->len = sizeof (unsigned long);
        for (i = 0, j = 0; i < lkowner->len; i++, j += 8) {
                lkowner->data[i] = (char)((((unsigned long)data) >> j) & 0xff);
        }
}
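
A standalone demonstration of the first setter: packing a FUSE 64-bit owner into gf_lkowner_t, the way fuse-bridge feeds fuse_lk_in.owner through set_lk_owner_from_uint64. GF_MAX_LOCK_OWNER_LEN is defined locally just for the demo.

#include <stdio.h>
#include <stdint.h>

#define GF_MAX_LOCK_OWNER_LEN 1024   /* defined locally for the demo */

typedef struct gf_lkowner_ {
        int  len;
        char data[GF_MAX_LOCK_OWNER_LEN];
} gf_lkowner_t;

static inline void
set_lk_owner_from_uint64 (gf_lkowner_t *lkowner, uint64_t data)
{
        int i = 0;
        int j = 0;

        lkowner->len = 8;
        for (i = 0, j = 0; i < lkowner->len; i++, j += 8) {
                lkowner->data[i] = (char)((data >> j) & 0xff);
        }
}

int
main (void)
{
        gf_lkowner_t owner = { 0, { 0 } };
        int i = 0;

        /* the locks xlator later compares these bytes with same_owner() */
        set_lk_owner_from_uint64 (&owner, 0x1122334455667788ULL);

        for (i = 0; i < owner.len; i++)
                printf ("%02x ", (unsigned char)owner.data[i]);
        printf ("\n");   /* prints: 88 77 66 55 44 33 22 11 */

        return 0;
}

Note that set_lk_owner_from_ptr produces a len of sizeof(unsigned long), so owner keys built by the two setters only compare equal when both length and bytes match.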

 
