CentOS 7 Ceph cluster installation, CephFS client mounting, and calling it from Java
1. Ceph overview (official docs: http://docs.ceph.org.cn/)
A Ceph cluster is made up of Ceph Monitors, Ceph Managers, Ceph OSDs (object storage daemons), and Ceph MDSs (metadata servers, which store the metadata for CephFS).
2. Ceph installation and deployment
2.1 Deployment environment
IP | Services | OS | Hostname |
---|---|---|---|
192.168.198.129 | admin-node | CentOS 7 | admin-node |
192.168.198.130 | osd | CentOS 7 | node2 |
192.168.198.131 | osd | CentOS 7 | node3 |
192.168.198.132 | osd,mds | CentOS 7 | node4 |
2.2 Disable SELinux and the firewall (all nodes)
# stop the firewall
systemctl stop firewalld
systemctl disable firewalld
# disable SELinux
setenforce 0
sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
2.3 Set up time synchronization (all nodes)
# install ntp via yum
yum install -y ntp ntpdate ntp-doc
# sync the system clock
ntpdate 0.cn.pool.ntp.org
2.4 Set the hostnames and the hosts file (all nodes)
# on 192.168.198.129
hostnamectl set-hostname admin-node
# on 192.168.198.130
hostnamectl set-hostname node2
# on 192.168.198.131
hostnamectl set-hostname node3
# on 192.168.198.132
hostnamectl set-hostname node4
# on all nodes
vi /etc/hosts
# append the following to /etc/hosts
192.168.198.129 admin-node
192.168.198.130 node2
192.168.198.131 node3
192.168.198.132 node4
# configure the yum repository (Aliyun mirror)
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
yum makecache && yum update -y
2.5 Create a Ceph deployment user and set its remote login password (all nodes)
# create a dedicated ceph user
useradd -d /home/cephd -m cephd
echo cephd | passwd cephd --stdin
# grant sudo privileges
echo "cephd ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephd
chmod 0440 /etc/sudoers.d/cephd
# on node2, node3, and node4, set a login password for the cephd user; remember it, since it is needed below when configuring passwordless SSH
passwd cephd
2.6 Configure passwordless SSH (admin-node)
# switch to the cephd user
su cephd
# generate an ssh key pair; accept the defaults
ssh-keygen
# copy the public key to node2 (enter cephd's password when prompted)
ssh-copy-id node2
# copy the public key to node3 (enter cephd's password when prompted)
ssh-copy-id node3
# copy the public key to node4 (enter cephd's password when prompted)
ssh-copy-id node4
2.7 Build the Ceph cluster (ceph version 10.2.11) (admin-node)
Configure the Ceph repository:
# install supporting packages and the EPEL repository via yum
sudo yum install -y yum-utils && sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && sudo rm /etc/yum.repos.d/dl.fedoraproject.org*
# add the Ceph repository
sudo vi /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
Install ceph-deploy:
sudo yum update && sudo yum install -y ceph-deploy
2.8 Edit ~/.ssh/config on the ceph-deploy admin node so that ceph-deploy does not need --username cephd on every run (admin-node)
vi ~/.ssh/config
Host admin-node
Hostname admin-node
User cephd
Host node2
Hostname node2
User cephd
Host node3
Hostname node3
User cephd
Host node4
Hostname node4
User cephd
chmod 600 ~/.ssh/config
2.9 Create the cluster (admin-node)
(1) Create the cluster (on the admin node)
# create a working directory
sudo mkdir /home/cephd/ceph-cluster && cd /home/cephd/ceph-cluster
# create the cluster, with admin-node as the mon node
sudo ceph-deploy new admin-node
(2) Change the default replica count in the Ceph configuration from 3 to 2, so the cluster can reach an active + clean state with only two OSDs.
sudo vim /home/cephd/ceph-cluster/ceph.conf and add the following:
osd pool default size = 2
(3) Install ceph (admin-node)
sudo ceph-deploy install admin-node node2 node3
(4) Initialize the monitor node and gather all the keys (admin-node)
sudo ceph-deploy --overwrite-conf mon create-initial
(5) Add the OSD daemons
Add two OSDs. For a quick installation, this quick start uses directories rather than whole disks for the OSD daemons. Log in to each Ceph node and create a directory for its OSD daemon.
Create the working directory for osd0 (on node2):
sudo mkdir /var/local/osd0
Create the working directory for osd1 (on node3):
sudo mkdir /var/local/osd1
Prepare the OSDs (on admin-node):
sudo ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1
Activate the OSDs (on admin-node):
sudo ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
# restart an osd if needed
systemctl restart ceph-osd@0.service
# check the status
ceph osd tree
(6) Use ceph-deploy to copy the configuration file and admin key to the admin node and the Ceph nodes, so you do not have to specify the monitor address and ceph.client.admin.keyring on every Ceph CLI invocation (run on admin-node):
sudo ceph-deploy admin admin-node node2 node3
# make the keyring readable
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
# check cluster health
sudo ceph health
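When scripting against the cluster, the result of `ceph health` can also be checked programmatically. A minimal Python sketch (the helper name and structure are my own, not part of the original setup):

```python
import subprocess

def cluster_healthy(health_output=None):
    """Return True when `ceph health` reports HEALTH_OK.

    Pass the output string directly for testing; when omitted,
    the `ceph health` command is executed on the local node.
    """
    if health_output is None:
        health_output = subprocess.check_output(['ceph', 'health']).decode()
    return health_output.strip().startswith('HEALTH_OK')
```

A cron job, for example, could alert whenever `cluster_healthy()` returns False.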
3. CephFS deployment and mounting
3.1 Deployment
(1) Install the MDS (admin-node)
sudo ceph-deploy mds create node2
(2) Create the pools (node2)
sudo ceph osd pool create cephfs_data 128
#pool 'cephfs_data' created
sudo ceph osd pool create cephfs_metadata 128
#pool 'cephfs_metadata' created
# enable the filesystem with the fs new command
sudo ceph fs new cephfs cephfs_metadata cephfs_data
# show cephfs info
sudo ceph fs ls
# show mds status
sudo ceph mds stat
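The pg_num of 128 used above follows the common rule of thumb: placement groups ≈ (number of OSDs × 100) / replica count, rounded up to the next power of two. A small helper illustrating the calculation (the formula is the standard guideline only, not something the cluster enforces):

```python
def recommended_pg_num(num_osds, pool_size, num_pools=1):
    """Rule-of-thumb placement-group count per pool:
    (OSDs * 100) / (replicas * pools), rounded up to a power of two."""
    target = (num_osds * 100.0) / (pool_size * num_pools)
    pg_num = 1
    while pg_num < target:
        pg_num *= 2
    return pg_num
```

With the 2 OSDs and `osd pool default size = 2` configured earlier, `recommended_pg_num(2, 2)` returns 128, matching the value used above.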
3.2 Mount CephFS
CephFS can be mounted in two ways: with the kernel client or with FUSE.
(1) Kernel client mount
# create the mount point
sudo mkdir /mnt/mycephfs
# create a file holding the admin secret (the key= value from ceph.client.admin.keyring)
sudo vi admin.secret
# mount
sudo mount -t ceph 192.168.198.129:6789:/ /mnt/mycephfs -o name=admin,secretfile=/root/admin.secret
# verify the mount
df -Th | grep cephfs
(2) FUSE mount
# unmount the previous kernel mount first
umount /mnt/mycephfs/
# install the client package
sudo yum -y install ceph-fuse
mkdir ~/mycephfs
# create a user (client.test in this example)
ceph auth get-or-create client.test mon 'allow r' osd 'allow rw pool=cephfs_data' -o test.keyring
# adjust the user's capabilities, restricting it to the /test path
ceph auth caps client.test mon 'allow rw' osd 'allow rwx' mds 'allow rw path=/test'
ceph-fuse -n client.test -m 192.168.198.129:6789 /mnt/mycephfs1/ -r /test
(3) Automatic mount at boot via /etc/fstab
id=admin,conf=/etc/ceph/ceph.conf /mnt fuse.ceph defaults 0 0
4. Object storage installation and deployment
4.1 Create the gateway (admin-node)
ceph-deploy rgw create node3
# the gateway then listens at http://<gateway-node>:7480
ss -ntl | grep 7480
Civetweb runs on port 7480 by default. To change the port (for example to port 80), add a section after [global] in ceph.conf; if your gateway host is named gateway-node1, the section looks like this:
[client.rgw.gateway-node1]
rgw_frontends = "civetweb port=80"
Push the configuration file to your Ceph Object Gateway node (and any other Ceph nodes):
ceph-deploy --overwrite-conf config push <gateway-node> [<other-nodes>]
Restart the Ceph Object Gateway so the new port takes effect:
sudo systemctl restart ceph-radosgw.service
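Stanzas like the one above follow the naming pattern client.rgw.<hostname>. A small sketch that renders such a stanza for any gateway host (the helper function is hypothetical, shown only to make the naming convention concrete):

```python
def rgw_port_stanza(hostname, port=80):
    """Render the ceph.conf section that moves Civetweb
    off its default port 7480 for the given gateway host."""
    return ('[client.rgw.%s]\n'
            'rgw_frontends = "civetweb port=%d"\n') % (hostname, port)
```

print(rgw_port_stanza('gateway-node1')) reproduces the section shown above.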
4.2 Use the gateway
(1) To use the REST interfaces, first create an initial Ceph Object Gateway user for the S3 interface, then create a subuser for the Swift interface, and finally verify that the created users can access the gateway.
Create a RADOSGW user for S3 access:
A radosgw user must be created and granted access. The command man radosgw-admin provides more information. To create the user, run the following on the gateway host:
sudo radosgw-admin user create --uid="testuser" --display-name="First User"
(2) Create a Swift user
To access the cluster this way you need a Swift subuser. Creating one is a two-step process: first create the user, then create its secret key. Run the following on the gateway host:
# create the Swift subuser
sudo radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full
# create the key
sudo radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret
# show the user
sudo radosgw-admin user info --uid testuser
4.3 Access verification
(1) S3 access
To verify S3 access, write and run a short Python test script. The script connects to radosgw, creates a new bucket, and lists all buckets. The values of aws_access_key_id and aws_secret_access_key come from the access_key and secret_key fields returned by radosgw-admin.
Perform the following steps:
1. Install the python-boto package:
sudo yum install python-boto
2. Create the Python script file:
vi s3test.py
3. Add the following content to the file:
import boto
import boto.s3.connection

access_key = 'I0PJDPCIYZ665MW88W9R'
secret_key = 'dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA'

conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host='{hostname}', port={port},
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('my-new-bucket')
for bucket in conn.get_all_buckets():
    print "{name} {created}".format(
        name=bucket.name,
        created=bucket.creation_date,
    )
Replace {hostname} with the hostname of the node where you configured the gateway service (the gateway host), and {port} with the port Civetweb is listening on.
4. Run the script:
python s3test.py
The output looks like this:
my-new-bucket 2015-02-16T17:09:10.000Z
(2) Test Swift access
Swift access can be verified with the swift command-line client. The command man swift gives more information on its options.
Install the swift client; on Red Hat Enterprise Linux / CentOS run:
sudo yum install python-setuptools
sudo easy_install pip
## if easy_install cannot find pip, install it with get-pip.py instead:
curl https://bootstrap.pypa.io/pip/2.7/get-pip.py -o get-pip.py
sudo python get-pip.py
# then upgrade the tooling and install the swift client
sudo pip install --upgrade setuptools
sudo pip install --upgrade python-swiftclient
# invoke swift
swift -A http://{IP ADDRESS}:{port}/auth/1.0 -U testuser:swift -K '{swift_secret_key}' list
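The same v1 auth endpoint the CLI uses can be reached from Python via the python-swiftclient library. A sketch that only assembles the connection parameters (no network access here; the helper is illustrative and not part of the original steps):

```python
def swift_connection_kwargs(gateway_host, swift_key,
                            port=7480, user='testuser:swift'):
    """Build keyword arguments for swiftclient.Connection against
    radosgw's swift-compatible /auth/1.0 endpoint."""
    return {
        'authurl': 'http://%s:%d/auth/1.0' % (gateway_host, port),
        'user': user,
        'key': swift_key,
    }
```

The resulting dict can be passed as swiftclient.Connection(**kwargs); conn.get_account()[1] then lists the containers, equivalent to the `swift ... list` call above.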
5. Calling from Java
5.1 Calling CephFS from Java
(1) Add the Maven dependencies:
<dependency>
<groupId>com.ceph</groupId>
<artifactId>libcephfs</artifactId>
<version>0.80.5</version>
</dependency>
<!-- https://mvnrepository.com/artifact/com.ceph/rados -->
<dependency>
<groupId>com.ceph</groupId>
<artifactId>rados</artifactId>
<version>0.3.0</version>
</dependency>
(2) Java code:
package com.ctsi.config;
import com.ceph.fs.CephFileExtent;
import com.ceph.fs.CephMount;
import com.ceph.fs.CephStat;
import lombok.SneakyThrows;
import java.io.*;
import java.util.Arrays;
public class CephOperateConfig {
private CephMount mount;
private String username;
private String monIp;
private String userKey;
public CephOperateConfig(String username, String monIp, String userKey, String mountPath) {
this.username = username;
this.monIp = monIp;
this.userKey = userKey;
this.mount = new CephMount(username);
this.mount.conf_set("mon_host", monIp);
mount.conf_set("key",userKey);
mount.mount(mountPath);
}
@SneakyThrows
public static void main(String[] args) {
String name="client.devops-test";
String key="AQD6a8deA0aFKxAAGJRBBnHFCikG7hIxHUbYVA==";
String ip="10.2.2.6,10.2.2.7,10.2.2.8,10.2.2.9,10.2.2.10";
String path="/data/cephfs/blockchain/";
CephOperateConfig cephOperateConfig=new CephOperateConfig(name,ip,key,path);
cephOperateConfig.listDir(path);
}
//list the contents of a directory
public void listDir(String path) throws IOException {
String[] dirs = mount.listdir(path);
System.out.println("contents of the dir: " + Arrays.asList(dirs));
}
//create a directory
public void mkDir(String path) throws IOException {
mount.mkdirs(path,0755);//0755 is an octal permission literal
}
//delete a directory
public void delDir(String path) throws IOException {
mount.rmdir(path);
}
//rename a directory or file
public void renameDir(String oldName, String newName) throws IOException {
mount.rename(oldName, newName);
}
//delete a file
public void delFile(String path) throws IOException {
mount.unlink(path);
}
//read a file
public void readFile(String path) {
System.out.println("start read file...");
int fd = -1;
try{
fd = mount.open(path, CephMount.O_RDWR, 0755);
System.out.println("file fd is : " + fd);
byte[] buf = new byte[1024];
long size = 10;
long offset = 0;
long count = 0;
while((count = mount.read(fd, buf, size, offset)) > 0){
for(int i = 0; i < count; i++){
System.out.print((char)buf[i]);
}
offset += count;
}
} catch (IOException e){
e.printStackTrace();
} finally {
if(fd > 0){
mount.close(fd);
}
}
}
//copy a file
public void copyFile(String sourceFile, String targetFile){
System.out.println("start write file...");
int readFD = -1, createAA = -1, writeFD = -1;
try{
readFD = mount.open(sourceFile, CephMount.O_RDWR, 0755);
writeFD = mount.open(targetFile, CephMount.O_RDWR | CephMount.O_CREAT, 0644);
// createAA = mount.open("aa.txt", CephMount.O_RDWR | CephMount.O_CREAT | CephMount.O_EXCL, 0644);//with O_EXCL this throws if the file already exists
System.out.println("file read fd is : " + readFD);
byte[] buf = new byte[1024];
long size = 10;
long offset = 0;
long count = 0;
while((count = mount.read(readFD, buf, size, -1)) > 0){
mount.write(writeFD, buf, count, -1);//offset -1 advances the file position automatically; a fixed offset such as count leaves it in place
System.out.println("offset: " + offset);
offset += count;
System.out.println("writeFD position : " + mount.lseek(writeFD, 0, CephMount.SEEK_CUR));
}
} catch (IOException e){
e.printStackTrace();
} finally {
if(readFD > 0){
mount.close(readFD);
}
if(writeFD > 0){
mount.close(writeFD);
}
}
}
//write to a file
public void writeFileWithLseek(String path, long offset, int type){
if(type <= 0){
type =CephMount.SEEK_CUR;
}
System.out.println("start write file...");
int writeFD = -1;
try{
writeFD = mount.open(path, CephMount.O_RDWR | CephMount.O_APPEND, 0644);
long pos = mount.lseek(writeFD, offset, type);
System.out.println("pos : " + pos);
String msg = " asdfasdfasdf123123123 \n";
byte[] buf = msg.getBytes();
mount.write(writeFD, buf, buf.length, pos);
} catch (IOException e){
e.printStackTrace();
} finally {
if(writeFD > 0){
mount.close(writeFD);
}
}
}
// distinguish directories from files
public void listFileOrDir(){
int writeFD = -1;
try{
String[] lucyDir = mount.listdir("/");
for(int i = 0; i < lucyDir.length; i++){
CephStat cephStat = new CephStat();
mount.lstat(lucyDir[i], cephStat);
System.out.println(lucyDir[i] + " is dir : " + cephStat.isDir()
+ " is file: " + cephStat.isFile()
+ " size: " + cephStat.size
+ " blksize: " + cephStat.blksize);//cephStat.size is the file size
}
writeFD = mount.open("lucy1.txt", CephMount.O_RDWR | CephMount.O_APPEND, 0644);
CephFileExtent cephFileExtent = mount.get_file_extent(writeFD, 0);
System.out.println("lucy1.txt size: " + cephFileExtent.getLength());//4M
System.out.println("lucy1.txt stripe unit: " + mount.get_file_stripe_unit(writeFD));//4M
long pos = mount.lseek(writeFD, 0, CephMount.SEEK_END);
System.out.println("lucy1.txt true size: " + pos);//30Byte
} catch (IOException e){
e.printStackTrace();
} finally {
if(writeFD > 0){
mount.close(writeFD);
}
}
}
//set current dir (work dir)
public void setWorkDir(String path) throws IOException {
mount.chdir(path);
}
//expose the mount object
public CephMount getMount(){
return this.mount;
}
//umount
public void umount(){
mount.unmount();
}
public Boolean uploadFileByPath(String filePath, String fileName){
// exit with null if not mount
if (this.mount == null){
return null;
}
// file definition
char pathChar = File.separatorChar;
String fileFullName = "";
Long fileLength = 0l;
Long uploadedLength = 0l;
File file = null;
// Io
FileInputStream fis = null;
// get local file info
fileFullName = filePath + pathChar + fileName;
file = new File(fileFullName);
if (!file.exists()){
return false;
}
fileLength = file.length();
// get io from local file
try {
fis = new FileInputStream(file);
}catch (FileNotFoundException e){
e.printStackTrace();
}
// if file exists or not
String[] dirList = null;
Boolean fileExist = false;
try {
dirList = this.mount.listdir("/");
for (String fileInfo : dirList){
if (fileInfo.equals(fileName)){
fileExist = true;
}
}
}catch (FileNotFoundException e){
e.printStackTrace();
}
// transfer file by diff pattern
if (!fileExist){
try {
// create file and set mode WRITE
this.mount.open(fileName, CephMount.O_CREAT, 0);
int fd = this.mount.open(fileName, CephMount.O_RDWR, 0);
// start transfer
int length = 0;
byte[] bytes = new byte[1024];
while ((length = fis.read(bytes, 0, bytes.length)) != -1){
// write
this.mount.write(fd, bytes, length, uploadedLength);
// update length
uploadedLength += length;
// output transfer rate
float rate = (float)uploadedLength * 100 / (float)fileLength;
String rateValue = (int)rate + "%";
System.out.println(rateValue);
// complete flag
if (uploadedLength == fileLength){
break;
}
}
System.out.println("File upload complete!");
// chmod
this.mount.fchmod(fd, 0666);
// close
this.mount.close(fd);
if (fis != null){
fis.close();
}
return true;
}catch (Exception e){
e.printStackTrace();
}
}else if (fileExist){
try {
// get file length
CephStat stat = new CephStat();
this.mount.stat(fileName, stat);
uploadedLength = stat.size;
int fd = this.mount.open(fileName, CephMount.O_RDWR, 0);
// start transfer
int length = 0;
byte[] bytes = new byte[1024];
fis.skip(uploadedLength);
while ((length = fis.read(bytes, 0, bytes.length)) != -1){
// write
this.mount.write(fd, bytes, length, uploadedLength);
// update length
uploadedLength += length;
// output transfer rate
float rate = (float)uploadedLength * 100 / (float)fileLength;
String rateValue = (int)rate + "%";
System.out.println(rateValue);
// complete flag
if (uploadedLength == fileLength){
break;
}
}
System.out.println("Resumed upload complete!");
// chmod
this.mount.fchmod(fd, 0666);
// close
this.mount.close(fd);
if (fis != null){
fis.close();
}
return true;
}catch (Exception e){
e.printStackTrace();
}
}else {
try {
if (fis != null){
fis.close();
}
}catch (Exception e){
e.printStackTrace();
}
return false;
}
return false;
}
//download a file
public Boolean downLoadFileByPath(String filePath,String fileName){
char pathChar= File.separatorChar;
String fileFullName="";
Long fileLength=0L;
Long downloadedLength=0L;
File file=null;
//IO
FileOutputStream fos=null;
RandomAccessFile raf=null;
//new file object
fileFullName=filePath+pathChar+fileName;
file=new File(fileFullName);
//get cephfs file size
try{
CephStat stat=new CephStat();
mount.stat(fileName,stat);
fileLength=stat.size;
}catch (Exception e){
e.printStackTrace();
}
if(fileLength!=0){
if(!file.exists()){
int length=10240;
byte[] bytes=new byte[length];
try {
int fd=mount.open(fileName,CephMount.O_RDONLY,0);
fos=new FileOutputStream(file);
float rate=0;
String rateValue="";
while ((fileLength-downloadedLength)>=length&&(mount.read(fd,bytes,(long)length,downloadedLength))!=-1){
fos.write(bytes,0,length);
fos.flush();
downloadedLength+=(long)length;
rate=(float)downloadedLength*100/(float)fileLength;
rateValue=(int)rate+"%";
System.out.println(rateValue);
if(downloadedLength.equals(fileLength)){
break;
}
}
if(!downloadedLength.equals(fileLength)){
mount.read(fd,bytes,fileLength-downloadedLength,downloadedLength);
fos.write(bytes,0,(int)(fileLength-downloadedLength));
fos.flush();
downloadedLength=fileLength;
rate=(float)downloadedLength*100/(float)fileLength;
rateValue=(int)rate+"%";
System.out.println(rateValue);
}
System.out.println("Download success");
fos.close();
mount.close(fd);
return true;
} catch (Exception e) {
System.out.println("download fail");
e.printStackTrace();
}
}else if(file.exists()){
int length=10240;
byte[] bytes=new byte[length];
Long filePoint=file.length();
try {
int fd=mount.open(fileName,CephMount.O_RDONLY,0);
raf=new RandomAccessFile(file,"rw");
raf.seek(filePoint);
float rate=0;
String rateValue="";
while ((fileLength-downloadedLength)>=length&&(mount.read(fd,bytes,(long)length,downloadedLength))!=-1){
raf.write(bytes,0,length);
downloadedLength+=(long)length;
rate=(float)downloadedLength*100/(float)fileLength;
rateValue=(int)rate+"%";
System.out.println(rateValue);
if(downloadedLength.equals(fileLength)){
break;
}
}
if(!downloadedLength.equals(fileLength)){
mount.read(fd,bytes,fileLength-downloadedLength,downloadedLength);
raf.write(bytes,0,(int)(fileLength-downloadedLength));
downloadedLength=fileLength;
rate=(float)downloadedLength*100/(float)fileLength;
rateValue=(int)rate+"%";
System.out.println(rateValue);
}
System.out.println("Download success");
raf.close();
mount.close(fd);
return true;
} catch (Exception e) {
System.out.println("download fail");
e.printStackTrace();
}
}
}else{
System.out.println("err");
return false;
}
return null;
}
}
5.2 Calling RGW object-storage buckets from Java via S3
(1) Add the Maven dependencies:
<dependencyManagement>
<dependencies>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-bom</artifactId>
<version>1.11.327</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-ec2</artifactId>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-s3</artifactId>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-dynamodb</artifactId>
</dependency>
(2) Java code:
String accessKey = "S88OUVF7WC551YUJN1I4";
String secretKey = "BUaVeXZ2HgDYNjIOIVhMJROdG1Z3vFkaYnQjlntw";
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
AmazonS3 conn = new AmazonS3Client(credentials);
conn.setEndpoint("objects.dreamhost.com");//replace with your own gateway address, e.g. node3:7480
List<Bucket> buckets = conn.listBuckets();
for (Bucket bucket : buckets) {
ByteArrayInputStream input = new ByteArrayInputStream("Hello World!".getBytes());
conn.putObject(bucket.getName(), "hello.txt", input, new ObjectMetadata());
System.out.println(bucket.getName() + "\t" +
StringUtils.fromDate(bucket.getCreationDate()));
ObjectListing objects = conn.listObjects(bucket.getName());
do {
for (S3ObjectSummary objectSummary : objects.getObjectSummaries()) {
System.out.println(objectSummary.getKey() + "\t" +
objectSummary.getSize() + "\t" +
StringUtils.fromDate(objectSummary.getLastModified()));
}
objects = conn.listNextBatchOfObjects(objects);
} while (objects.isTruncated());
}
References
Reference blog posts
Ceph official site