40 changes: 25 additions & 15 deletions README.md
@@ -49,28 +49,30 @@ DisTRaC creates a ceph instance that runs using RAM. As the system is volatile,
DisTRaC takes several parameters to run:
```
./distrac.sh
-i=  | --interface=   Network interface to use, i.e. -i=ib0 or --interface=ib0 (Required)
-s=  | --size=        Size of RAM to use, i.e. -s=50GB or --size=100GB (Required)
-n=  | --number=      Number of RAM OSDs on each host; if -s=50GB and -n=5, that will create 5 OSDs using 250GB of RAM (Required)
-t=  | --type=        The type of RAM block: gram, ram (brd) or zram, i.e. -t=gram, -t=ram or --type=zram (Required)
-f=  | --folder=      Folder in which to locate the Ceph keys; this allows for multiple deployments when different folders are specified.
-hf= | --hostfile=    When not using UGE with a parallel environment, provide a file with a list of comma-separated hosts
-pn= | --poolname=    Define the name of a pool if using RADOS, i.e. -pn=example or --poolname=example
-fs= | --filesystem=  To use CephFS, specify the location where you want the filesystem mounted, i.e. -fs=/mnt/mycephfs
-rgw | --rgw          To use a RADOS gateway, set this flag, i.e. -rgw or --rgw
-uid= | --uid=        To create an S3 user for the RADOS gateway, i.e. -uid=test or --uid=test
-sk= | --secretkey=   This will create an access key and secret key for the user defined in -uid, i.e. -sk=test or --secretkey=test
-h   | --help         Display this help message
```

Removal only requires a folder and hostfile if they were specified when running DisTRaC; otherwise `./remove-distrac.sh` will suffice.

```
./remove-distrac.sh -h

-f=  | --folder=      Folder with the Ceph keys to remove
-t=  | --type=        The type of RAM block: gram, ram (brd) or zram, i.e. -t=gram, -t=ram or --type=zram (Required)
-hf= | --hostfile=    When not using UGE with a parallel environment, provide a file with a list of comma-separated hosts for Ceph removal.
-fs= | --filesystem=  To remove the filesystem, specify the mount point used, i.e. -fs=/mnt/mycephfs
-h   | --help         Display this help message

```

@@ -121,6 +123,14 @@ To create a ceph instance with a rados gateway, three additional parameters need

This creates a Ceph instance that has a RADOS gateway and an S3 user with the ID `example` and a secret and access key of `example`.

### Ceph instance with a mounted CephFS filesystem

To create a Ceph instance with a mounted filesystem, one additional parameter needs to be passed. `-fs=` creates a filesystem, with the MDS residing on the primary node, and mounts it at the location specified by the parameter on all nodes.

```
./distrac.sh -i=ib0 -n=1 -s=80G -t=gram -fs=/mnt/mycephfs
```
This creates a CephFS filesystem mounted at `/mnt/mycephfs` on all compute nodes.

### Multiple instances of ceph at once

5 changes: 2 additions & 3 deletions distrac/create-brd.sh
@@ -15,7 +15,7 @@ case $i in
;;
-f=*|--folder=*)
folder="${i#*=}"
mkdir $folder 2> /dev/null
shift # past argument=value
;;
*)
@@ -25,5 +25,4 @@ esac
done
# Load brd ram block module
sudo modprobe brd rd_size=`echo $(( $(numfmt $size --from iec) / 1024 )) ` max_part=1 rd_nr=$amount

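The `rd_size` argument in the `modprobe brd` line above is given in KiB, so the script converts the IEC size string to bytes with `numfmt` and divides by 1024. A standalone sketch of that conversion (the `50G` value is illustrative):

```shell
# numfmt --from=iec turns a K/M/G-suffixed size into bytes;
# dividing by 1024 yields the KiB value brd's rd_size expects.
size="50G"
bytes=$(numfmt --from=iec "$size")
rd_size=$(( bytes / 1024 ))
echo "$rd_size"   # prints 52428800
```

Note that GNU `numfmt`'s `iec` format uses single-letter suffixes such as `50G`.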
20 changes: 20 additions & 0 deletions distrac/create-fs.sh
@@ -0,0 +1,20 @@
#!/usr/bin/env bash

filesystem=""
for i in "$@"
do
case $i in
-fs=*|--filesystem=*)
filesystem="${i#*=}"
mkdir $filesystem 2> /dev/null
shift # past argument=value
;;
*)
# unknown option
;;
esac
done

sudo mount -t ceph :/ "$filesystem" -o name=admin,secret="$(ceph auth print-key client.admin)"
sudo chown "$USER:$USER" "$filesystem"
sudo chmod 777 "$filesystem"
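The option parser above relies on the `${i#*=}` parameter expansion, which strips the shortest prefix matching `*=` and leaves the option's value; a minimal illustration:

```shell
# "${i#*=}" removes everything up to and including the first '='.
i="-fs=/mnt/mycephfs"
value="${i#*=}"
echo "$value"   # prints /mnt/mycephfs
```

Because the shortest match is removed, a value that itself contains `=` survives intact.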
7 changes: 3 additions & 4 deletions distrac/create-gram.sh
@@ -15,7 +15,7 @@ case $i in
;;
-f=*|--folder=*)
folder="${i#*=}"
mkdir $folder 2> /dev/null
shift # past argument=value
;;
*)
@@ -26,13 +26,12 @@ done
# This ignores root squash
cp gram.ko /tmp/
# Changes LVM so pvcreate can be used
create-gram-lvm.sh
sudo insmod /tmp/gram.ko num_devices=$amount &
wait
for num in $(seq 0 $[amount-1])
do
echo $size | sudo tee /sys/block/gram$num/disksize &
wait
done
sudo pvcreate /dev/gram[0-$((amount-1))] &
wait

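The `pvcreate /dev/gram[0-$((amount-1))]` pattern above is a shell character-range glob, not a numeric range: the bracket matches exactly one character between `0` and `amount-1`, so it only addresses up to ten devices. A self-contained illustration using plain files:

```shell
# gram[0-2] matches gram0, gram1 and gram2, but not gram10
# (the bracket expression matches a single character).
amount=3
tmp=$(mktemp -d)
touch "$tmp"/gram0 "$tmp"/gram1 "$tmp"/gram2 "$tmp"/gram10
ls "$tmp"/gram[0-$((amount-1))] | wc -l   # prints 3
```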
32 changes: 32 additions & 0 deletions distrac/create-mds.sh
@@ -0,0 +1,32 @@
#!/usr/bin/env bash
folder="."
for i in "$@"
do
case $i in
-f=*|--folder=*)
folder="${i#*=}"
mkdir $folder 2> /dev/null
shift # past argument=value
;;
*)
# unknown option
;;
esac
done
sudo mkdir /var/lib/ceph/mds/ceph-$HOSTNAME
ceph auth get-or-create mds.$HOSTNAME osd "allow rwx" mds "allow" mon "allow profile mds" > $folder/ceph.mds.keyring
chmod 644 $folder/ceph.mds.keyring
sudo cp $folder/ceph.mds.keyring /var/lib/ceph/mds/ceph-$HOSTNAME/keyring
sudo chown -R ceph:ceph /var/lib/ceph/mds/ceph-$HOSTNAME/
sudo systemctl start ceph-mds@$HOSTNAME
create-pool.sh -pn=cephfs_data -per=0.90 -f=$folder &
wait
create-pool.sh -pn=cephfs_metadata -per=0.10 -f=$folder &
wait
ceph fs new cephfs cephfs_metadata cephfs_data &
wait
state=0
while [ $state -le 0 ]
do
state=$(ceph fs status | grep -c "active")
sleep 1
done
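The readiness loop above polls `ceph fs status` until an active MDS appears. The same pattern can be written as a generic helper with a retry limit so a broken cluster cannot hang the deployment forever; the `wait_for` helper below is an illustrative sketch, not part of DisTRaC:

```shell
# Poll a command until it succeeds, sleeping between attempts;
# give up and return failure after max tries.
wait_for() {
  local max=$1; shift
  local tries=0
  until "$@"; do
    tries=$((tries + 1))
    if [ "$tries" -ge "$max" ]; then
      return 1
    fi
    sleep 1
  done
}
wait_for 5 true && echo "ready"   # prints ready
```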
4 changes: 2 additions & 2 deletions distrac/create-mgr.sh
@@ -5,7 +5,7 @@ do
case $i in
-f=*|--folder=*)
folder="${i#*=}"
mkdir $folder 2> /dev/null
shift # past argument=value
;;
*)
@@ -27,4 +27,4 @@ sudo chown -R ceph:ceph /var/lib/ceph/mgr/ceph-$HOSTNAME/
# Stating MGR daemon
sudo systemctl start ceph-mgr@$HOSTNAME
# Starting Dashboard
#ceph mgr module enable dashboard
3 changes: 2 additions & 1 deletion distrac/create-mon.sh
@@ -40,8 +40,9 @@ osd pool default size = 1
mon pg warn min per osd = 30
mon pg warn max per osd = 166496
mon max pg per osd = 166496
osd pool default pg autoscale mode = off
" > $folder/ceph.conf
distrac-config-log.sh
# Copy ceph.conf to system
sudo cp $folder/ceph.conf /etc/ceph/
# Create Keyrings
14 changes: 6 additions & 8 deletions distrac/create-osd.sh
@@ -31,22 +31,20 @@ done
sudo cp $folder/ceph.client.admin.keyring /etc/ceph/
sudo cp $folder/ceph.conf /etc/ceph/
sudo cp $folder/ceph.bootstrap-osd.keyring /var/lib/ceph/bootstrap-osd/ceph.keyring

sudo cp $folder/ceph.bootstrap-osd.keyring /etc/ceph/ceph.keyring
if [ $type == gram ]
then
create-gram.sh -s=$size -n=$amount -f=$folder
elif [ $type == ram ]
then
create-brd.sh -s=$size -n=$amount -f=$folder
elif [ $type == zram ]
then
create-zram.sh -s=$size -n=$amount -f=$folder
fi


# Creating OSDs using ceph-volume
for num in $(seq 0 $[amount-1])
do
sudo ceph-volume --log-path /dev/null lvm prepare --data /dev/$type$num
done
sudo ceph-volume --log-path /dev/null lvm activate --all
19 changes: 12 additions & 7 deletions distrac/create-pool.sh
@@ -13,7 +13,7 @@ case $i in
;;
-f=*|--folder=*)
folder="${i#*=}"
mkdir $folder 2> /dev/null
shift # past argument=value
;;
*)
@@ -28,18 +28,23 @@ amountOfOSDs=`cat $folder/amountOfOSDs.num`


# Gets current PGS in ceph
currentPGs=$(ceph pg stat 2> /dev/null | awk '{print $1}')
if [ -z "$currentPGs" ] || [ "$currentPGs" -eq 0 ];
then
currentPGs=1
fi

source calculate-pool-pg.sh
# Works out the PGs need for pool
CalculatePoolPG $percentage $amountOfHosts $amountOfOSDs
# Creates a pool with the name passed and amout of PGs
ceph osd pool create $poolname $result
echo "Creating PG's"
# Update the expected PGs active and clean to current plus new
result=$(expr $result + $currentPGs)
# Check if all pgs are active and clean
pgstat=$(ceph -s | grep -c "$result active+clean")
while [ $pgstat -le 0 ]
do
pgstat=$(ceph -s | grep -c "$result active+clean")
sleep 1
done
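`CalculatePoolPG` itself lives in `calculate-pool-pg.sh` and is not shown in this diff. The sketch below is a hypothetical reconstruction of a percentage-based placement-group calculation; the 100-PGs-per-OSD target and the power-of-two rounding are assumptions for illustration, not DisTRaC's actual formula:

```shell
# Hypothetical PG sizing: share of capacity * total OSDs * target
# PGs per OSD, rounded up to the next power of two.
CalculatePoolPGSketch() {
  local percentage=$1 hosts=$2 osds_per_host=$3
  local total_osds=$((hosts * osds_per_host))
  local raw
  raw=$(awk -v p="$percentage" -v o="$total_osds" 'BEGIN { print int(p * o * 100) }')
  local pg=1
  while [ "$pg" -lt "$raw" ]; do pg=$((pg * 2)); done
  echo "$pg"
}
CalculatePoolPGSketch 0.90 4 2   # prints 1024
```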
14 changes: 7 additions & 7 deletions distrac/create-rgw.sh
@@ -5,7 +5,7 @@ do
case $i in
-f=*|--folder=*)
folder="${i#*=}"
mkdir $folder 2> /dev/null
shift # past argument=value
;;
*)
@@ -24,17 +24,17 @@ sudo cp $folder/ceph.client.radosgw.keyring /var/lib/ceph/radosgw/ceph-radosgw.$
# Setting folder permission to ceph user
sudo chown -R ceph:ceph /var/lib/ceph/radosgw/ceph-radosgw.$HOSTNAME/keyring
# Creating RGW Pools
create-pool.sh -pn=.rgw.root -per=0.05 -f=$folder &
wait
create-pool.sh -pn=default.rgw.control -per=0.02 -f=$folder &
wait
create-pool.sh -pn=default.rgw.meta -per=0.02 -f=$folder &
wait
create-pool.sh -pn=default.rgw.log -per=0.02 -f=$folder &
wait
create-pool.sh -pn=default.rgw.buckets.index -per=0.05 -f=$folder &
wait
create-pool.sh -pn=default.rgw.buckets.data -per=0.84 -f=$folder &
wait
# Start rados gateway
sudo systemctl start ceph-radosgw@radosgw.$HOSTNAME
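Assuming the `-per=` values passed to `create-pool.sh` are fractions of total cluster capacity, the six RGW pool shares above should account for the whole cluster; a quick arithmetic check:

```shell
# 0.05 + 0.02 + 0.02 + 0.02 + 0.05 + 0.84 for the six RGW pools.
echo "0.05 0.02 0.02 0.02 0.05 0.84" |
  awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; printf "%.2f\n", s }'
# prints 1.00
```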
7 changes: 3 additions & 4 deletions distrac/create-zram.sh
@@ -15,7 +15,7 @@ case $i in
;;
-f=*|--folder=*)
folder="${i#*=}"
mkdir $folder 2> /dev/null
shift # past argument=value
;;
*)
@@ -32,7 +32,6 @@ do
wait
done
# Changes LVM so pvcreate can be used
create-zram-lvm.sh


81 changes: 81 additions & 0 deletions distrac/distrac-config-log.sh
@@ -0,0 +1,81 @@
#!/usr/bin/env bash
folder="."
for i in "$@"
do
case $i in
-f=*|--folder=*)
folder="${i#*=}"
mkdir $folder 2> /dev/null
shift # past argument=value
;;
*)
# unknown option
;;
esac
done

echo "
#Log to /dev/null
log flush on exit = false
log file = /dev/null
mon cluster log file = /dev/null
#Log level 0 and memory 0
debug default = 0/0
debug lockdep = 0/0
debug context = 0/0
debug crush = 0/0
debug mds = 0/0
debug mds balancer = 0/0
debug mds locker = 0/0
debug mds log = 0/0
debug mds log expire = 0/0
debug mds migrator = 0/0
debug buffer = 0/0
debug timer = 0/0
debug filer = 0/0
debug striper = 0/0
debug objecter = 0/0
debug rados = 0/0
debug rbd = 0/0
debug rbd mirror = 0/0
debug rbd replay = 0/0
debug journaler = 0/0
debug objectcacher = 0/0
debug client = 0/0
debug osd = 0/0
debug optracker = 0/0
debug objclass = 0/0
debug filestore = 0/0
debug journal = 0/0
debug ms = 0/0
debug mon = 0/0
debug monc = 0/0
debug paxos = 0/0
debug tp = 0/0
debug auth = 0/0
debug crypto = 0/0
debug finisher = 0/0
debug reserver = 0/0
debug heartbeatmap = 0/0
debug perfcounter = 0/0
debug rgw = 0/0
debug rgw sync = 0/0
debug civetweb = 0/0
debug javaclient = 0/0
debug asok = 0/0
debug throttle = 0/0
debug refs = 0/0
debug compressor = 0/0
debug bluestore = 0/0
debug bluefs = 0/0
debug bdev = 0/0
debug kstore = 0/0
debug rocksdb = 0/0
debug leveldb = 0/0
debug memdb = 0/0
debug fuse = 0/0
debug mgr = 0/0
debug mgrc = 0/0
debug dpdk = 0/0
debug eventtrace = 0/0
" >> $folder/ceph.conf