SOLVED but take a look! HELP! File system used up ext4

General support questions
Post Reply
cyberwatchers
Posts: 20
Joined: 2015/02/15 12:24:32

SOLVED but take a look! HELP! File system used up ext4

Post by cyberwatchers » 2017/07/17 19:05:02

This is what I am seeing, and I am not sure what to do at this point. People cannot access shares...
[root@enersurv /]# du -kscx *
1032474880 archive
0 bin
145240 boot
415885452 Data
0 dev
23256 etc
132 home
0 lib
0 lib64
16 lost+found
4 media
4 mnt
4 opt
du: cannot access ‘proc/14688/task/14688/fd/4’: No such file or directory
du: cannot access ‘proc/14688/task/14688/fdinfo/4’: No such file or directory
du: cannot access ‘proc/14688/fd/4’: No such file or directory
du: cannot access ‘proc/14688/fdinfo/4’: No such file or directory
0 proc
60 root
164684 run
0 sbin
4 srv
0 sys
24 tmp
1023244 usr
159988 var
1449876992 total
[root@enersurv /]# cd /Data
[root@enersurv Data]# ls
bleadingham Documents - Shortcut.lnk enersurv jleadingham jsinkule lost+found mhaney shensley tross
[root@enersurv Data]# df -h
Filesystem               Size  Used  Avail  Use%  Mounted on
/dev/mapper/centos-root  178G  178G     0   100%  /
devtmpfs                 3.8G     0   3.8G    0%  /dev
tmpfs                    3.8G     0   3.8G    0%  /dev/shm
tmpfs                    3.8G  161M   3.7G    5%  /run
tmpfs                    3.8G     0   3.8G    0%  /sys/fs/cgroup
/dev/md127               466M  144M   294M   33%  /boot
/dev/md125               1.8T  397G   1.4T   23%  /Data
/dev/sdf1                4.6T  985G   3.4T   23%  /archive
//10.2.10.173/Backups     11T  3.1T   7.6T   29%  /mnt/qnas
tmpfs                    778M     0   778M    0%  /run/user/0

[root@enersurv Data]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md125 : active raid5 sdb1[1] sdd1[3] sda1[0]
1953260544 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
bitmap: 0/8 pages [0KB], 65536KB chunk

md126 : active raid1 sdc2[0] sde2[1]
195358720 blocks super 1.2 [2/2] [UU]
bitmap: 1/2 pages [4KB], 65536KB chunk

md127 : active raid1 sdc1[0] sde1[1]
500672 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>

I am sure that the /Data drive is on a separate RAID, but this looks like it's on the system side. Is there any way I can clean this up?

So this is resolved... I have a script which runs a backup job: it takes the /Data files etc. and places them on the QNAP, which I have mounted at /mnt/qnas. For whatever reason, yesterday's FULL backup, instead of going onto /mnt/qnas, went underneath it: locally!

When I unmounted /mnt/qnas and did an ls, there it was, all by itself: this 185GB file... How did that happen? The full backup it took on the 9th is on the QNAP...
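Side note: a way to peek at what is hiding underneath an active mount point without unmounting it is to bind-mount / somewhere else and look there. A rough sketch, assuming /mnt/rootcheck is a spare empty directory (that name is just an example):

# Bind-mount the root filesystem to a scratch directory; this exposes
# whatever is stored on / underneath other mount points such as /mnt/qnas.
mkdir -p /mnt/rootcheck
mount --bind / /mnt/rootcheck

# This shows the directory on the root filesystem itself, not the QNAP
# share that is normally mounted over it.
du -sh /mnt/rootcheck/mnt/qnas

# Clean up afterwards.
umount /mnt/rootcheck
rmdir /mnt/rootcheck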

Here is my script:
COMPUTER=enersurv
#REMOTECOMPUTER=fedora # name of this computer
DIRECTORIES="/Data/enersurv /Data/jleadingham /Data/bleadingham /Data/jsinkule /Data/tross /Data/shensley /Data/mhaney" # directories to back up
BACKUPDIR=/mnt/qnas
#REMOTEBACKUPDIR=/home2/Backups/cyberserve # where to store the backups
TIMEDIR=/mnt/qnas/last-full # where to store time of full backup
TAR=/bin/tar # name and location of tar

#You should not have to change anything below here

PATH=/usr/local/bin:/usr/bin:/bin
DOW=`date +%a` # Day of the week e.g. Mon
DOM=`date +%d` # Date of the Month e.g. 27
DM=`date +%d%b` # Date and Month e.g. 27Sep

# On the 1st of the month a permanent full backup is made
# Every Sunday a full backup is made - overwriting last Sunday's backup
# The rest of the time an incremental backup is made. Each incremental
# backup overwrites last week's incremental backup of the same name.
#
# if NEWER = "", then tar backs up all files in the directories
# otherwise it backs up files newer than the NEWER date. NEWER
# gets its date from the file written every Sunday.

# Monthly full backup
if [ $DOM = "01" ]; then
    NEWER=""
    $TAR $NEWER -zcf $BACKUPDIR/$COMPUTER-$DM.tar.gz $DIRECTORIES
    #scp $BACKUPDIR/$COMPUTER-$DM.tar $REMOTECOMPUTER:$REMOTEBACKUPDIR
fi

# Weekly full backup
if [ $DOW = "Sun" ]; then
    NEWER=""
    NOW=`date +%d-%b`

    # Update full backup date
    echo $NOW > $TIMEDIR/$COMPUTER-full-date
    $TAR $NEWER -zcf $BACKUPDIR/$COMPUTER-$DOW.tar.gz $DIRECTORIES
    #scp $BACKUPDIR/$COMPUTER-$DOW.tar $REMOTECOMPUTER:$REMOTEBACKUPDIR

# Make incremental backup - overwrite last week's
else

    # Get date of last full backup
    NEWER="--newer `cat $TIMEDIR/$COMPUTER-full-date`"
    $TAR $NEWER -zcf $BACKUPDIR/$COMPUTER-$DOW.tar.gz $DIRECTORIES
    #scp $BACKUPDIR/$COMPUTER-$DOW.tar $REMOTECOMPUTER:$REMOTEBACKUPDIR
fi





Maybe the QNAP was unavailable, so it wrote locally? Could that happen?

User avatar
TrevorH
Site Admin
Posts: 33202
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: SOLVED but take a look! HELP! File system used up ext4

Post by TrevorH » 2017/07/17 20:54:39

cyberwatchers wrote: Maybe the QNAP was unavailable, so it wrote locally? Could that happen?
Yes. If the mount fails and you don't check if it worked then you will write to the directory on the root filesystem where it was meant to have been mounted and fill that up instead.
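For example, something like this shows at a glance whether the share is really there (a quick check, assuming the share mounts at /mnt/qnas):

# If /mnt/qnas is a real mount point, findmnt prints it; if the mount
# failed, findmnt exits non-zero and anything written there lands on /.
findmnt /mnt/qnas || echo "/mnt/qnas is NOT mounted"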
The future appears to be RHEL or Debian. I think I'm going Debian.
Info for USB installs on http://wiki.centos.org/HowTos/InstallFromUSBkey
CentOS 5 and 6 are dead, do not use them.
Use the FAQ Luke

cyberwatchers
Posts: 20
Joined: 2015/02/15 12:24:32

Re: SOLVED but take a look! HELP! File system used up ext4

Post by cyberwatchers » 2017/07/17 21:06:37

Daaaag! Yeah, I am going to have to see how I can auth to the QNAP via SCP or something. I have that option available in the script, it's just hashed out...
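If it helps, key-based SSH is the usual way to make that non-interactive. A rough sketch, assuming the QNAP at 10.2.10.173 has SSH enabled with an admin account and a /share/Backups target directory (those details are assumptions):

# One-time setup: create a key pair and install the public key on the QNAP.
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
ssh-copy-id admin@10.2.10.173

# The hashed-out scp lines in the script could then run without a
# password prompt, roughly like:
#scp $BACKUPDIR/$COMPUTER-$DOW.tar.gz admin@10.2.10.173:/share/Backups/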

User avatar
TrevorH
Site Admin
Posts: 33202
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: SOLVED but take a look! HELP! File system used up ext4

Post by TrevorH » 2017/07/17 21:25:44

Should be easy enough to check if the directory you're about to write to is on its own mounted filesystem and bail out if not.
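Something along these lines at the top of the backup script would do it; a minimal sketch, assuming /mnt/qnas has an /etc/fstab entry so a bare "mount /mnt/qnas" can bring it back up:

#!/bin/bash
BACKUPDIR=/mnt/qnas

# Try to (re)mount the share if it is not already mounted.
mountpoint -q "$BACKUPDIR" || mount "$BACKUPDIR"

# Bail out if the backup target is still not its own mounted filesystem,
# otherwise the tarballs land on the root filesystem underneath it.
if ! mountpoint -q "$BACKUPDIR"; then
    echo "$BACKUPDIR is not mounted - aborting backup" >&2
    exit 1
fi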
The future appears to be RHEL or Debian. I think I'm going Debian.
Info for USB installs on http://wiki.centos.org/HowTos/InstallFromUSBkey
CentOS 5 and 6 are dead, do not use them.
Use the FAQ Luke

cyberwatchers
Posts: 20
Joined: 2015/02/15 12:24:32

Re: SOLVED but take a look! HELP! File system used up ext4

Post by cyberwatchers » 2017/07/17 22:43:53

TrevorH wrote: Should be easy enough to check if the directory you're about to write to is on its own mounted filesystem and bail out if not.
Copy that. Will be looking into that, thanks.

Post Reply