For optimum performance the sunit and swidth XFS mount parameters should correspond to the underlying RAID array: sunit matches the per-disk stripe (chunk) size, and swidth is sunit multiplied by the number of data drives. Here are my notes on these settings:
sunit/swidth values reported by xfs_info are in filesystem blocks, not 512-byte sectors; the default filesystem block size is 4k (I think mkfs.xfs chooses this automagically)
swidth covers n-1 drives for RAID5 (just the data drives, not the parity)
mount -o sunit=512,swidth=1536 /dev/md0 [mountpoint]
-Here sunit is set for a 256k stripe size (512 sectors * 512 bytes / 1024 = 256k) and 4 drives total in the array (so swidth = 3 * 512)
-xfs_info will report this as sunit=64blks, swidth=192blks with the default 4kb block size. (64 blocks * 4096 bytes / 1024 = 256k) (64 * 3 = 192)
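These unit conversions are easy to get wrong, so they can be sanity-checked with shell arithmetic (all values from the 4-drive example above):

```shell
# mount takes sunit/swidth in 512-byte sectors:
echo $((512 * 512 / 1024))    # sunit in KiB: 256 (the stripe size)
echo $((512 * 3))             # swidth: 3 data disks * sunit = 1536
# xfs_info reports in filesystem blocks (4096 bytes by default):
echo $((512 * 512 / 4096))    # sunit in 4k blocks: 64
echo $((64 * 3))              # swidth in 4k blocks: 192
```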
Unfortunately I don't have the references for this info, which is a shame because the info was hard to nail down.
Notes on Linux system configuration based on the personal documentation of my systems. Hopefully they'll help some people out; it's my way of reciprocating for all those who took the time to post the solutions that have helped me over the years.
Wednesday, December 29, 2010
MDADM RAID5 Data Scrubbing
If you have a RAID5 array made up of large disks the odds are good that you will experience an unreadable block while rebuilding from a failed drive. To try and find these blocks preemptively, set up regular data scrubbing using cron.
Something along the lines of:
# crontab -e
0 4 * * 3 /bin/echo check > /sys/block/md0/md/sync_action
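That crontab line kicks off a check at 4 AM every Wednesday. If the same crontab ends up on machines that may not have the array, the echo can be wrapped in a small guard so the job exits quietly where /dev/md0 doesn't exist (a sketch; md0 and the function name are just examples):

```shell
#!/bin/sh
# start_scrub: only start a check if the md sysfs control file exists and
# is writable, so the cron job is harmless on hosts without the array.
start_scrub() {
    ctl="/sys/block/$1/md/sync_action"
    if [ -w "$ctl" ]; then
        echo check > "$ctl"
    fi
    return 0
}
start_scrub md0
```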
References:
http://en.gentoo-wiki.com/wiki/RAID/Software#Data_Scrubbing
An overly dramatic article on this issue:
http://www.zdnet.com/blog/storage/why-raid-5-stops-working-in-2009/162
Manually unban a fail2ban banned IP address
To manually unban an IP address that fail2ban has banned:
iptables -D fail2ban-ssh 1
Where fail2ban-ssh is the chain the IP is in and 1 is the position of the IP in the chain. Use iptables -L to gather this info.
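iptables can also print each rule's position directly with --line-numbers, which saves counting by hand. A dry-run sketch that just prints the two commands rather than running them (show_unban is a hypothetical helper; the chain name and rule number are the example values from above):

```shell
# Print (not run) the inspect + unban commands for a chain/rule pair.
show_unban() {
    echo "iptables -L $1 -n --line-numbers"   # list rules with their positions
    echo "iptables -D $1 $2"                  # delete the rule at that position
}
show_unban fail2ban-ssh 1
```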
Make fail2ban's apache-auth work with auth_digest
By default fail2ban's apache-auth only works with auth_basic. To make it work with auth_digest:
vi /etc/fail2ban/filter.d/apache-auth.conf
delete the old failregex line and replace it with:
failregex = [[]client <HOST>[]] .* user .* authentication failure
[[]client <HOST>[]] .* user .* not found
[[]client <HOST>[]] .* user .* password mismatch
Source:
http://www.fail2ban.org/wiki/index.php/Fail2ban:Community_Portal#Modify_.22apache-auth.conf.22_to_allow_banning_on_server_using_digest_authentication
Speed up MDADM RAID5 array using stripe_cache_size
edit /etc/rc.local
add:
echo 4096 > /sys/block/md0/md/stripe_cache_size
To make the change take effect immediately:
echo 4096 > /sys/block/md0/md/stripe_cache_size
4096 was chosen after fairly extensive bonnie testing of various sizes from 256 to 8192 with and without NCQ enabled on the drives. Using this setting increased my write speeds by about 50%.
References:
NeilB's post: "You can possibly increase the speed somewhat by increasing the buffer space that is used, thus allowing larger reads followed by larger writes. This is done by increasing /sys/block/mdXX/md/stripe_cache_size"
Another one of NeilB's posts regarding this topic:
"Changing the stripe_cache_size will not risk causing corruption.
If you set it too low the reshape will stop progressing. You can then set it to a larger value and let it continue.
If you set it too high you risk tying up all of your system memory in the cache. In this case your system might enter a swap-storm and it might be rather hard to set it back to a lower value.
The amount of memory used per cache entry is about 4K times the number of devices in the array."
NeilB is Neil Brown, the author of MDADM.
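Neil's sizing rule makes the memory cost easy to estimate before committing a value. For the stripe_cache_size=4096 and 4-drive array used above (a rough estimate, based on his ~4K-per-entry figure):

```shell
# cache memory ~= stripe_cache_size * 4 KiB per entry * number of devices
echo $((4096 * 4 * 4))          # KiB: 65536
echo $((4096 * 4 * 4 / 1024))   # MiB: 64
```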
Control MDADM rebuild/reshape speed
edit /etc/sysctl.conf
add:
dev.raid.speed_limit_min = 400000
dev.raid.speed_limit_max = 400000
# sysctl -p
or temporarily change by:
echo 400000 > /proc/sys/dev/raid/speed_limit_min
echo 400000 > /proc/sys/dev/raid/speed_limit_max
Where 400000 is the desired speed in KB/s. My system doesn't have a heavy load on it, so I like to max out the reshape speed at 400000 (~400MB/s).
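Since the speed_limit values are in KB/s, the arithmetic behind calling 400000 "400MB/s" is just:

```shell
echo $((400000 / 1000))   # 400 (MB/s, decimal)
echo $((400000 / 1024))   # 390 (MiB/s, integer division)
```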
Useful MDADM commands
Notes: /dev/hdb{1,2,3} are current RAID members. /dev/hdb4 is a new "drive"
#RAID Status
cat /proc/mdstat
mdadm --detail /dev/md0
#Copy Partition Structure for a New Drive (Useful when adding a new drive)
sfdisk -d /dev/hdb1 | sfdisk /dev/hdb3
#Add another drive to an array
mdadm /dev/md0 --add /dev/hdb4
mdadm /dev/md0 --grow -n 4
resize2fs /dev/md0
xfs_growfs /mnt/RAID (for XFS)
#Replace a failed drive
mdadm /dev/md0 --fail /dev/hdb2
mdadm /dev/md0 --remove /dev/hdb2
//Partition the new drive same as others (info above)
mdadm /dev/md0 --add /dev/hdb4
#Increase size of array (all the drives have gotten bigger)
mdadm /dev/md0 --grow --size=max
resize2fs /dev/md0
xfs_growfs /mnt/RAID (for XFS)
#Remove RAID array
//fail and remove all the drives (see replace a failed drive above)
//unmount array
mdadm --stop /dev/md0
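The status commands above are easy to fold into a script. A hedged sketch: a hypothetical helper that scans mdstat-format text for a degraded status field (mdadm also offers --monitor for this job on a live system):

```shell
# mdstat_healthy: succeed when every array status field like [UUU] is
# complete; fail when an underscore marks a missing member, e.g. [UU_].
mdstat_healthy() {
    ! grep -Eq '\[[U_]*_[U_]*\]'
}

# Example against a sample degraded status line; on a real system you would
# feed it /proc/mdstat instead.
printf '[3/2] [UU_]\n' | mdstat_healthy || echo "array degraded"
```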
Migrate to RAID5 from a single disk
The following are notes for adding just 2 drives to an existing drive full of data in order to create a RAID5 (without backing up and restoring).
The gist of the process is to create a new RAID5 array in a degraded state using the 2 new drives. Copy the data over to the degraded array, then add the drive that originally contained the data to the array.
Notes: /dev/hdb1 and /dev/hdb2 are new drives, /dev/hdb3 is the old full drive
(You'll note that these are different partitions not drives because these notes are from when I was preparing/testing)
#Get mdadm
apt-get install mdadm (Debian/Ubuntu)
#Optionally: Create 2 unformatted partitions slightly smaller than the drive size on the new hard drives. Change flags to raid in gparted or use fdisk to change IDs to 'fd' (fdisk /dev/sda; t; 1; fd).
#Step 1: Create the RAID 5 degraded array:
mdadm -C /dev/md0 -l 5 -n 3 missing /dev/hdb1 /dev/hdb2
(the -l is a lowercase letter 'L', not a one)
(where /dev/hdb1 and /dev/hdb2 are the 2 new drive partitions)
#Create a file system on the RAID, ex:
mkfs /dev/md0 -t ext3
#Create a mount point and mount the RAID partition (/dev/md0)
#Copy existing files onto the raid:
rsync -avH --progress -x /existing/files/ /RAID/mountpoint/
#Clear the full drive, (optionally create unformatted partition on it of same size as other 2 drives in RAID with raid flag (or 'fd' ID; see above))
#Add the originally full drive to the array:
mdadm /dev/md0 -a /dev/hdb3
#To view status of the rebuild:
watch -n1 'cat /proc/mdstat'
(the 1 in -n1 is the number one, not a lowercase 'L')
#Check to make sure everything looks alright:
mdadm --detail /dev/md0
References:
http://gentoo-wiki.com/HOWTO_Migrate_To_RAID
http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch26_:_Linux_Software_RAID
Build VLC on Debian/Ubuntu
These are the configure lines I used for building VLC and two of its dependencies. This may not be for a complete build...I don't recall why I was building it.
x264:
./configure --enable-pic
FFMPEG: (Downloaded stable 0.5.1 source, svn did not build)
./configure --enable-version3 --enable-libmp3lame --enable-libtheora --enable-libx264 --enable-libgsm --enable-postproc --enable-libxvid --enable-libfaac --enable-pthreads --enable-libvorbis --enable-libfaad --enable-gpl --enable-x11grab --enable-nonfree --enable-shared
VLC:
./configure --enable-static --build=i486-linux-gnu --config-cache --disable-maintainer-mode --disable-update-check --enable-fast-install --enable-release --prefix=/usr --with-binary-version=2 --disable-atmo --disable-fluidsynth --disable-gnomevfs --disable-kate --disable-mtp --disable-x264 --disable-zvbi --enable-a52 --enable-aa --enable-bonjour --enable-caca --enable-dvb --enable-dvbpsi --enable-dvdnav --enable-faad --enable-flac --enable-freetype --enable-fribidi --enable-ggi --enable-gnutls --enable-jack --enable-libass --enable-libmpeg2 --enable-lirc --disable-live555 --enable-mad --enable-mkv --enable-mod --disable-mozilla --disable-nls --enable-mpc --enable-ncurses --enable-notify --enable-ogg --disable-pulse --disable-qt4 --enable-realrtsp --enable-sdl --disable-skins2 --enable-smb --enable-speex --enable-svg --enable-taglib --enable-telx --enable-theora --enable-twolame --enable-vcd --enable-vcdx --enable-vorbis --with-mozilla-pkg=libxul --enable-alsa --enable-pvr --enable-v4l --enable-v4l2 --enable-svgalib --disable-lua
References:
http://ubuntuforums.org/showthread.php?t=786095
http://www.adminsehow.com/2009/07/how-to-install-ffmpeg-on-debian-lenny-from-svn/
Side note on transcoding:
If you're transcoding HD content and get a "can't find encoder" error, include "fps=25" in the transcode options.
Building and installing SSHGuard
First, take my advice and don't. Use fail2ban. It's much nicer, more robust and a commonly available package for most distributions. If you insist though:
sudo apt-get install build-essential
wget http://internap.dl.sourceforge.net/sourceforge/sshguard/sshguard-1.0.tar.bz2
tar -xf sshguard-1.0.tar.bz2
cd sshguard-1.0
sudo apt-get install autoconf
try ./configure --with-firewall=iptables
If it still won't compile:
sudo apt-get install linux-headers-generic
./configure --with-firewall=iptables
sudo make
sudo make install
//copy sshguard script to /etc/init.d, chmod +x it if necessary,
//convert to unix format if necessary (see my previous post,
//or Google for info on how to do this)
sudo update-rc.d sshguard defaults
sudo iptables -N sshguard
sudo iptables -A INPUT -p tcp --dport 22 -j sshguard
sudo ip6tables -N sshguard
sudo ip6tables -A INPUT -p tcp --dport 22 -j sshguard
sudo iptables-save > iptables.conf (assuming I'm in ~/)
sudo ip6tables-save > ip6tables.conf (assuming I'm in ~/)
sudo nano /etc/rc.local
add line: iptables-restore < /home/[user]/iptables.conf
add line: ip6tables-restore < /home/[user]/ip6tables.conf
Convert from DOS text file to UNIX text file
Ahhh, those annoying DOS text files. To convert them:
$ tr -d '\15\32' < dosfile.txt > unixfile.txt
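The two octal escapes are the carriage return (\15, the CR half of DOS CRLF line endings) and the old DOS end-of-file marker (\32, Ctrl-Z). A quick round trip shows the effect:

```shell
# Build a DOS-style file (CRLF endings plus a trailing Ctrl-Z) and convert it:
printf 'hello\r\nworld\r\n\032' > dosfile.txt
tr -d '\15\32' < dosfile.txt > unixfile.txt
# unixfile.txt now has plain LF endings and no Ctrl-Z:
od -c unixfile.txt
```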
Set up PPTP server
The last post talked about setting up a PPTP client and forwarding all of that client's traffic over the VPN. Here's my notes on setting up the server on Ubuntu (or Debian, I don't recall which I was using):
Open/forward TCP port 1723 and allow GRE (IP protocol 47)
(1723 is PPTP's control port; 47 is a protocol number, not a port, so GRE needs a protocol rule rather than a port forward)
sudo apt-get install pptpd
edit /etc/pptpd.conf
localip [IP_ADDR] (any unused IP address in network)
remoteip [IP_ADDR_RANGE] (ex: "192.168.5.200-220")
(range of IPs to assign to clients)
edit /etc/ppp/chap-secrets
[username] pptpd password *
edit /etc/ppp/pptpd-options
uncomment the "ms-dns" lines and insert your DNS servers
after them
Set PPTPD server to forward packets:
If "cat /proc/sys/net/ipv4/ip_forward" isn't 1, change it to 1:
To change it temporarily:
sudo su
echo 1 > /proc/sys/net/ipv4/ip_forward
To change it permanently:
nano /etc/sysctl.conf
add the line "net.ipv4.ip_forward = 1"
Ensure server is configured to do NAT or masquerade:
# iptables --table nat --append POSTROUTING --out-interface eth0 --jump MASQUERADE
To make this permanent:
sudo iptables-save > iptables.conf (assuming I'm in ~/)
sudo nano /etc/rc.local
add line: iptables-restore < /home/[user]/iptables.conf
Sources:
http://poptop.sourceforge.net/dox/diagnose-forwarding.phtml
http://forums.bit-tech.net/showthread.php?t=132029
Set up PPTP client and tunnel all traffic through VPN
Be smart...don't do this. Just use OpenVPN, it's much easier. However, if for some reason you have to use PPTP:
To tunnel all traffic except DNS over VPN:
Add info for the VPN account you're using to /etc/ppp/chap-secrets
ex: [username] [server] password *
Create a file (filename = name you want to call VPN connection) in /etc/ppp/peers:
Put connection info in this file
ex: pty "pptp [VPN_ADDR] --nolaunchpppd"
name [NAME]
remotename [RNAME]
require-mppe-128
refuse-eap
noauth
file /etc/ppp/options.pptp
ipparam [RNAME]
Edit the options.pptp file if you want. (I didn't change anything)
Create a script (ie AllToTunnel) in /etc/ppp/ip-up.d/ containing the following
(with the modifications indicated below):
Modifications:
change PRIMARY to the network interface used to connect to internet
change SERVER to the address of the PPTP server
change "tunnel" in the last if statement to the name of your tunnel
#!/bin/sh
# pppd ip-up script for all-to-tunnel routing
# name of primary network interface (before tunnel)
PRIMARY=eth0
# address of tunnel server
SERVER=tunnel.example.com
# provided by pppd: string to identify connection aka ipparam option
CONNECTION=$6
if [ "${CONNECTION}" = "" ]; then CONNECTION=${PPP_IPPARAM}; fi
# provided by pppd: interface name
TUNNEL=$1
if [ "${TUNNEL}" = "" ]; then TUNNEL=${PPP_IFACE}; fi
# if we are being called as part of the tunnel startup
if [ "${CONNECTION}" = "tunnel" ] ; then
# direct tunnelled packets to the tunnel server
route add -host ${SERVER} dev ${PRIMARY}
# direct all other packets into the tunnel
route del default ${PRIMARY}
route add default dev ${TUNNEL}
fi
Don't forget to chmod a+x the file after you're done.
Create a script (ie AllToTunnelDown) in /etc/ppp/ip-down.d/ containing the following (with the modifications indicated below):
Modifications:
change "tunnel" in the last if statement to the name of your tunnel
#!/bin/sh
# pppd ip-down script for all-to-tunnel routing
# name of primary network interface (before tunnel)
PRIMARY=eth0
# provided by pppd: string to identify connection aka ipparam option
CONNECTION=$6
if [ "${CONNECTION}" = "" ]; then CONNECTION=${PPP_IPPARAM}; fi
# provided by pppd: interface name
TUNNEL=$1
if [ "${TUNNEL}" = "" ]; then TUNNEL=${PPP_IFACE}; fi
# if we are being called as part of the tunnel shutdown
if [ "${CONNECTION}" = "tunnel" ] ; then
# direct packets back to the original interface
route del default ${TUNNEL}
route add default dev ${PRIMARY}
fi
Don't forget to chmod a+x the file after you're done.
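On modern distributions the route command used in these scripts is deprecated in favor of iproute2. A dry-run sketch that prints (rather than executes) the equivalent ip commands for the ip-up case; the function name, interface names, and server address are just illustrative:

```shell
# Print the iproute2 equivalents of the three route changes in ip-up.
# (ip route wants an address, so resolve the server name first if needed.)
ip_up_routes() {
    server=$1; primary=$2; tunnel=$3
    echo "ip route add ${server}/32 dev ${primary}"
    echo "ip route del default dev ${primary}"
    echo "ip route add default dev ${tunnel}"
}
ip_up_routes 198.51.100.7 eth0 ppp0
```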
References:
http://pptpclient.sourceforge.net/howto-debian.phtml#configure_by_hand
http://pptpclient.sourceforge.net/routing.phtml#all-to-tunnel
Permanently Change MOTD in Debian Lenny
To permanently change the MOTD in Debian Lenny so that it doesn't always get rewritten, do the following:
Open /etc/init.d/bootmisc.sh
Find this section:
# Update motd
uname -snrvm > /var/run/motd
[ -f /etc/motd.tail ] && cat /etc/motd.tail >> /var/run/motd
Make it look like this (comment out the uname line):
# Update motd
# uname -snrvm > /var/run/motd
[ -f /etc/motd.tail ] && cat /etc/motd.tail >> /var/run/motd
Then edit /etc/motd.tail and put into it whatever you want your MOTD to be.
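The reason edits vanish is visible in that fragment: the uname line recreates /var/run/motd from scratch on every boot, and motd.tail is merely appended. The logic can be reproduced harmlessly against temporary files (paths here are stand-ins, not the real ones):

```shell
# Simulate bootmisc.sh's MOTD handling with temp files (illustration only).
motd=$(mktemp); tail_file=$(mktemp)
echo "my custom banner" > "$tail_file"       # stands in for /etc/motd.tail
echo "old hand edits"   > "$motd"            # pretend we edited the MOTD
uname -snrvm > "$motd"                       # boot: clobbers the edits
[ -f "$tail_file" ] && cat "$tail_file" >> "$motd"
cat "$motd"                                  # uname line + banner; edits gone
```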
Installing Ubuntu, stuck on 40% Configuring Apt
This is probably no longer an issue since I encountered it years ago, but occasionally when installing Ubuntu on my laptop, the installation would hang at 40% on "Configuring apt". To fix this I did the following:
Ctrl + Alt + F2 (To get to a different terminal)
# ifconfig eth0 down
# ifconfig eth1 down
...
Ctrl + Alt + F1 to get back
After install, fix /etc/apt/sources.list
(In other words, uncomment a lot of lines.)
No Bufferspace Available Error
While using my system (Ubuntu at the time) on a large, fast network (for perfectly legal means), it occasionally became inundated with connections and new connections could not be established, failing with a "No buffer space available" error, ex:
$ ping anywhere
ping: sendmsg: No buffer space available
To fix this, I had to increase the ARP table size. To do this permanently, edit /etc/sysctl.conf and add the following lines:
net.ipv4.neigh.default.gc_thresh1 = 1024
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
Then reload the settings:
# sysctl -p
For a temporary fix:
echo 1024 > /proc/sys/net/ipv4/neigh/default/gc_thresh1
echo 2048 > /proc/sys/net/ipv4/neigh/default/gc_thresh2
echo 4096 > /proc/sys/net/ipv4/neigh/default/gc_thresh3
View available framebuffer codes
I've never had much luck using the framebuffer codes from tables available online when configuring my kernel parameters (e.g. "vga="). To view the actual codes available on your system, do the following:
On Debian/Ubuntu:
# aptitude install hwinfo
# hwinfo --framebuffer
I believe hwinfo is available in OpenSuSE (it's a SuSE tool) as well, but I don't know how to obtain it for other distributions. You could probably find and download the OpenSuSE source RPM package and use that to build it if you can't find a native package.
An oldie, but it may still help
Several Ubuntu releases ago, my network connection became pathetically slow when I was using my DSL connection. I still have no idea what caused it, but the following worked well to restore the connection speed (sadly, these are the only notes I've ever shared prior to this blog):
Add the following to /etc/sysctl.conf, where "[VALUE]" should either be 131072, 262144, or 524288. I had the best luck with 262144, but it might be wise to try them all:
# Tweaks for faster broadband...
net.core.rmem_default = [VALUE]
net.core.rmem_max = [VALUE]
net.core.wmem_default = [VALUE]
net.core.wmem_max = [VALUE]
net.ipv4.tcp_wmem = 4096 87380 [VALUE]
net.ipv4.tcp_rmem = 4096 87380 [VALUE]
net.ipv4.tcp_mem = 262144 262144 [VALUE]
net.ipv4.tcp_rfc1337 = 1
net.ipv4.ip_no_pmtu_disc = 0
net.ipv4.tcp_sack = 1
net.ipv4.tcp_fack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_ecn = 0
net.ipv4.route.flush = 1
Then run 'sysctl -p' to reload the sysctl settings.
First Post!
Years ago, when I first started setting up my systems, I documented all the steps I took. A lot of those steps were only figured out after hours of hunting around the internet. I promised back then that, to pay tribute to all the posts that helped me out, I would post my own notes in the hope they help others. It took me several years, but I'm finally going to begin posting them, and maybe some new ones along the way...
The posts will generally be crude, near copy-and-pastes of my actual notes, but they give examples of how to do various things, which I usually find most helpful.