
Sunday, June 26, 2016

Apache WSGI: how to log Python print statements in the Apache log


$sudo vim /etc/apache2/mods-available/wsgi.conf
WSGIRestrictStdout Off

Then restart Apache:
$ sudo service apache2 restart

Then check the log file:
$ sudo tail -f /var/log/apache2/horizon_error.log
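With WSGIRestrictStdout turned off, plain print statements inside the WSGI application end up in the Apache error log. A minimal sketch (a hypothetical app, not tied to Horizon):

```python
import sys

def application(environ, start_response):
    # With WSGIRestrictStdout Off, this print shows up in the Apache error log.
    print("handling %s" % environ.get("PATH_INFO", "/"))
    sys.stdout.flush()  # flush so the line appears in the log immediately

    body = b"hello from wsgi\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```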

How to share files and directories from a Vagrant VM using port forwarding and Python SimpleHTTPServer

1)
Run the following commands on the baremetal node while the Vagrant VMs are running:
$ sudo sysctl net.ipv4.ip_forward=1
$ sudo iptables -t nat -L
$ sudo iptables -t nat -A PREROUTING -p tcp -d 10.140.15.64 --dport 8085 -j DNAT --to-destination 192.168.56.20:8083
$ sudo iptables -t nat -A POSTROUTING -j MASQUERADE


* Replace -A with -D to delete a rule.
* Run these commands on the baremetal node "10.140.15.64".
* 10.140.15.64 === IP of the baremetal node where Vagrant with VirtualBox is running.
* 192.168.56.20 === Host-only adapter IP of the VirtualBox VM running on the baremetal node.

2)
Run python SimpleHTTPServer from the directory you want to share.
Run this command in your vagrant VM; the port must match the --to-destination port in the iptables rule above:
$ sudo python -m SimpleHTTPServer 8083

3)
Access the shared directory from your laptop; the port must match the --dport in the iptables rule above:
http://10.140.15.64:8085
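On newer systems with Python 3, the same share can be done with http.server. A sketch (assumes Python 3.7+ for the directory argument):

```python
import http.server
import socketserver

def make_server(port, directory="."):
    # Serve `directory` over HTTP, like "python -m SimpleHTTPServer <port>".
    # (The `directory` keyword needs Python 3.7+.)
    def handler(*args, **kwargs):
        return http.server.SimpleHTTPRequestHandler(
            *args, directory=directory, **kwargs)
    return socketserver.TCPServer(("", port), handler)

# Usage inside the VM, on the port the iptables rule forwards to:
#   with make_server(8083) as httpd:
#       httpd.serve_forever()
```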




Port forwarding with iptables to access Horizon running in a Vagrant VM on a remote baremetal node

$ sudo sysctl net.ipv4.ip_forward=1
$ sudo iptables -t nat -L
$ sudo iptables -t nat -A PREROUTING -p tcp -d 10.140.15.64 --dport 8081 -j DNAT --to-destination 192.168.56.20:80
$ sudo iptables -t nat -A POSTROUTING -j MASQUERADE


* Replace -A with -D to delete a rule.
* Run these commands on the baremetal node "10.140.15.64".
* 10.140.15.64 === IP of the baremetal node where Vagrant with VirtualBox is running.
* 192.168.56.20 === Host-only adapter IP of the VirtualBox VM running on the baremetal node.
* We can access Horizon running on 192.168.56.20 from our laptop at http://10.140.15.64:8081/dashboard

1)
Example:


a)
Check IP Forwarding in bare-metal node:

$ sudo sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0

b)
Enable IP Forwarding 
in bare-metal node:
$ sudo sysctl net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1

c)
Check rules 
in bare-metal node:
$ sudo iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination        

Chain INPUT (policy ACCEPT)
target     prot opt source               destination        

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination        

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination

d)
Add your forwarding rule in bare-metal node:
$ sudo iptables -t nat -A PREROUTING -p tcp -d 10.140.15.64 --dport 8081 -j DNAT --to-destination 192.168.56.20:80

Tips:
* Add multiple host-only interfaces to the VirtualBox VM and, as the destination IP in the iptables rule, use one of the interface IPs that is reachable (pingable) from the bare-metal node.

e)
Check rules 
in bare-metal node:
$ sudo iptables -t nat -L

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination        
DNAT       tcp  --  anywhere             10.140.15.64         tcp dpt:tproxy to:192.168.56.20:80

Chain INPUT (policy ACCEPT)
target     prot opt source               destination        

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination        

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination

f)
Ask IPtables to Masquerade 
in bare-metal node:
$ sudo iptables -t nat -A POSTROUTING -j MASQUERADE

g)
Check rules 
in bare-metal node:
$ sudo iptables -t nat -L

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination        
DNAT       tcp  --  anywhere             10.140.15.64         tcp dpt:tproxy to:192.168.56.20:80

Chain INPUT (policy ACCEPT)
target     prot opt source               destination        

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination        

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination        
MASQUERADE  all  --  anywhere             anywhere

h)
Access web service running in the VM from laptop.

http://10.140.15.64:8081/
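To confirm the forwarding works end to end, a quick TCP connect check from the laptop helps. A small sketch (host and port taken from this example):

```python
import socket

def port_open(host, port, timeout=2.0):
    # Return True if a TCP connection to host:port succeeds within `timeout`.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage from the laptop, once the DNAT rule is in place:
#   port_open("10.140.15.64", 8081)
```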


Port forwarding with SSH to access Horizon running in a Vagrant VM on a remote baremetal node

$ ssh -L 10.140.15.64:8081:192.168.56.20:80 vagrant@192.168.56.20
password: vagrant

* Run this command on the baremetal node "10.140.15.64".
* 10.140.15.64 === IP of the baremetal node where Vagrant with VirtualBox is running.
* 192.168.56.20 === Host-only adapter IP of the VirtualBox VM running on the baremetal node.
* We can access Horizon running on 192.168.56.20 from our laptop at http://10.140.15.64:8081/dashboard

1)
Configuration of the bare-metal node where the Vagrant VM is running


a)
physical interface of baremetal node

$ ifconfig eth3  <====
eth3      Link encap:Ethernet  HWaddr 11:11:11:11:11:11 
          inet addr:10.140.15.64  Bcast:10.140.15.255  Mask:255.255.255.0
          inet6 addr: fe80::3aea:a7ff:fe11:7ac9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:86183263 errors:0 dropped:0 overruns:0 frame:0
          TX packets:38261525 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:26521496912 (26.5 GB)  TX bytes:6480246966 (6.4 GB)

b)
Hostonly adapter interface of virtualbox

$ ifconfig vboxnet0  <====
vboxnet0  Link encap:Ethernet  HWaddr 0a:00:27:00:00:00 
          inet addr:192.168.56.1  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::800:27ff:fe00:0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:255940 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:79238841 (79.2 MB)

c)
routes in baremetal node
$ route -n

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.140.15.1     0.0.0.0         UG    0      0        0 eth3
10.140.15.0     0.0.0.0         255.255.255.0   U     0      0        0 eth3
169.254.169.254 10.140.15.4     255.255.255.255 UGH   0      0        0 eth3
192.168.56.0    0.0.0.0         255.255.255.0   U     0      0        0 vboxnet0 <====

2)
Configuration of the Vagrant VM

a)
$ ifconfig br-ex <====

br-ex     Link encap:Ethernet  HWaddr 08:00:27:ad:e2:80 
          inet addr:192.168.56.20  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::f474:80ff:fef7:422f/64 Scope:Link
          inet6 addr: 2001:db8::2/64 Scope:Global
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:676 errors:0 dropped:0 overruns:0 frame:0
          TX packets:479 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:119840 (119.8 KB)  TX bytes:675959 (675.9 KB)

b)
Run tcpdump on the br-ex interface to check the traffic:

$ sudo tcpdump -i br-ex






Friday, June 24, 2016

Export no_proxy with multiple IP addresses, localhost, 127.0.0.1

$ export no_proxy="192.168.56.10,localhost,127.0.0.1"
Don't use spaces after the commas.
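Python's own urllib honors no_proxy from the environment, which is a quick way to sanity-check the value. A stdlib sketch:

```python
import os
import urllib.request

# Comma-separated, no spaces after the commas.
os.environ["no_proxy"] = "192.168.56.10,localhost,127.0.0.1"

def bypasses_proxy(host):
    # urllib consults the no_proxy environment variable here.
    return bool(urllib.request.proxy_bypass_environment(host))

print(bypasses_proxy("192.168.56.10"))  # True
print(bypasses_proxy("example.com"))    # False
```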


Thursday, June 23, 2016

Solved: OpenStack "No rejoin-stack.sh script" in DevStack setup; use stack-screenrc instead


$ cd devstack
$ screen -c stack-screenrc




How to Configure VirtualBox NAT and Host-Only Network Interfaces enp0s3 enp0s8 in Ubuntu

$sudo vim /etc/network/interfaces

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto enp0s3
iface enp0s3 inet static
address 10.0.2.11
netmask 255.255.255.0
gateway 10.0.2.2

auto enp0s8
iface enp0s8 inet static
address 192.168.56.5
netmask 255.255.255.0



How to Configure Static IP Address and Set DNS in Ubuntu 16.04 Desktop and Server

$sudo vim /etc/network/interfaces

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto enp0s3
iface enp0s3 inet static
address 10.0.2.11
netmask 255.255.255.0
gateway 10.0.2.2

auto enp0s8
iface enp0s8 inet static
address 192.168.56.5
netmask 255.255.255.0


Tuesday, June 14, 2016

Python Django: how to find user information from a session key

from django.contrib.sessions.models import Session
from django.contrib.auth.models import User

session_key = 'xxxxxxxxxxxxxxxxxxxxxxxxxx'

# Look up the session row by its key.
session = Session.objects.get(session_key=session_key)

# The decoded session payload stores the user's primary key under '_auth_user_id'.
uid = session.get_decoded().get('_auth_user_id')

user = User.objects.get(pk=uid)

print(user.username, user.get_full_name(), user.email)


Friday, June 10, 2016

Reserved IP addresses in a network subnet

The network/subnet CIDR is '10.64.200.32/28', and you can allocate 12 IPs from this subnet.

The following IPs were allocated from this subnet:
10.64.200.36
10.64.200.35
10.64.200.37
10.64.200.38
10.64.200.39
10.64.200.40
10.64.200.41
10.64.200.42
10.64.200.43
10.64.200.44
10.64.200.45
10.64.200.46

The following addresses are reserved in this subnet:
10.64.200.32 -> network address
10.64.200.33 -> gateway
10.64.200.34 -> service address
10.64.200.47 -> broadcast
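The arithmetic above can be checked with Python's stdlib ipaddress module. A sketch (the gateway and service addresses are reserved by this particular deployment, not by IP itself):

```python
import ipaddress

net = ipaddress.ip_network("10.64.200.32/28")

# .hosts() already excludes the network (.32) and broadcast (.47) addresses.
usable = set(net.hosts())

# Extra addresses reserved by this deployment: gateway and service address.
usable -= {net.network_address + 1, net.network_address + 2}

print(len(usable))               # 12 allocatable addresses
print(min(usable), max(usable))  # 10.64.200.35 10.64.200.46
```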

Sunday, June 5, 2016

How to upgrade Ubuntu 14.04 LTS to 16.04 LTS Desktop

a)
$ sudo apt-get update

b)
Upgrade all installed packages to their new versions
$ sudo apt-get upgrade

c)
Upgrade to the latest version of the distribution
$ sudo apt-get dist-upgrade

d)
Check version
$ lsb_release -a

e)
Upgrade to latest release
$ sudo update-manager -d

f)
Check version
$ lsb_release -a

OR

a)
Open "Software Updater" and upgrade all packages to new version

b)
Upgrade to latest release
$ sudo update-manager -d


How to upgrade from Ubuntu 14.04 LTS to 16.04 LTS Server

a)
$ sudo apt-get update

b)
$ sudo apt-get upgrade

c)
$ sudo apt-get dist-upgrade

d)
Reboot
$ sudo init 6



e)
$ sudo apt-get install update-manager-core

f)
Set Prompt=lts in the release-upgrades config file:
$ sudo vi /etc/update-manager/release-upgrades

g)
$ sudo do-release-upgrade -d

h)
Reboot
$ sudo init 6



Friday, June 3, 2016

How to enable request logging in HAProxy

1)
Enable debug log in haproxy
$sudo vim /etc/haproxy/haproxy.cfg
global
  log  127.0.0.1 local0 debug


* Note the facility name "local0".
* Note the log level "debug".

2)
Open UDP port for syslog

$sudo vim /etc/rsyslog.conf
# provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514


3)
Configure the location of haproxy log
$sudo vim /etc/rsyslog.d/haproxy.conf

# Send HAProxy messages to a dedicated logfile
#if $programname startswith 'haproxy' then /var/log/haproxy.log
#&~

# Note: comment out the "if then" and "&~" lines above and add the line below.
local0.* -/var/log/haproxy.log

* "local0" is the facility name set in /etc/haproxy/haproxy.cfg.

4)
Restart rsyslog

$ sudo service rsyslog restart

5)
Restart haproxy

$sudo service haproxy restart

6)
Check log

$sudo tail -f /var/log/haproxy.log

Jun  3 08:07:56 localhost haproxy[31576]: 192.168.100.158:43738 [03/Jun/2016:08:07:56.830] api api/192.168.100.188 0/0/118 1184 -- 7/1/1/0/0 0/0
Jun  3 08:07:56 localhost haproxy[31576]: 192.168.100.188:52231 [03/Jun/2016:08:07:56.914] api api/192.168.100.158 0/0/61 1513 -- 5/0/0/0/0 0/0
Jun  3 08:07:57 localhost haproxy[31576]: 192.168.100.158:43741 [03/Jun/2016:08:07:57.170] api api/192.168.100.171 0/0/90 634 -- 9/0/0/0/0 0/0
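The log lines above can be split into fields programmatically. A rough sketch with a regex matching the format shown (the group names are my own labels, not official HAProxy terms):

```python
import re

# Matches the "client [date] frontend backend/server ..." part of the lines above.
PATTERN = re.compile(
    r"haproxy\[(?P<pid>\d+)\]: "
    r"(?P<client_ip>[\d.]+):(?P<client_port>\d+) "
    r"\[(?P<accept_date>[^\]]+)\] "
    r"(?P<frontend>\S+) (?P<backend>[^/\s]+)/(?P<server>\S+)")

line = ("Jun  3 08:07:56 localhost haproxy[31576]: 192.168.100.158:43738 "
        "[03/Jun/2016:08:07:56.830] api api/192.168.100.188 0/0/118 1184 -- "
        "7/1/1/0/0 0/0")

m = PATTERN.search(line)
print(m.group("client_ip"), m.group("server"))  # 192.168.100.158 192.168.100.188
```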

Thursday, June 2, 2016

How to reduce TCP TIME_WAIT timeout in Linux

1)
Check current value of TCP TIME_WAIT


$sudo sysctl -a | grep conntrack
$sudo sysctl -a | grep conntrack | grep time_wait

net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120


2)
Update the value of TCP TIME_WAIT


$sudo vim /etc/sysctl.conf
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 60

3)
Apply the changes

$sudo sysctl -p

4)
Check again

$sudo sysctl -a | grep conntrack
$sudo sysctl -a | grep conntrack | grep time_wait

net.netfilter.nf_conntrack_tcp_timeout_time_wait = 60
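Instead of grepping sysctl -a, a single value can be read straight from /proc. A sketch (returns None when the sysctl does not exist, e.g. if the conntrack module is not loaded):

```python
from pathlib import Path

def read_sysctl(name):
    # sysctl names map to /proc/sys paths: dots become slashes.
    path = Path("/proc/sys") / name.replace(".", "/")
    try:
        return int(path.read_text().strip())
    except (OSError, ValueError):
        return None

print(read_sysctl("net.netfilter.nf_conntrack_tcp_timeout_time_wait"))
```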