KVM Bridge on Centos 6.4 is not working
Posted: 2013/07/19 08:11:39
Hi all,
I'm completely puzzled by KVM networking on a CentOS 6.4 installation. The CentOS host runs as a VMware VM and has the option required for nested hypervisors set. Outside of my KVM host I run a DHCP server which should also serve the VMs inside KVM, so I set up a bridge on eth0 as shown below. Unfortunately, my Linux KVM guests never receive a DHCP offer. For some reason my Windows KVM guests do receive a DHCP offer, but they can only ping the KVM hypervisor, not any other hosts on the same network. If I change networking to NAT, I can reach all my clients in the KVM hypervisor network, so networking in general is working.
Does anyone have an idea where I could find debug information? I have spent so much time on this issue; I'm completely lost.
Thanks for any help.
Zeno
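For comparison, this is roughly what I understand a standard CentOS 6 bridge setup should look like (a sketch only: the conventional bridge name is br0, while my setup ended up with vnet0 as the bridge; all values below are placeholders, not my exact files):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0
# Physical NIC is enslaved to the bridge and carries no IP itself.
DEVICE=eth0
HWADDR=00:50:56:86:48:91
ONBOOT=yes
BRIDGE=br0

# /etc/sysconfig/network-scripts/ifcfg-br0
# The bridge holds the IP configuration (DHCP in my case).
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=dhcp
DELAY=0
```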
[root@KVM network-scripts]# brctl show
bridge name bridge id STP enabled interfaces
virbr0 8000.52540072fc5d yes virbr0-nic
vnet0 8000.005056864891 no eth0
vnet1
[root@KVM network-scripts]# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:50:56:86:48:91
inet6 addr: fe80::250:56ff:fe86:4891/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1525795 errors:0 dropped:0 overruns:0 frame:0
TX packets:1666130 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:202578411 (193.1 MiB) TX bytes:564513663 (538.3 MiB)
[root@KVM network-scripts]# ifconfig vnet0
vnet0 Link encap:Ethernet HWaddr 00:50:56:86:48:91
inet addr:172.28.248.246 Bcast:172.28.251.255 Mask:255.255.252.0
inet6 addr: fe80::250:56ff:fe86:4891/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1524795 errors:0 dropped:0 overruns:0 frame:0
TX packets:1658660 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:181182987 (172.7 MiB) TX bytes:564047811 (537.9 MiB)
[root@KVM ~]# ip addr show vnet0
4: vnet0: mtu 1500 qdisc noqueue state UNKNOWN
link/ether 00:50:56:86:b0:8b brd ff:ff:ff:ff:ff:ff
inet 16.57.147.170/23 brd 16.57.147.255 scope global vnet0
inet6 fe80::250:56ff:fe86:b08b/64 scope link
valid_lft forever preferred_lft forever
[root@KVM ~]# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.122.0 * 255.255.255.0 U 0 0 0 virbr0
172.28.248.0 * 255.255.252.0 U 0 0 0 vnet0
link-local * 255.255.0.0 U 1003 0 0 vnet0
default 172.28.248.1 0.0.0.0 UG 0 0 0 vnet0
[root@KVM network-scripts]# cat /etc/resolv.conf
; generated by /sbin/dhclient-script
search qads.xxx.xx
nameserver 172.28.248.12
[root@KVM network-scripts]# service iptables status
Table: nat
Chain PREROUTING (policy ACCEPT)
num target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
num target prot opt source destination
1 MASQUERADE tcp -- 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
2 MASQUERADE udp -- 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
3 MASQUERADE all -- 192.168.122.0/24 !192.168.122.0/24
Chain OUTPUT (policy ACCEPT)
num target prot opt source destination
Table: mangle
Chain PREROUTING (policy ACCEPT)
num target prot opt source destination
Chain INPUT (policy ACCEPT)
num target prot opt source destination
Chain FORWARD (policy ACCEPT)
num target prot opt source destination
Chain OUTPUT (policy ACCEPT)
num target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
num target prot opt source destination
1 CHECKSUM udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:68 CHECKSUM fill
Table: filter
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:53
2 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:53
3 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:67
4 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:67
Chain FORWARD (policy ACCEPT)
num target prot opt source destination
1 ACCEPT all -- 0.0.0.0/0 192.168.122.0/24 state RELATED,ESTABLISHED
2 ACCEPT all -- 192.168.122.0/24 0.0.0.0/0
3 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
4 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
5 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
Chain OUTPUT (policy ACCEPT)
num target prot opt source destination
[root@KVM network-scripts]# service ip6tables status
Table: filter
Chain INPUT (policy ACCEPT)
num target prot opt source destination
Chain FORWARD (policy ACCEPT)
num target prot opt source destination
Chain OUTPUT (policy ACCEPT)
num target prot opt source destination
[root@KVM bridge]# cd /proc/sys/net/bridge; cat b*
0
0
0
0
0
[root@KVM ~]# cat /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.
# Controls IP packet forwarding
net.ipv4.ip_forward = 0
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0
# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
# Controls the default maxmimum size of a mesage queue
kernel.msgmnb = 65536
# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296
[root@KVM ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=KVM
[root@KVM ~]# rpm -qa | grep bridge-utils
bridge-utils-1.2-10.el6.x86_64
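In case it helps, these are the commands I intend to use to trace where the DHCP packets get lost (interface names match my setup above; tcpdump needs root):

```shell
# Watch DHCP traffic on the bridge and on the physical NIC to see
# where the DISCOVER/OFFER exchange stops.
tcpdump -n -e -i vnet0 port 67 or port 68
tcpdump -n -e -i eth0 port 67 or port 68

# Show which MAC addresses the bridge has learned on each port.
brctl showmacs vnet0

# Confirm netfilter really is bypassed for bridged frames.
sysctl net.bridge.bridge-nf-call-iptables
```

Since the host itself is a VMware VM, I also wonder whether the vSwitch security policy needs promiscuous mode (and forged transmits) allowed, so that frames carrying the guests' MAC addresses are passed through to and from eth0.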