Hi James,
I want to say thanks for the responses I have received and the information
they included to help me understand what is going on. I have some
feedback that will show my lack of understanding, but it might help better
define the problem :).
I was unable to get the tunnel to forward large packets when using the
suggested parameters (--tun-mtu 1500 --mtu-disc yes --mtu-dynamic 500 500).
If I removed the --mtu-disc yes, I was at least seeing some packets traverse
the tunnel, but not enough to say it was working :(. I also tried lowering
--tun-mtu 1500 to 1300, and raising --mtu-dynamic 500 500 to 1500, with no
luck. I do believe I had properly compiled with --enable-mtu-dynamic, but if
there is a test to prove that, let me know.
Now, with the tunnel set up as it was previously, without the extra
parameters, I am seeing this in the tunnel:
13:58:53.165345 198.166.79.70.35480 > 238.230.120.206.28376: udp 849 (DF)
13:58:53.187482 198.166.79.70.35480 > 238.230.120.206.28378: udp 1045 (DF)
This shows the pieces of the full packets traversing the tunnel with the DF
flag set. No fragmentation bits are set, as there are when you actually
fragment the packets at the source, i.e.:
13:20:29.339380 198.166.79.70.35480 > 238.230.120.206.28378: udp 1022 (DF)
13:20:29.350812 198.166.79.70 > 238.230.120.206: (frag 38028:***@1080)
13:20:29.350886 198.166.79.70.35480 > 238.230.120.206.28376: udp 1472 (frag 38028:***@0+)
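If I did the arithmetic right, those fragment offsets line up with the kernel
splitting a 1472-byte UDP payload to fit an 1100-byte MTU. A rough sketch of
that arithmetic (assuming a 20-byte IP header with no options):

```python
# Sketch: how the kernel fragments a UDP datagram at a given MTU.
# Assumes a 20-byte IP header with no options; fragment offsets
# must be multiples of 8 bytes.

IP_HDR = 20
UDP_HDR = 8

def fragment_offsets(udp_payload, mtu):
    """Return (offset, length) pairs for each IP fragment."""
    total = UDP_HDR + udp_payload          # full UDP datagram size
    per_frag = (mtu - IP_HDR) // 8 * 8     # data per fragment, 8-byte aligned
    frags = []
    off = 0
    while off < total:
        length = min(per_frag, total - off)
        frags.append((off, length))
        off += length
    return frags

# A 1472-byte UDP payload at MTU 1100 splits into a fragment at
# offset 0 carrying 1080 bytes and a second at offset 1080, which
# matches the "frag ...@0+" and "frag ...@1080" lines above.
print(fragment_offsets(1472, 1100))  # → [(0, 1080), (1080, 400)]
```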
Now, I expect MTU discovery is useless in a multicast situation,
as multicast is all one-way and the sender has no way of knowing exactly
what MTUs it might pass through end to end? An example being my case, where
the tunnel is there, but the originating box has no idea that it exists or
what it is doing?
Also, if the kernel is fragmenting the packets to fit them through the tunnel,
shouldn't they look like fragmented packets when they go out the physical
interface? The example above is typical of OpenVPN, GRE, and the mrouted
tunneling as far as what is sniffed off the tunnel.
Also, Dick St.Peters <***@NetHeaven.com> pointed out that if the
tunnel is just breaking apart bigger packets into smaller ones to fit
through the tunnel (not officially fragmenting them), it should reassemble
them on the other side (I hope I paraphrased Dick OK). These packets,
though, appear back on the Ethernet in their "shredded" size, not
reassembled. Is this correct?
Hope I have the right info for these questions,
JES
Post by James Yonan
James,
OpenVPN normally leaves all fragmenting and routing issues up to the kernel.
However, OpenVPN 1.4.1 has a new experimental mode that does fragmenting
itself, allowing the MTU of the TUN/TAP device to be much larger than the MTU
of the UDP connection that carries the tunnel data.
To enable this feature, you must build OpenVPN with ./configure
--enable-mtu-dynamic
Then you can use the --mtu-disc (Linux only) or --mtu-dynamic options to
explicitly control path MTU discovery and fragmenting options.
For example,
openvpn --tun-mtu 1500 --mtu-disc yes --mtu-dynamic 500 500 [other options]
would create a tunnel that looks like it has an MTU of 1500 to the OS, but
OpenVPN would actually break the packets up so that the encrypted UDP
datagrams sent between the OpenVPN peers would never be larger than 500 bytes
(not including the IP header). The "--mtu-disc yes" option tells the OS to
set the "Don't Fragment" bit on the UDP datagrams. The downside of
--mtu-dynamic is that it will always be measurably less efficient than not
fragmenting in the first place.
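The reason a 1500-byte tun MTU overflows a 1500-byte physical link is just the
encapsulation overhead. A back-of-the-envelope sketch (the per-packet overhead
figure here is an assumption; the real cost depends on the cipher and HMAC
options in use):

```python
# Sketch: why a tun MTU of 1500 forces fragmentation on a 1500-byte link.
# OVERHEAD is an assumed per-packet encapsulation cost (HMAC, IV,
# sequence number, padding); the actual figure varies with OpenVPN's
# crypto options.

IP_HDR = 20     # outer IP header
UDP_HDR = 8     # outer UDP header
OVERHEAD = 41   # assumed OpenVPN encapsulation overhead, bytes

def encapsulated_size(tun_packet):
    """On-the-wire IP datagram size for one tunneled packet."""
    return IP_HDR + UDP_HDR + OVERHEAD + tun_packet

# A full 1500-byte tun packet exceeds the physical MTU once wrapped,
# so the kernel has to fragment the outer UDP datagram.
print(encapsulated_size(1500))
```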
The --mtu-dynamic option is still experimental at this point and has two
planned features which have not been implemented yet: (a) give --mtu-dynamic a
lower and upper bound on UDP MTU size and have OpenVPN automatically choose
the largest size (using a cryptographically secure handshake) which will not
fragment, without depending on the OS's implementation of Path MTU discovery,
and (b) given our empirically and securely determined path MTU, generate our
own "Fragmentation needed but DF set" ICMP messages to bounce back over the
TUN device, effectively constraining upstream senders to an MTU which will not
cause fragmentation.
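Feature (a) amounts to searching for the largest probe size that survives the
path. A toy sketch of the idea (the probe function below is simulated, not
OpenVPN's actual handshake):

```python
# Toy sketch of empirical path-MTU discovery: binary-search for the
# largest datagram size the peer acknowledges. probe_ok() stands in for
# a real acknowledged probe over the tunnel; here we simulate a path
# that silently drops anything over 1400 bytes.

def find_path_mtu(probe_ok, lo=500, hi=1500):
    """Largest size in [lo, hi] for which probe_ok(size) is True."""
    best = lo
    while lo <= hi:
        mid = (lo + hi) // 2
        if probe_ok(mid):
            best = mid       # this size got through; try larger
            lo = mid + 1
        else:
            hi = mid - 1     # too big; try smaller
    return best

simulated_path = lambda size: size <= 1400   # pretend bottleneck MTU
print(find_path_mtu(simulated_path))          # → 1400
```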
Anyway, I mention this because it might be an interesting experiment to
isolate where the problem is occurring, i.e. if --mtu-dynamic fixes the
problem, then it may point to some kind of issue with multicast fragmentation in
the kernel.
I also checked the Linux kernel source; if you look at icmp_send in icmp.c:

    /*
     *  No replies to physical multicast/broadcast
     */
    if (skb_in->pkt_type != PACKET_HOST)
        return;

    /*
     *  Now check at the protocol level
     */
    if (rt->rt_flags & (RTCF_BROADCAST|RTCF_MULTICAST))
        return;
It seems that this might create complications for Path MTU discovery on
multicast streams, because no "fragmentation needed but DF set" ICMP message
could be returned in response to a multicast stream that demands fragmentation.
James
Post by James MacLean
Hi Folks,
I am seeing a problem which occurs with OpenVPN, GRE, and mrouted's
built-in tunnels.
If a large multicast packet gets sent to the tunnel, it gets fragmented,
or at least broken up :). But when it comes out the other side, it appears
to never get reconstructed into its original format.
For some reason, this limits tunneled connections to a max bandwidth
of around 270 Kbps. It's almost like the kernel is getting slowed down
by them?
If I set the MTU at the originating machine down to something smaller,
like 1100, Linux fragments the packets at the source, they pass through
the tunnel unsplit, and appear to work fine end to end.
This happens not only with OpenVPN but also with GRE tunnels and mrouted's
built-in tunnels. GRE and mrouted both use ipip.o in the Linux kernel, if
that matters.
VIC and Rat appear not to be affected by this because their packets are
mostly smaller, around 512 bytes I think.
To see it in action, get mp4live running (part of mpeg4ip.sourceforge.net)
in multicast through a tunnel. I tried both mrouted and pimd. The result
will be that the fastest throughput you'll get is around 270 Kbps :).
Maybe this is just particular to Linux? Or is it expected?
OpenVPN 1.4.1 and RedHat 9.0.
JES
--
Department of Education
Nova Scotia, Canada
B3M 4B2
--
James B. MacLean ***@ednet.ns.ca
Department of Education
Nova Scotia, Canada
B3M 4B2