[RP-PPPoE] PPPoE-server with many simultaneous connections
cbalint at cablesat.ro
Wed Jul 22 07:19:08 EDT 2009
> Hi all
>
> I would like to know if anyone runs a PPPoE server with more than
> 3,000 simultaneous connections, and what hardware, operating system
> and kernel configurations are used.
Yes.
~2,000 was our maximum so far, but I am confident
that 3,000 can be achieved if the rate limit per
PPP tunnel is under 4 Mbit/s and the concentrator
is prepared carefully, e.g. as we do:
We use HTB filters in if-up.local at a 4 Mbit/s rate
per PPP session and kernel-mode PPPoE, with the
following setup:
- 2 GB RAM, dual-core 2.0 GHz Intel CPU, USB stick instead
of an HDD, 2x Gbit Broadcom NICs (cheap second-hand
IBM Model 325 blades) => ~300 EUR per concentrator.
- It is very important to balance IRQs across the NICs;
use exactly two NICs, and the CPU should be dual-core
or at least Hyper-Threaded.
- The client-facing interface is a VLAN trunk, so
no single VLAN holds more than ~200 MACs.
- rp-pppoe is started on each VLAN interface (see the
pppoe-server sketch after this list).
- irqbalance should distribute the IRQs per NIC
smoothly; with 'top' you can verify that the kernel's
IRQ handling per NIC never rises above 3% of a CPU
at peak rate (affinity sketch below).
- pppd is compiled with -Os -march=i686 -mcpu=i686;
we use pppd with the radius plugin and pass the HTB
parameters to the if-up.local script through the
radattr plugin (if-up.local sketch below).
- Fedora 9 minimal, with no conntrack*.ko modules
loaded into memory (they can kill the machine),
so _NO_ NAT; the stock kernel is enough. Fedora
releases after 9 ship a stock kernel with conntrack
built in, which is bad for this purpose, and we avoid
recompiling anything (blacklist sketch below).
- rsync, rsyslog, cron, logrotate and anything else that
generates IRQ/IO load on the hard disk or USB stick should
be stopped; such load can easily kill the machine at high
rates. With those off, everything is smooth at ~2,000
connections.
- Peak rate around 2,000 clients is 170 Mbit/s and the
blade stays perfectly stable; however, we recently started
limiting each blade to 1,000 sessions with -N 1000 on
rp-pppoe, so the next PADI is served by another blade
where rp-pppoe still has free slots. This way we get
redundancy against blade failure, and at ~200 EUR a
blade it fits well within any budget.
- Fedora does not ship pppd with kernel-mode PPPoE
support, so pppd and rp-pppoe are custom builds as
described above.
- The system on the stick is ~500 MB, with only the
strictly necessary tools. The IBM BIOS boots from the
stick without trouble and the rootfs is mounted r/w;
the stick has never crashed, and there is no need for
an HDD (~30 W), CD-ROM or any other storage.
- Swap is disabled; memory is not a problem anyway.
- We export all pppd /32 IPs over OSPF, so we do not
care which blade a given PPP IP lands on; RADIUS
allocates IPs randomly from random pools on any of our
nodes (OSPF sketch below).
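To make some of the pieces above concrete, a few sketches.
First the per-VLAN startup; the eth1 trunk and the VLAN range
are invented for the example, while -I, -N and the kernel-mode
-k switch are real pppoe-server options in builds with
kernel-mode support:

    # one pppoe-server per VLAN sub-interface on the trunk NIC;
    # with one server per VLAN the -N cap applies per instance
    for VLAN in $(seq 100 109); do
        vconfig add eth1 $VLAN            # create eth1.$VLAN
        ifconfig eth1.$VLAN up
        pppoe-server -I eth1.$VLAN -N 1000 -k
    done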
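For the IRQ side, besides irqbalance you can inspect and pin
the NIC interrupts by hand through the standard /proc
interface (the IRQ numbers below are examples; read yours
from /proc/interrupts):

    # how the NIC interrupts spread over the two cores
    grep -E 'eth0|eth1' /proc/interrupts

    # pin one NIC per core (hex CPU bitmask); 16/17 are example IRQs
    echo 1 > /proc/irq/16/smp_affinity   # eth0 -> CPU0
    echo 2 > /proc/irq/17/smp_affinity   # eth1 -> CPU1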
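The shaping itself is only a couple of tc commands in
if-up.local. A rough sketch, assuming the radattr plugin has
dumped the RADIUS reply attributes into /var/run/radattr.pppN;
the "Acct-Rate" attribute name is made up for the example (use
whatever your RADIUS really sends), with the 4 Mbit/s from
above as fallback:

    #!/bin/sh
    # called with the ppp interface name as first argument
    IFACE=$1

    # rate in kbit/s from the radattr dump, default 4096 (4 Mbit)
    RATE=$(awk '/Acct-Rate/ {print $2}' /var/run/radattr.$IFACE)
    [ -z "$RATE" ] && RATE=4096

    tc qdisc add dev $IFACE root handle 1: htb default 10
    tc class add dev $IFACE parent 1: classid 1:10 \
        htb rate ${RATE}kbit ceil ${RATE}kbit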
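Keeping conntrack out of memory is just a check plus a
blacklist (Fedora 9 era paths; adjust for your modprobe
layout):

    # nothing conntrack-related should be resident
    lsmod | grep -i conntrack

    # keep both the old and the new module names out
    echo "blacklist ip_conntrack" >> /etc/modprobe.d/blacklist
    echo "blacklist nf_conntrack" >> /etc/modprobe.d/blacklist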
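And for the /32 export, a Quagga ospfd fragment shows the
idea: pppd installs each peer address as a connected host
route on pppN, so redistributing connected routes is enough
(the network/area line is an example, not our real
addressing):

    ! /etc/quagga/ospfd.conf (sketch)
    router ospf
     ! picks up the pppN /32 host routes pppd installs
     redistribute connected
     ! example backbone network/area
     network 10.0.0.0/8 area 0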
I hope this encourages you to do the same; I am more satisfied
with it than with any Cisco/Alcatel/Nortel DSLAMs, and under
these conditions the number of blades does not really matter. ;)
>
> Regards