Managing your users is good. But creating user accounts upfront is tedious, boring work. At one point we got lost trying to hook up each system manually to our WiFi. So we decided to outsource this to our users: they have to register their devices themselves.
Luckily pfSense proves to be extremely flexible, so with a custom portal page and some additional scripts we are able to collect the important information we need. We ask the user to provide
- an email address (so we can get in touch with the user) and
- acceptance of our acceptable use policy.
From there the system automatically detects
- MAC address (for device identification)
- Initial IP during registration (the subnet from which the user connects tells us roughly in which geographical area the device is located)
- Date of registration
Every time a new device connects to the network, the system redirects its very first HTTP request to our custom portal page. After entering the required fields and hitting ‘Register’, the system automatically detects the MAC address and system hostname and creates a new user in our FreeRADIUS installation. If successful, the user is granted access to the network. When the system connects again, RADIUS MAC authentication is used to check whether the user is already registered. If yes, access is permitted. If not, the user is redirected to the portal page again.
(Note: Using solely the MAC address makes the system vulnerable to spoofing attacks. However our users typically don’t have this knowledge. At least not yet.)
During this self-signup, the user is only granted access with restricted traffic limits (we make use of pfSense’s WISPr-Bandwidth-Max-Down and WISPr-Bandwidth-Max-Up capabilities with low initial values). From there the admins can promote the device to higher traffic caps if it is found eligible.
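In FreeRADIUS terms, the initial caps are just two WISPr reply attributes on the user entry. A sketch of what such an entry in the `users` file could look like (the attribute values are in bits per second; the user name, password and numbers here are made up):

```
selfreg-user Cleartext-Password := "s3cret"
        WISPr-Bandwidth-Max-Down = 256000,
        WISPr-Bandwidth-Max-Up = 128000
```

Promoting a user then simply means raising these two values.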
The whole configuration/deployment process is a little bit more complex. If someone wants to dive deep into it, make sure to check out our project page.
Using pfSense with the built-in FreeRADIUS can give you quite a lot of information; it is just not always visible through the Web UI.
For instance, if RADIUS logging is turned on you can keep track of all Captive Portal sessions by reading the log files. This is particularly useful when users can create their own accounts on the fly through a custom portal page based on their MAC addresses. But then you might want to clean out old and unused accounts once in a while.
In order to spot accounts that have been inactive for some time, you need to know who last connected and when. With shell access, simply copy and invoke this script. It compiles a CSV list of all users ever registered, with their MAC addresses and the last time they connected through the Captive Portal. Import the CSV into Excel, then filter and sort by the last-connected column to see which accounts are ready for removal.
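For reference, the core of such a script is little more than a sort and an awk over the accounting data. Here is a minimal sketch, assuming you have already flattened the RADIUS accounting log into lines of the form `user mac YYYY-MM-DD` (the log path and line format are assumptions; adapt them to your setup):

```shell
#!/bin/sh
# Build a CSV of user, MAC address and last connection date.
# Input: one "user mac date" line per Captive Portal session.
LOG=${1:-/var/log/portal_sessions.log}

echo "user,mac,last_connected"
# Sort by user, then by date descending, and keep the newest line per user.
sort -k1,1 -k3,3r "$LOG" | awk '!seen[$1]++ { printf "%s,%s,%s\n", $1, $2, $3 }'
```

The `!seen[$1]++` trick prints only the first line awk sees for each user, which after the sort is the most recent one.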
The result looks like this:
(Note that part of the above dump contains some custom information, as it is tied to the way we use pfSense with a self-register capability for new users. But it should be straightforward to customize this.)
It would be simple to do this automatically (e.g. delete every account that has not connected in the last 3 months), but as I have some VIP users that I don’t want to clear, I just do this manually once in a while. It could also be run as a cronjob every now and then, and I guess you could automatically publish the result through a web page or mail it to someone. Let me know if you did this, then I can steal it from you ;)
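If you do want to schedule it, a crontab entry along these lines would be enough (the script path and mail address are placeholders):

```
# Every Monday at 06:00: compile the CSV and mail it to the admin
0 6 * * 1 /root/scripts/portal_last_seen.sh | mail -s "Captive Portal accounts" admin@example.org
```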
It seems troublesome to send email, especially through Gmail accounts, from *nix systems. Using pfSense as our Captive Portal box running on top of FreeBSD is no exception. So here is what we did to get pfSense sending us emails from shell scripts.
I’ve tried a couple of things (including the BSD packages mailx and msmtp), but they all failed in one way or another. I came pretty far, but in the end figured out that TLS/SSL support is not built in, and compiling the packages on the pfSense box seemed inadvisable. After all, it is a firewall. Eventually I stuck with a Perl module.
To install it, simply invoke this from the shell:
pkg_add -r p5-Net-SMTP-TLS
In case you have a slightly outdated pfSense installation like I do and this command fails, you might need to tune the package repository a little bit.
setenv PACKAGESITE ftp://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/i386/8.1-RELEASE/packages/Latest/
Afterwards, use this script as an example of how to send mail through the Perl module.
Don’t forget to put your Gmail password in a file called send_gmail_config.txt (just the password, nothing else) and protect it well.
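“Protect it well” translates to making the file readable by its owner only. A minimal example (the password shown is obviously a placeholder):

```shell
# Store only the password in the file, then restrict it to the owner
printf '%s\n' 'my-gmail-password' > send_gmail_config.txt
chmod 600 send_gmail_config.txt   # rw for owner, nothing for group/others
```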
As I always forget them, I just put them down right now right here (they were sitting for too long in my Drafts folder). And I might update them as I go.
In my current role as the ‘IT guy’ for Partners In Health I’m also managing the whole IT, including our networks in Malawi (note: it is only a rumor that ‘IT guy’ translates to everything that has a power plug…). We not only provide Internet access to our employees, but also to the government, including the local Ministry of Health. As our project has grown a lot in the last few years, so has the number of computers connected to the network.
Currently we have 20+ access points connected, and with this roughly ~100 devices join the network every day (designed partially on top of this). And all of them squeeze through our tiny satellite link. We came a long way with network management tools and traffic shaping to make better use of the scarce bandwidth, but in the end we also depend on the fairness of the users: if someone (or his/her system) misbehaves, it impacts everyone else. So it is crucial to know who is using the network and how. Welcome to the world of pfSense.
Throughout the upcoming months I will share some of the important lessons learned and findings here. This will include topics like
With all this I guess we may run one of the biggest (if not _the_ biggest) freely available public hotspots in Malawi. I like my work in low-resource settings…
I’m by no means an expert in WiFi network planning and installation, but over the past year I have collected some knowledge and best guesses about how things work in terms of performance. Here is an open call for everyone to correct my views on the not-so-obvious elements that impact the speed of your wireless network.
Ever had to track down a client device in an area covered by many (unmanaged) WiFi access points?
If tools like Kismet/KisMAC are not working for you (e.g. because of unsupported hardware, crashes, or the bad WiFi antenna on your laptop), just build the tracker yourself.
All you need is a flashable consumer-level WiFi access point (like a Linksys WRT54) flashed with DD-WRT. DD-WRT can put the router in monitor mode, and together with the add-on Wi-viz you get an overview of all wireless activity.
And in case you want it mobile, simply wire up a bunch of batteries, e.g. 8 AA batteries with 1.5 V each in series, and connect them to the router. Finally connect your laptop with an Ethernet cable, stuff everything into a little bag and walk around. If you know the MAC address of the device you are looking for (laptop, handheld, …), just watch where the signal is strongest and walk in that direction. And voilà, you are able to geographically localize every WiFi device, regardless of its connection to a specific access point. Welcome, WiFi Catcher. Welcome, Jack Bauer.
Sometimes I prefer to have a private (read secure and non-observable) web connection – being a developer and admin makes you a bit more paranoid…
So how can you establish a connection that besides from being non-observable may also bypass potential content filters or firewall rules? Of course with a simple SSH tunnel:
ssh -D 8080 -f -C -q -N user@server
Now simply configure your browser to use the SOCKS proxy running on your localhost at port 8080 and off you go.
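Command line tools can use the tunnel too. To verify that traffic really goes through it, fetch something via the proxy (assuming curl is available; --socks5-hostname also resolves DNS on the far end, so lookups don’t leak locally):

```shell
# Fetch a page through the SOCKS proxy set up by ssh -D 8080
curl --socks5-hostname localhost:8080 https://example.org/
```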
The drawback is that you need a Unix server outside to connect to. But who doesn’t have such a system somewhere? And even if you don’t yet, maybe just head into the cloud.
And for those unlucky guys running Windows and PuTTY: even you could do that.
Update: It seems like SSH can even be misused to tunnel Remote Desktop connections. This might do the trick (afterwards, point your RDP client at 127.0.0.1:3333):
sudo ssh -D 8180 -p 8999 <SSH user>@<public external IP> -L 127.0.0.1:3333:<internal IP of target RDP system>:3389
Sometimes a process should just run for a maximum amount of time. A nightly long network transfer, a backup, or a statistical report shouldn’t accidentally run into the next business hours. What is needed is a watchdog timer that kills a process once a certain amount of time has passed.
As usual in shell programming, multiple approaches are possible, and all of them have certain drawbacks. The simplest would be to run the long running process in the background and capture its PID, but then it is not that easy to capture its return value. Here is my shot, using an additional at job:
TIMEOUT=1 # in minutes
WATCHDOG_CMDFILE=/tmp/watchdog_cmd.$$ # temp file holding the watchdog script
MY_PID=$$ # PID of this shell; the long running task runs as its child
# Install at job as watchdog to remove long running process
echo "# watchdog script" > $WATCHDOG_CMDFILE
echo "kill -0 $MY_PID 2>/dev/null" >> $WATCHDOG_CMDFILE
echo "if [ \$? -eq 0 ]; then" >> $WATCHDOG_CMDFILE
echo "  ps -o pid= --ppid $MY_PID | xargs kill" >> $WATCHDOG_CMDFILE
echo "  echo 'long running process aborted because it ran too long'" >> $WATCHDOG_CMDFILE
echo "fi" >> $WATCHDOG_CMDFILE
echo "rm -f $WATCHDOG_CMDFILE" >> $WATCHDOG_CMDFILE
at -f $WATCHDOG_CMDFILE now + $TIMEOUT min
# Start my very sophisticated long running task
sleep 3600 # 1 hour
# do whatever you normally do after the long running process finishes
(Note that at sends out emails with the stdout/stderr of the job. If you use another notification method to signal an aborted job, ensure that nothing is printed to stdout or stderr.)
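As an aside: on systems with GNU coreutils, the timeout utility achieves the same in a single line (it is not part of the base system on my FreeBSD/pfSense boxes, so the at construction above still has its place; the script name below is a placeholder):

```shell
# Kill the task if it is still running after 60 minutes;
# timeout exits with status 124 when it had to kill the command.
timeout 60m ./my_long_running_task.sh
```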
OK. Not really “Programming stuff” anymore. But still important:
Don’t allow remote root logins at all, even with SSH
Why not? Because:
- The user name of root is known, therefore the account is vulnerable to brute-force attacks
- Working as root should be an explicit switch and not the default policy. Just like being aware of switching hats.
- Bad for auditing, if multiple users have root access.
But now you say: “I don’t care as I use SSH for logins”.
- Depending on the auth method, the password is still transferred over the wire.
- But using public key auth instead of passwords might be even worse: you still have to trust all(!) clients that the private key is stored safely. Read:
- “Good enough” passphrase.
- There is no way to tell from the public key (which is the only thing known by the server), if the private key has a passphrase at all.
- Trust the client system (that it is not compromised)
- Auto lock of the client system must be enabled after a few minutes of inactivity.
- Sensible use of background daemons like ssh-agent or Pageant (PuTTY for Windows) on client systems is necessary. What if the user starts the keyring app, enters his passphrase and never shuts down his system? Now imagine a laptop running out in the wild without any local password protection, holding an open private key in its memory!
What to do? Dunno. Maybe:
- Use “ordinary” user accounts
- SSH with either public key or password auth (depending on your decisions regarding the previous points)
- Enforce sudo (better) or su (less good) to gain temporary root privileges
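On the server side this boils down to a couple of lines in /etc/ssh/sshd_config (followed by a restart of sshd):

```
# No direct root logins; use an ordinary account plus sudo/su instead
PermitRootLogin no
# Optional: if you settled on public keys, turn off password auth entirely
PasswordAuthentication no
```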