Moving to Ubiquiti Unifi

It’s no secret that I enjoy working with new technology and figuring out better ways to do things.  For the last couple of years I’ve been dissatisfied with how my internal network was configured; I was using a basic, off-the-shelf, all-in-one consumer-grade router/wireless access point, and while it normally worked okay, it didn’t always give me the insight or visibility into my network that I really wanted to have.  “If only there was a way to take a commercial-grade wireless networking system, and set it up in my apartment,” I complained to everyone who would listen. “Then I could configure everything the way I want, isolate devices on specific networks, and conquer the world!”

I knew I didn’t need a solution as expensive or in-depth as Cisco’s enterprise WiFi system, but wanted to graduate beyond the basic consumer networking solutions.  When I found the UniFi system in an Ars Technica review, I was hooked– but I was also still in college, and my meager budget was too small to support a more advanced networking system.  It wasn’t long, however, before I graduated, moved to a new apartment, and suddenly had some disposable income I could throw at my home network.

I started my network with the smallest and most basic component: a UAP-AC-Lite, the cheapest wireless access point in the UniFi line.  I plugged it into my switch, installed the controller software on my computer, set up my wireless networks, and… it worked!  It was easier than I expected, which was almost disappointing.  I mean, here I was, with a fancy access point, and it didn’t even require hours of tinkering to get it to work the way I wanted?  Where’s the fun in that?

I left the WAP in place for a couple of weeks, and then decided I needed more.  I went out and bought the UniFi Security Gateway, or USG, so I could fully replace my all-in-one with some more advanced tech.  The USG required some more hand-holding to get up and running, but soon that wasn’t enough, either.  I bought a Cloud Key, and then a PoE Switch, and before I knew it I was running UniFi for basically everything on my network.

“That’s all very well and good,” I hear you say. “It’s always fun to read about somebody else spending money when they technically don’t need to. But what does UniFi actually do for you? What problem does it solve?” That’s a good question. UniFi gives me a couple of things I wanted to have. First, it gives me a network that I can expand as my needs shift. If I’m not getting WiFi in an area, I can just plug in a WAP, adopt it into the system, and voila! I have signal. Secondly, everything’s managed in one place, the UniFi Dashboard. All my equipment, and anything I add to the system, can be managed through the dashboard in real-time– and I can do it from anywhere, since I connected my Cloud Key to my Ubiquiti account.

The UniFi Dashboard

This means I don’t need to worry about remembering passwords for each of my devices, which is a major plus for anyone, even if you use a password manager.  UniFi also gives me some basic deep packet inspection, which lets me keep an eye on what’s talking out to the rest of the internet from my network.

It’s not as detailed as I would like, it’s true.  I haven’t found a way to select a specific device and view all traffic from it, for example, but it’s mostly adequate for my current needs. If something pops up that might be a problem, it’s easy enough to explore and inspect to see if anything is truly amiss. As an example, the traffic stats show that remote access terminals have transferred nearly 1.25TB of data to somewhere off-network. If you don’t know what that might be, that’s a problem– a remote access terminal moving lots of data could be an indication of a compromised computer being used as part of a botnet, or could be something spying on you.

Looking at the specific DPI card for that category shows that the entire amount of data went through SSH, which again could be an indication that something on the network is infected and is phoning home.  UniFi lets us drill deeper, however, and I can see that almost all of the traffic is from one specific machine on my network, which is configured to perform incremental syncing to the cloud via rsync. But if this had actually been a compromised machine, the dashboard could have been my first indication that something was very wrong on my network.
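
For the curious, the kind of job that generates this sort of SSH traffic looks roughly like the sketch below– a hypothetical incremental rsync backup pushed over SSH. The host, paths, and port here are made up, and the flags are just the common choices for this kind of sync:

```python
# Hypothetical sketch: assembling an incremental rsync-over-SSH backup
# command like the one responsible for the traffic described above.
def build_rsync_cmd(src, dest, ssh_port=22):
    """-a preserves permissions/timestamps, -z compresses in transit,
    and --delete mirrors local deletions to the remote copy."""
    return [
        "rsync", "-az", "--delete",
        "-e", f"ssh -p {ssh_port}",   # tunnel the transfer over SSH
        src, dest,
    ]

cmd = build_rsync_cmd("/home/me/photos/", "backup@cloud.example.com:/backups/photos/")
# To actually run it: subprocess.run(cmd, check=True)
```

Because rsync only transfers files that have changed, repeated runs stay cheap– but the first full sync of a large archive is exactly how a terabyte-plus of SSH traffic piles up.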

UniFi also lets me set up and configure a guest wireless portal, so no more needing to give guests my WiFi password.  They can just connect to my open network (named Ankh-Morpork in honor of Sir Terry Pratchett), accept the terms and conditions which warn them that their connection may not be private and to not carry out illegal activities using my WiFi, enter the password I have posted in my apartment, and voila! They can access the web on whatever device they may choose.  If they start causing issues, adding bandwidth limits and filtering specific sites is easy, as is managing which devices are connected to the guest network.

Overall, I’m quite pleased with UniFi. I have more I’d like to do (like building out VLANs for my various servers), but for now the network is stable, speeds are faster than they were, and my WiFi coverage is great. I’ve been talking up UniFi with everyone that I know, and I’m slowly building out a network at my parents’ house which will let me troubleshoot remotely while increasing their speeds and security.  It costs a bit more than my previous solution, but I’m glad I made the switch.

Listing image by Thomas Jensen on Unsplash

It’s always DNS

I host a few websites for myself and family on DigitalOcean.  Up until recently, I’ve always just spun up a new droplet for each site, so they were all fully independent from each other; this was the easiest and most convenient way to get a new site up and running without jeopardizing uptime on other sites if I made a mistake in configuration, and it was drop-dead easy to map a domain to a static IP.  It had some security benefits, too– if one site was compromised, it wouldn’t affect the rest.

But it was also maintenance-intensive.  I needed to log in to multiple servers to run updates; adding plugins had to be redone over and over on each server; and obviously this was starting to get expensive.  So I decided to consolidate my multiple sites on one server, using a fancy feature of WordPress called… “Multisite”.  Imaginative name, I know.
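
For anyone following along at home, enabling Multisite starts with a couple of constants in wp-config.php.  This is a sketch of the standard setup– the domain and path values are placeholders for your own:

```php
/* Enables the Network Setup screen (Tools → Network Setup). */
define( 'WP_ALLOW_MULTISITE', true );

/* After running network setup, WordPress asks you to add constants
   like these — example.com is a placeholder for your own domain. */
define( 'MULTISITE', true );
define( 'SUBDOMAIN_INSTALL', false );
define( 'DOMAIN_CURRENT_SITE', 'example.com' );
define( 'PATH_CURRENT_SITE', '/' );
define( 'SITE_ID_CURRENT_SITE', 1 );
define( 'BLOG_ID_CURRENT_SITE', 1 );
```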

The initial configuration went well, with no real hiccups (other than my accidentally rm’ing most of Apache’s configuration files– but a quick droplet rebuild took care of that1).  The trouble started once I had moved over the sites I was consolidating and switched the domains to point at my new Multisite server.  I spent two hours trying to figure out why one of the domains refused to point at the new server, only to discover (drumroll, please)… it was DNS.  I use Pi-Hole on my home network to block malicious sites, but it also provides a DNS caching service which usually works great.  In this case, however, it was pointing me back at the old server over and over, until the TTL finally expired2.  A quick flush of the DNS cache, and I was able to see that the domain was correctly configured.  Fifteen minutes later, I had SSL up and my plugins configured.
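
The behavior that bit me is easy to see in miniature.  Here’s a toy caching resolver (a loose sketch, not Pi-Hole’s actual implementation): once a record is cached, re-querying doesn’t help– the cache keeps answering from memory until the TTL runs out, no matter what the upstream server now says.

```python
import time

class TtlCache:
    """Toy DNS cache: answers from memory until a record's TTL expires."""
    def __init__(self):
        self._store = {}  # name -> (ip, expires_at)

    def resolve(self, name, upstream, now=None):
        now = time.time() if now is None else now
        hit = self._store.get(name)
        if hit and now < hit[1]:
            return hit[0]            # cached answer, even if stale upstream
        ip, ttl = upstream(name)     # miss or expired: ask upstream
        self._store[name] = (ip, now + ttl)
        return ip

cache = TtlCache()
old = lambda name: ("203.0.113.10", 3600)  # record pointing at the old droplet
new = lambda name: ("203.0.113.99", 3600)  # upstream now has the new droplet

cache.resolve("example.com", old, now=0)             # primes the cache
stale = cache.resolve("example.com", new, now=60)    # still the old IP!
fresh = cache.resolve("example.com", new, now=3601)  # TTL expired: new IP
```

Flushing the cache just empties that store, which is why the flush fixed things immediately in my case.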

So what’s the lesson in all this?  Even when you think it’s not DNS… it’s DNS.

  1. Yes, I could have restored the configuration without too much difficulty, but I was early enough in the build that it was faster to just start over.
  2. I did set the TTL to a very low number when I started this process, but the new value didn’t take effect until the previously cached record, with its original TTL, expired.

A new reflection attack was unveiled today which can increase the size of a DDoS attack by 51,000-fold.  It uses memcached, an object caching system designed to speed up web applications, to amplify attacks against a target.  This represents a substantial increase from previous attacks, which have used network time servers to amplify attacks 58-fold and DNS servers to amplify attacks 50-fold.

Attacks seen this week have surpassed 500 Gbps, which is pretty amazing considering only a small percentage of publicly-available memcached servers are being used to launch those attacks.  It’ll be interesting to see if any larger attacks are launched in the coming weeks… and what their targets will be.
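
The “X-fold” figures above are just bandwidth multipliers: a reflection attack bounces small spoofed requests off public servers, which reply to the victim with much larger responses.  The arithmetic is simple (the numbers below are illustrative, not measurements from a specific attack):

```python
# Back-of-the-envelope reflection math: the victim receives the
# attacker's outbound bandwidth multiplied by the amplification factor.
def amplified_bandwidth(attacker_bps, factor):
    """Traffic arriving at the victim when spoofed requests are reflected."""
    return attacker_bps * factor

# With memcached's ~51,000x factor, a mere 10 Mbps of spoofed requests
# becomes roughly 510 Gbps at the victim, consistent with the 500 Gbps
# attacks observed this week.
victim_bps = amplified_bandwidth(10e6, 51_000)
```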

The article over at Ars Technica is pretty good, and is worth a read.


Another day, another vulnerability in a widely-used software package.  Today’s bug (dubbed Optionsbleed by Hanno Böck, the journalist who documented the vulnerability) can leak passwords and other sensitive information to attackers.  It’s not as big a threat as Heartbleed, a similar bug that let attackers snag servers’ private encryption keys– a Bad Thing, since those keys are how servers prove they are who they say they are; for an explanation of how this works, see my Asymmetric Encryption explanation from last year– but it should still be regarded as a significant threat.

Patches are being rolled out now; patch your systems if you haven’t already.


A serious vulnerability in the WordPress REST API was patched in WordPress v4.7.2 two weeks ago, but millions of sites haven’t yet updated.  This leaves them open to an exploit that can allow malicious actors to edit any post on a site.

Ars Technica has a very nice writeup on the effects of the exploit, which has resulted in the defacement of a staggering number of websites (including the websites of Glenn Beck, the Utah Office of Tourism, and even the official SUSE Linux site).  Sucuri and Wordfence also have very good articles about the effects of the vulnerability.

If you have a WordPress site, you should immediately check to make sure you’re on the latest version (v4.7.2).


I’ve noticed a growing trend in more advanced computer users lately: some of them have begun advocating against using antivirus software.  Instead, they suggest using browser extensions like uBlock Origin (which I use and recommend), combined with safe browsing practices, to remove the need for antivirus software altogether.  Ars Technica did a very nice write-up on this trend today, and it’s worth a look.

For what it’s worth, I still use Avast as an antivirus package.  But it hasn’t alerted me to any issues or found any viruses in at least a year, so perhaps it’s time to consider freeing up some memory on my computer.

New Host!

I’ve finally moved to a VPS on DigitalOcean from my previous (free) shared hosting.  I did this for a couple of reasons.  First, while my hosting was free for a year with my domain name, that year was almost up, and renewing for a second year would have cost $38.88/year.  That’s a decent price, but I looked at my options and decided that moving to DigitalOcean wouldn’t cost much more (around $30 more across the year, since I use the weekly backups option), would give me much more control over my server (now I get SSH access!), and would centralize all of my VPS instances in the same place (I’ve used DigitalOcean for several years to host various projects).

Of course, as with so many things, this migration wasn’t sparked by a simple glance at the calendar.  While I’ve intended to move my host for the last month or two, the timing was decided by my messing up a WordPress upgrade on the old site at the beginning of December.  I used the automatic updater, ignored the warnings about making sure everything was backed up first1, and told it to apply the new version.  When WordPress exited maintenance mode, I was locked out of the administration dashboard.  The public part of the website was still up and running, but the backend was locked off.  Since I was entering finals week at my university, I decided to just let it be until I had some time to come back and fix it.  Worst-case, I had backups I could restore from, and I’d been meaning to migrate my site anyway.

Of course, things didn’t work out that way.  When I finally had some time on Christmas Eve, I discovered that a complete backup hadn’t been made in months.

Yes, I committed the cardinal sin of not verifying the state of my backups.  Apparently I’d screwed something up in their configuration, and since I’d never tried to restore from them before, I didn’t notice until I needed them.  At this point, I decided that if the backups weren’t working, there was no point in trying to recover on a host I was going to abandon within a month, and I spun up a WordPress droplet on DigitalOcean to hold the rebuilt site.

I still had copies of all the content that was on the site, so I’d be able to restore everything without much trouble– just some copy/pasting and time.  But before I did all of that, I thought “what if I’m overlooking something really simple with the old site?”  I did a little searching, and apparently W3 Total Cache, which I used to create static pages for my site and decrease load times, can cause problems with WordPress upgrades.  I disabled it via FTP2, reloaded the site, and I was able to access the admin area again.  Turns out the simple steps you should take before completely rebuilding everything are actually worth it.
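
The rename trick works because WordPress silently deactivates any plugin whose folder it can no longer find.  Here’s the same move sketched in Python– the paths are stand-ins for wp-content/plugins, and over FTP you’d just rename the folder by hand:

```python
# Sketch of disabling a WordPress plugin when locked out of wp-admin:
# rename its directory so WordPress can no longer load it.
import os
import tempfile

plugins_dir = tempfile.mkdtemp()  # stand-in for wp-content/plugins
os.mkdir(os.path.join(plugins_dir, "w3-total-cache"))

def disable_plugin(plugins_dir, slug):
    """Rename a plugin folder so WordPress stops loading it."""
    src = os.path.join(plugins_dir, slug)
    dst = src + ".disabled"
    os.rename(src, dst)
    return dst

disabled = disable_plugin(plugins_dir, "w3-total-cache")
```

Renaming it back re-enables the plugin (though WordPress may leave it deactivated until you reactivate it in the dashboard).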

Since I had already spun up and started configuring my new site, I decided to press onwards.  My task was made considerably easier by my being able to access WP Clone on the original site, which let me move everything from my old site to the new one in just a few minutes.  I redirected the nameservers to DigitalOcean, and ran a few last checks before calling the bulk of my work done.

The next day, when I was tidying up some loose ends and preparing to get SSL set up, I realized that my email no longer worked– my email was hosted on the same server as my old website, which meant I needed to find a new solution.

While I have been meaning to set up my own email server sometime soon, I wasn’t confident in my ability to get it up and running quickly, and email is one of those vital services I depend on working 100% of the time.  In years past, I would have simply used Google Apps3 to host my email, but that is no longer the free option it once was.  Luckily, I found a solution thanks to Ian Macalinao at Simply Ian, which is to use Mailgun as a free email server.  Mailgun is designed to send out massive email blasts for major companies, but they also offer a free tier for people and companies that are sending out fewer than 10,000 emails per month.  I send out a fraction of that number, so this was perfect for me (and their mass email prices seem quite reasonable, so I might even use them for that if the need ever arises).  Ian handily provided a set of instructions for how to set up the proper routing, and, while some of the menu options have changed, I was able to get my new email up and running within a few minutes.
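
For reference, receiving mail through Mailgun boils down to a handful of DNS records on your domain.  This is a rough sketch of the general shape– example.com is a placeholder, and Mailgun’s dashboard gives you the exact values (including a domain-specific DKIM record omitted here):

```
; Receiving: deliver mail for example.com to Mailgun, which then
; forwards it according to the routes you define in their dashboard
example.com.   MX   10   mxa.mailgun.org.
example.com.   MX   10   mxb.mailgun.org.
; Sending: SPF record authorizing Mailgun to send on the domain's behalf
example.com.   TXT  "v=spf1 include:mailgun.org ~all"
```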

So I’d managed to get both the site and my email up and running, but SSL was still eluding me.  For those that don’t know, SSL stands for Secure Sockets Layer, and it’s what powers the little green padlock you see in your address bar when you visit your bank, or PayPal, or this website.  I wrote an explanation of how it works a while back, and I suggest checking that out if you want to learn more.
One of the benefits of hosting my website on a VPS is that I don’t need to use the major third-party SSL providers to get certificates verifying my server is who it says it is; I can use the free and open Let’s Encrypt certificate authority instead.  Unfortunately, I just couldn’t get the certificate to work correctly; the automated tool was unable to connect to my server and verify it, which meant that the auto-renewal process wouldn’t complete.  I could have generated an offline certificate and used that, but the certificates only last ninety days, and I wasn’t looking forward to going through the setup process every three months.4  I tried creating new virtual host files for Apache, my web server, but that just created more problems.  Eventually, I figured out that I had misconfigured something somewhere along the line.  Rather than try to figure out which of the dozens of edits I had made was the problem, I gave up and reverted to a snapshot I had made before starting down the rabbit hole.5  After rolling back to before my virtual host meddling, I was able to successfully run the Let’s Encrypt tool, generate my certificate, and secure my site.
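
Once the Let’s Encrypt tool runs cleanly, renewal is easy to automate so the ninety-day expiry never bites.  Something like this cron entry is the usual approach– the schedule is arbitrary, and newer certbot packages often install a systemd timer that does this for you:

```
# /etc/cron.d/certbot -- attempt renewal twice a day; certbot only renews
# certificates that are close to expiry, so frequent runs are harmless
17 3,15 * * *  root  certbot renew --quiet
```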

Lesson learned!


Photo credit Torkild Retvedt.

  1. I didn’t actually ignore this warning.  I had a backup plugin configured on the site; I figured I could probably roll back if I really needed to.
  2. If you’re in a similar situation, just renaming the plugin folder to something else– w3-total-cache to w3-total-cache123, for example– will disable it.
  3. Which is now G Suite, but that sounds silly.
  4. It’s a pretty straightforward and simple process, I just know that I would forget about it at some point, the certificate would expire, and the site would have issues.  If I can automate that issue away, I would much rather do that.
  5. Snapshots are essentially DigitalOcean’s version of creating disk images of your server.  I absolutely love snapshots; they’ve saved my bacon more than once, and I try to always take one before I embark on any major system changes.

Ars Technica did a nice job of creating an impartial write-up on why Hillary Clinton used an external email server, and how it was actually used.  It sounds to me like there’s an institutional history of using private email to conduct business, largely due to obstructive or incompetent IT services (in fairness to the State Department IT team, there are likely a number of complicated policies and legal requirements they’re trying to work around, which is difficult).  Still, that’s not an excuse to use a home server to manage official communication– if you must use your own email address, at least use something like Google Apps or Microsoft Exchange Online, where you have teams of people professionally managing the email environment1.

It’s also interesting to see that the NSA basically shot down any possibility of her getting a secured mobile device; I would have thought that providing the Secretary of State– the person who comes fourth in the presidential line of succession– with secure communications at all times would be a priority for them.

You can read the full story here.


  1. Of course, there is still the issue of all email traffic being unsecured and transmitted in plaintext.  But you could use a PGP solution to reduce risks there.

Uncertainty, the Fed, and the Economy

The New York Times published this opinion piece recently, discussing the Fed’s continuing decision to delay raising rates.  While the entire article is interesting, I believe that the final paragraph is the most insightful:

Adding to the frustration is that Fed policy is not to blame for the economy’s underperformance. Congress bears much of the blame because of its tightfisted federal budgets when more government spending is needed to offset feeble spending and investment in the private sector. Still, sound policy making by the Fed requires answering to conditions as they are, not as policy makers might wish they were.

Right now, we should be spending money to stimulate the economy– cutting back is incredibly short-sighted, and could seriously damage the recovery.  We should look back at other economic downturns from the past– the Great Depression, for example, was ended not by restricting government spending, but by massively increasing it (and by abolishing the gold standard, which led to the restriction in the first place)– and learn from them.  Economists have studied recessions for many years, and the Fed has done an admirable job of regulating the U.S. economy through this entire mess.  Politicians, however, often don’t understand the data, or are politically unable to make the best long-term policy.  For this reason, they should seek to reduce uncertainty in U.S. markets.

Economic uncertainty is a larger problem in the United States than we may care to admit.  John C. Williams, President and CEO of the Federal Reserve Bank of San Francisco, gave a 2012 speech in which he said that uncertainty was one of the largest problems facing the U.S. economy:

By almost any measure, uncertainty is high. Businesses are uncertain about the economic environment and the direction of economic policy. Households are uncertain about job prospects and future incomes. Political gridlock in Washington, D.C., and the crisis in Europe add to a sense of foreboding. I repeatedly hear from my business contacts that these uncertainties are prompting them to slow investment and hiring. As one of them put it, uncertainty is causing firms to “step back from the playing field.” Economists at the San Francisco Fed calculate that uncertainty has reduced consumer and business spending so much that it has potentially added a full percentage point to the unemployment rate.

Obviously, with unemployment at 5.0% today,1 having uncertainty raise the unemployment a full percentage point is no small matter.  And on average, economic uncertainty is increasing—according to data collected by Scott Baker, Nicholas Bloom and Steven J. Davis in “Measuring Economic Policy Uncertainty” over at PolicyUncertainty.com, economic uncertainty has been trending upwards for the past fifteen years.

A chart showing the rate of economic uncertainty, along with an upwards trend line, between April 2001 and April 2016.

Obviously, this trend is heavily influenced by the 2008 recession, but I find it interesting that it may be beginning to rise again.  This is possibly a result of the fluctuating oil markets combined with the slowdown of China’s economy; but no matter the cause, both the Fed and the government should seek to reduce uncertainty and continue to promote stability in the economy.


Listing image by William Warby.

  1. When he gave that speech, the unemployment rate was at 8.3%, and the Economic Uncertainty Index (EUI) was at 178.3; today the latest numbers for the EUI place the United States near 98.3.  I was unable to find any data correlating the EUI with specific unemployment rates, so at this time I cannot estimate how much of our present unemployment is a result of uncertainty in the economy.

Hacking the Hackers

Have you ever heard of Hacking Team?  It’s an Italian company specializing in “digital infiltration” products for governments, law enforcement agencies, and large corporations.  Simply put, they sell hacking tools.

You might think, given their business model, that they would monitor their own security religiously.  Last year, however, they were hacked.  Majorly hacked.  “Hundreds of Gb” of their internal files, emails, documents, and source code for their products were released online for all to inspect, as were their unencrypted passwords1.  Also released was a list of their customers, which included the governments of the United States, Russia, and Sudan– the last being a country controlled by an oppressive regime that has been embargoed by the E.U.2

Last Friday, the person claiming responsibility for the attack, “Phineas Phisher”, came forward with details about how they did it.  It’s worth reading through if you’re interested in security; if you’d like an explanation geared more towards the layperson, Ars Technica has a pretty good write-up/summary of the attack.

I was particularly struck by how they gained access to the network.  According to Phineas,

Hacking Team had very little exposed to the internet. For example, unlike Gamma Group, their customer support site needed a client certificate to connect. What they had was their main website (a Joomla blog in which Joomscan didn’t find anything serious), a mail server, a couple routers, two VPN appliances, and a spam filtering appliance… I had three options: look for a 0day in Joomla, look for a 0day in postfix, or look for a 0day in one of the embedded devices. A 0day in an embedded device seemed like the easiest option, and after two weeks of work reverse engineering, I got a remote root exploit…  I did a lot of work and testing before using the exploit against Hacking Team. I wrote a backdoored firmware, and compiled various post-exploitation tools for the embedded device.

Basically, to avoid detection, Phineas discovered a previously unknown vulnerability3 in one of their embedded devices (likely one of their routers), used it to get into the rest of the network, and then carried out the attack through that piece of hardware without anybody noticing.  No matter your feelings about the attack, this is an impressive feat.


 

  1. By the way, here’s some advice: if you are in security (or anything, really– this isn’t security-specific), you should really make sure your passwords are more secure than “P4ssword”, “wolverine”, and “universo”.  Use a passphrase instead.
  2. As an Italian company, this means that they were technically violating the embargo.
  3. These previously unknown vulnerabilities are called “zero-days” in computer security circles, because attackers find them before the company maintaining the software or device does— so once the company discovers one, it has had zero days to develop a fix and mitigate the damage.