Thanks for the responses. All great questions. I'll start with the last.
Ease of use and user experience are among the main goals here. Existing virtual networking (both VPN and enterprise-scale stuff) is inconvenient: annoying even for experts, and impossible for non-experts. What I want is to make virtual networking as easy as, say, joining a Skype conference call. It's not there yet, but the technology is designed to enable that level of usability. It's not that you can't do what this does with other tools; it's that doing so takes hours of jiggering with config files, port forwarding, and other plumbing, and is basically impossible for regular people.
Usage: right now there are about 400 users online all over the world. It's gotten fairly popular among Chinese business users collaborating with non-Chinese users over the wall. There are a few paying customers, but not many. I haven't advertised it very heavily yet, since it's still in beta and I want to make sure all the major issues are ironed out. The actual protocol has been very stable for a while... it's been over six months since the last significant bug report. But there are still rough spots around OS integration on Mac and Windows, and a few other things I want to polish up before inviting in the hordes.
On test networks in VMs I've spun up over 100,000 nodes. I'm also doing some refactoring right now to make it easier to test on a 100% emulated pseudo-net, which will let me run giant tests with tens or hundreds of millions of nodes. My bandwidth calculations show that the existing supernode architecture can accommodate up to about two million concurrent users. Beyond that some rearchitecting will be required, but I already have a strategy mapped out that doesn't require changes to the client-side protocol.
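To give a feel for how that kind of capacity ceiling gets estimated, here's a back-of-envelope sketch. Every number in it is an illustrative assumption of mine, not ZeroTier's actual figures or methodology:

```python
# Back-of-envelope supernode capacity estimate.
# All inputs below are illustrative assumptions, not real ZeroTier numbers.

def concurrent_user_capacity(
    supernodes: int,
    uplink_bps: float,      # usable bandwidth per supernode
    keepalive_bps: float,   # steady-state control traffic per connected user
    relay_fraction: float,  # share of users whose data must be relayed
    relay_bps: float,       # average relayed data rate per relayed user
) -> int:
    """Rough ceiling on concurrent users the supernode tier can carry."""
    per_user = keepalive_bps + relay_fraction * relay_bps
    return int(supernodes * uplink_bps / per_user)

# Made-up example: 4 supernodes on 100 Mbit/s links, ~100 bit/s of
# keepalives per user, 3% of users relayed at ~50 kbit/s each.
print(concurrent_user_capacity(4, 100e6, 100, 0.03, 50e3))  # → 250000
```

The relayed fraction dominates: cutting it (by getting more p2p links to punch through) raises the ceiling far more than trimming keepalive traffic does.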
Why aren't the supernodes in a config file? That goes back to usability. The idea is that the network is globally flat and unified: anyone can join any network, etc. Fragmenting the supernodes would fragment the network, since they're the anchor points used for rapid provisioning of p2p links. (They also relay traffic if you can't establish a p2p link, which is the case for about 3% of users right now.) If an OSS user wanted to change Defaults.cpp and recompile, they could. In the near future I'm going to make the supernode list hot-upgradeable via a signed configuration that can be pushed out to the network, so I don't have to do a software release if the supernodes or their IPs change.
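The signed-configuration idea is roughly: the client ships with a built-in list, and will only swap it for a pushed replacement whose signature verifies. Here's a minimal sketch of that pattern. The layout and field names are hypothetical, and I use a stdlib HMAC as a stand-in for the signature; a real design would verify a public-key signature (e.g. Ed25519) against a key baked into the client, so the network can't forge updates:

```python
# Sketch of a hot-upgradeable, signed supernode list. Hypothetical format;
# HMAC stands in for a real public-key signature for the sake of a
# self-contained example.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # placeholder for a public key embedded in the client

def sign_supernode_list(supernodes: list[dict], key: bytes = SIGNING_KEY) -> bytes:
    """Serialize the list and prepend a 32-byte authentication tag."""
    blob = json.dumps(supernodes, sort_keys=True).encode()
    tag = hmac.new(key, blob, hashlib.sha256).digest()
    return tag + blob

def load_supernode_list(signed: bytes, key: bytes = SIGNING_KEY) -> list[dict]:
    """Verify the tag before trusting the pushed list over built-in defaults."""
    tag, blob = signed[:32], signed[32:]
    expected = hmac.new(key, blob, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad signature -- keep the compiled-in defaults")
    return json.loads(blob)

# A pushed update replaces the compiled-in list only if the signature checks out.
update = sign_supernode_list([{"id": "8056c2e21c", "ip": "198.51.100.1/9993"}])
print(load_supernode_list(update))
```

The point of the scheme is that trust stays anchored in the shipped binary: a compromised supernode or man-in-the-middle can drop an update, but can't substitute its own list.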
A supernode is just a regular node designated as such. Supernodes run exactly the same software, so there's no closed-source code there. The only closed-source code in the ecosystem right now is the web control panel at zerotier.com, but the underlying netconf master that actually allows provisioning of virtual networks is open (see netconf-master/ in the source repo). I might open up the control panel too in the future... not sure yet. It depends on what I figure out revenue-model-wise.
The protocol is designed to enable evolution toward a more mesh-like setup with a reduced or even eliminated role for supernodes, but I'm not willing to sacrifice user experience or speed for that. That makes it a really, really hard problem. I have some ideas, but right now I'm focused on UX, as I said. If I can get a real business under this, I'll have resources to spend on that. Eliminating the supernodes appeals to me both for the decentralized networking geek / cypherpunk factor and because the supernodes cost me money to run. (Not a lot, but something.)
On crypto:
I'm not sure I agree on ECC. Unless someone can show a real attack, I think it's FUD. It hasn't been around as long as RSA, but it has existed for quite a while. Not only that, but right now there is a monstrous cash bounty on breaking ECC in the form of Bitcoin. ECC is vulnerable to quantum computers, but so are DH and RSA, and right now anyone who can build a quantum computer with enough coherent qubits to crack these can jackpot every cryptocurrency and steal a huge proportion of their current collective value. (If Bitcoin suddenly crashes and D-Wave gets very rich... :) I respect Bruce's opinion, but lots of other very well respected cryptographers, including DJB, do not agree. I did choose DJB's curves over the NIST ones, both for security reasons and because of the relative cleanliness of DJB's 25519 code vs. the hideous crawling mess that is OpenSSL and most other crypto libraries. I wanted a neatly encapsulated, portable implementation.
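For readers less familiar with what the curve is actually for: it's Diffie-Hellman key agreement. Here's a toy finite-field DH to illustrate the pattern; this is only an analogy. Curve25519 does the same exchange with elliptic-curve scalar multiplication instead of modular exponentiation, and these toy parameters are not safe for anything real:

```python
# Toy Diffie-Hellman over a prime field, for illustration only.
# Curve25519 ECDH follows the same shape with elliptic-curve operations.
import secrets

P = 2**127 - 1  # a Mersenne prime; fine for an arithmetic demo, NOT for real crypto
G = 3           # generator, also just for demonstration

a = secrets.randbelow(P - 2) + 1  # Alice's private scalar, never transmitted
b = secrets.randbelow(P - 2) + 1  # Bob's private scalar, never transmitted

A = pow(G, a, P)  # Alice's public value, sent in the clear
B = pow(G, b, P)  # Bob's public value, sent in the clear

# Each side combines its own secret with the other's public value and
# arrives at the same shared key material, which then keys the symmetric
# cipher and MAC (Salsa20 and Poly1305 in ZT1's case).
assert pow(B, a, P) == pow(A, b, P)
```

An eavesdropper sees only G, P, A, and B; recovering the shared secret from those is the discrete-log problem, which is what makes the exchange work.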
But the bottom line is this: the odds of someone breaking ZT1 by breaking ECC, Poly1305, or Salsa20 are almost infinitely lower than the odds of someone breaking it by finding a flaw in the protocol or a bug in the implementation. If you look at attacks against SSL, SSH, etc., pretty much all of them are attacks against the implementation, not the crypto. The only attacks against the crypto itself that I'm aware of are against RC4, which is widely regarded as a weak cipher. Even those are very difficult and require a lot of traffic analysis, compute power, and a man-in-the-middle position to have a chance of pulling off. But even RC4 is more than good enough to keep out all but the most determined and well-funded attackers.
If your adversary is the NSA or another nation-state, I'd suggest defense in depth: layer your crypto and use multiple implementations and different algorithms. Also use air-gapped networks, disposable read-only OS installs like Tails, etc., to make sure the attacker can't just backdoor your system and steal your key. That's the most likely way your crypto will get broken. That's the sort of paranoia that would be required to stay secret against a real, well-funded adversary with cryptographers on staff. I'd also stay below the radar; I'm not aware of any algorithm that can stand up to a rubber-hose attack.
The reason I include a bit of a warning is that ZT1 is a new code base. I tried to use secure coding techniques throughout and to think everything through, and I've run it past some security gurus; none of them could find a problem. But it hasn't been around as long as other things. Personally, if I were handling very secret data like credit card numbers or pictures of dead aliens, I would always practice defense in depth rather than relying on one implementation's security. We've seen that even tried-and-true things like SSL can be riddled with bugs for years before anyone finds them... or at least before anyone publishes that they found them. I wonder how long the NSA knew about Heartbleed, CRIME, and other attacks?