Nov. 27, 2023
A company that we work closely with, established in the mid-2000s, runs a successful software business. By 2020 they had grown significantly, with hundreds of deployments of their cloud-hosted and on-prem software. Their success was due in part to their timely addressing of scalability bottlenecks as the business grew.
One such bottleneck was supporting the on-prem subset of their deployments. Ops staff who supported these systems had to juggle a slew of customer-provided VPN and remote-access tools to connect to the various customer networks. Each one required different login accounts and OTP tokens, or mandated hopping through RDP or Linux jump boxes. Once connected to a deployment, necessary internet access was often limited by HTTP proxies or filters. Accounts were occasionally locked due to inactivity, requiring a call to the customer's support desk and slowing the overall process. Many client networks also changed over time as policies and tools evolved, requiring adjustments to ops support procedures. On-prem support had become a clear burden that was sapping resources and limiting growth.
As users of OpenVPN since 2005, we began thinking about how it could be used to solve this problem. We came up with a simple hub-and-spoke overlay network design, based on a central OpenVPN service that would join all remote systems together. It would allow support staff to access any remote system by simply connecting to the OpenVPN hub. It was critical that the hub could block inter-server communication for obvious security reasons, and that remote servers could establish links to it via HTTP proxies if needed (accepting the TCP-over-TCP trade-offs that implies). Secondary goals included providing static IP addresses to connected servers, automatically generating FQDNs for them, permitting selective inter-server communication, and limiting the set of servers a support person could access based on their OpenVPN login.
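To make the design concrete, here is a minimal sketch of what such a hub configuration might look like in plain OpenVPN (illustrative only; the subnet, file paths and names are placeholders, not Evon's actual configuration):

```
# Hub (server) side: minimal illustrative OpenVPN config
port 443
proto tcp                 # TCP so spokes can dial out through HTTP proxies
dev tun
topology subnet
server 10.111.0.0 255.255.0.0   # overlay subnet (placeholder)

# "client-to-client" is deliberately omitted: traffic between spokes is
# forced through the hub's kernel, where iptables rules can block or
# selectively permit inter-server communication.

client-config-dir /etc/openvpn/ccd   # per-server static IPs; each ccd file
                                     # holds e.g. "ifconfig-push 10.111.0.5 255.255.0.0"
keepalive 10 60           # detect dead links so spokes reconnect promptly
persist-key
persist-tun
ca ca.crt
cert hub.crt
key hub.key
dh dh.pem
```

On the spoke side, OpenVPN's `http-proxy` directive lets a remote server tunnel out through a customer's proxy, while `keepalive` and `persist-tun` give the automatic reconnection described below.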
The MVP was completed within 10 days and achieved all of the set goals. The resulting CLI-driven product was a game changer for our client and has been in use in their production environment for 18 months. It proved to be amazingly stable, thanks to OpenVPN. A simple command would return a list of online deployments at a glance, and each could be connected to using SSH. Connectivity went from untenable to trivial. Links were resilient and auto-reconnected after temporary connectivity outages. Orchestration using Ansible was suddenly possible via the overlay network, as sketched below. It even enabled the use of Zabbix as a monitoring solution for on-prem systems, saving money on expensive cloud-based alternatives.
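As a taste of why this mattered: once every deployment has a stable overlay FQDN, an ordinary Ansible inventory can address them all directly (the hostnames below are hypothetical, not real Evon names):

```
# inventory.ini - on-prem deployments reached via the overlay network
[onprem]
customer-a.overlay.example.com
customer-b.overlay.example.com

[onprem:vars]
ansible_user=support
```

A one-liner such as `ansible onprem -i inventory.ini -m ping` then reaches every on-prem system over the overlay, something that was previously impractical across dozens of disparate customer VPNs.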
We were encouraged to develop the product further, and committed to creating a Web UI and JSON REST API with supporting tooling to make setup easy. We added support for connecting both Windows and Linux servers to the OpenVPN hub. We chose AWS Marketplace as a way to distribute the product (as a single-instance AMI) to make it simple for others to onboard. While developing, we noticed others in the overlay network space, including Tailscale and ZeroTier. We were encouraged to see that a market existed for tools like this, noted our points of differentiation, and persevered.
Evon, a.k.a. Elastic Virtual Overlay Network, is the culmination of the above journey. We’re happy to provide it free (as in freedom!) via GitHub, and as a hosted service via EvonHub.com and AWS Marketplace.
Because Evon provides a generic, private, virtual overlay network on top of the Internet, we’re hopeful that it can be used to solve other problems in the ever-changing world of global network connectivity.