Everybody is talking about “open” this or that – from Cisco making claims to new companies embracing open source code as a means of developing or accelerating their go-to-market strategies. But what does “open” really mean?
One challenge in using a broad and, you might say, amorphous term like "open" is that it can lead to confusion, or to the negative first impression that "this is just marketing." To get some perspective, let's look back a bit at how we got to this point and what the original intent of "open" was.
Open systems are computer systems that provide some combination of interoperability, portability, and open software standards. (“Open” can also refer to specific installations that are configured to allow unrestricted access by people and/or other computers; this article does not discuss that meaning.)
The term "open" was popularized in the early 1980s, mainly to describe systems based on Unix, especially in contrast to the more entrenched mainframes, minicomputers, and engineering workstations in use at that time. Unlike older legacy systems, the newer generation of Unix systems featured standardized programming interfaces and peripheral interconnects. Third-party development of hardware and software was encouraged, a significant departure from the norm of the time, when companies such as Amdahl and Hitachi had to go to court for the right to sell systems and peripherals compatible with IBM's mainframes.
The definition of "open system" became more formalized in the 1990s with the emergence of independently administered software standards such as The Open Group's Single UNIX Specification. Yet as client/server networking took hold in the late 80s and early 90s, switching vendors stuck with the older, tightly coupled design rationale. Every aspect of a vendor's solution was designed around tight integration of the OS with components and subsystems, from memory allocation, to managing CPU utilization, to the forwarding ASICs. Differentiation was driven up from a system architecture designed around custom components.
In the late 90s, the component industry and "white box" or original design manufacturers (ODMs) started to take more ownership of subsystem and system design. This started a move back toward some degree of abstraction: switches were built so that the CPU could easily be replaced, and interchangeable memory components were another example.
Related to this history, when we discussed the mainframe-to-PC transition we saw how the PC brought forth the idea of hardware and software abstraction. That led to the idea of the OS as something that could also be open, with a set of tools that fostered application development.
And then the server opened up. Over the last 15 years, much has changed.
On the server side, we have seen the transition from Microsoft Windows to Linux and new business models evolving from companies like Red Hat. Then we saw abstraction re-emerge through server virtualization, and white-box servers drove hardware-agnostic thinking once again, much as the PC did.
Now we are looking at a similar evolution on the network side. Some say SDN drives hardware-agnostic thinking. Having said that, many vendors still hold on to the mainframe idea that "my apps will run best only on my metal."
So to summarize our first idea: if the network follows this seemingly well-traveled path, then, just as we saw with early Unix systems, third-party development of hardware and software will be encouraged, a significant departure from today's networking norm.
Here's what hardware-agnostic thinking can bring to networks. First, as with PCs and servers, hardware abstraction creates operational consistency, which drives down costs over time. The second thing it brings is transparency: you can look inside and see not just the Intel silicon, but also gain the visibility to truly control your traffic. The idea of external programmability opens that Cisco Pandora's box, but in a good way: now you can decide how and when to forward traffic that needs that level of granular control.
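As an illustration of that external programmability, here is a minimal sketch of pushing a forwarding rule to a switch from outside the box. The controller URL, switch ID, and flow fields are hypothetical placeholders for whatever northbound REST API your SDN controller exposes, not any particular vendor's interface.

```python
# Illustrative sketch only: the controller endpoint, switch ID, and flow
# fields below are placeholders, not a specific vendor's API.
import requests

CONTROLLER = "http://controller.example.com:8080"  # hypothetical SDN controller

flow_rule = {
    "switch": "00:00:00:00:00:00:00:01",            # datapath ID of the target switch
    "priority": 100,
    "match": {"ip_proto": "tcp", "tcp_dst": 5432},  # e.g., database traffic
    "actions": [{"type": "OUTPUT", "port": 3}],     # steer it out a specific port
}

# Push the rule from outside the box; the forwarding decision is now yours,
# not something baked into the vendor's OS.
resp = requests.post(f"{CONTROLLER}/flows", json=flow_rule, timeout=5)
resp.raise_for_status()
print("Flow installed:", resp.status_code)
```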
So to a large extent, the idea of open network hardware delivers freedom to choose the capacity, port configuration and even color of your “box.”
Now, those early Unix systems had another attribute: standardized programming interfaces. So let's extend the idea of open networking to programmability, which takes us to the ability to tune the system, the goal of open-source projects like OpenStack.
So how do we tune an OS or a "stack"?
In the case of OpenStack, plugins for your network OS ensure that an OpenStack command is accepted and acted upon: the network can be programmed to conform to an application's needs. From the stack perspective, APIs at each level shape the degree of tuning you can do to better suit your needs.
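To make that concrete, here is a minimal sketch of an application programming the network through the OpenStack Networking (Neutron) v2.0 REST API. The endpoint and token are placeholders, and it assumes the vendor's Neutron plugin translates the call into actual device configuration.

```python
# Minimal sketch: create a network for an application tier via the OpenStack
# Networking (Neutron) v2.0 API. Endpoint and token are placeholders; the
# vendor's Neutron plugin is what turns this call into switch configuration.
import requests

NEUTRON = "http://openstack.example.com:9696"  # placeholder Neutron endpoint
TOKEN = "<keystone-auth-token>"                # placeholder auth token

payload = {
    "network": {
        "name": "app-tier-net",   # hypothetical network for one application tier
        "admin_state_up": True,
    }
}

resp = requests.post(
    f"{NEUTRON}/v2.0/networks",
    json=payload,
    headers={"X-Auth-Token": TOKEN},
    timeout=10,
)
resp.raise_for_status()
print("Created network:", resp.json()["network"]["id"])
```

The point is not the specific call; it is that a common, open API sits between the application's intent and whatever hardware happens to be underneath.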
So to summarize this aspect of open: APIs help support open-source projects, the standardization lies in the common set of APIs, and the result is a stack tuned to better meet your specific needs.
And from the network OS point of view, just as you tune a server OS to meet your needs, tuning the network OS for specific application environments is something to consider.
If history repeats itself, we will see more hardware abstraction on the networking side, and we will see more and more agreement amongst vendors on a common set of APIs.