Archive

Archive for the ‘Pica8 Deep Dive’ Category

Jul
07

It’s summer! And that usually means most people go on vacation; in my house, it means we go camping at least once. We recently went to Camp Mather, up in the Sierras near Yosemite.


At the camp, you can stay in the tent section or the cabin section. This idea of ‘sub camps’ struck me because we are seeing SDN break into sub camps as well. Here, by camp we mean a group of users focused on a specific set of problems they want SDN to solve.

Let’s take a look at the four camps we see.

Enterprise Data Center – This is where the idea of network virtualization and SDN started. The key driver was extending application and server agility to the network. The common use case we see today is extending high availability and disaster recovery.

We think of VXLAN (or Labeled BGP) for overlays and see daily water balloon battles between the different cabins, with the Cisco ACI / APIC group challenging pretty much every other team. VMware has its lead because of its market presence and NSX (from Nicira), built around the idea of overlays to enable virtual machine (VM) mobility between network domains. Of course, we see lots of challengers here, from Midokura and PLUMgrid to Brocade’s Open Vyatta.

Enterprise LAN – This camp is still nascent compared to the Enterprise Data Center. Companies like HP have launched an App Store and shown early use cases around collaboration (Microsoft) and security (DDoS protection for BYOD). These are real pain points for the enterprise: we all see the plethora of new devices we want to bring to work, and CIOs consistently keep security at the top of their spending priorities.

We saw early adopters looking at OpenFlow as a means for universal programmability. In this camp, users want turnkey solutions ‘that just work.’ Companies like Ecode Networks and Pertino aim to create dashboards that give users ‘Apps’ to click on. We will get there!

Carrier/SP Data Center – This camp is very active; most early adopters of SDN are in this area. The benefit here is similar to the Enterprise Data Center: driving network agility. For service providers, this takes on many faces, from DR-as-a-Service (DRaaS) to ensuring that secure multi-tenant services can scale.

All the players from the Enterprise Data Center are here as well. OpenStack is the leading orchestration platform being tested, and integration with OpenStack is key: whether you are using the latest networking plugin (Neutron at the time of this blog) or leveraging OpenFlow through OpenDaylight, you can find a way to jump on this wave. Service providers want their services to be more fluid and provisioned more dynamically. And of course, the idea of tailoring services for each customer is unthinkable without some degree of SDN to bring more policy-based thinking to networking.

Service Provider WAN – Whether you are a large carrier or a regional service provider, your goals are similar: find ways to drive up revenue per user (called average revenue per user (ARPU) on the mobile side), and take share from your competitors. SDN can help, as it offers a more real-time means to build services that are flexible and customer-controllable. We see leading-edge service providers doing this today in all regions of the globe.

The reason this is such a hot camp is that service provisioning here means not only manual changes but also truck rolls. Whereas you can send a data center technician down a row to modify 50 cabinets, it takes far more time to roll a truck and a technician to, say, 50 buildings across your service area.

For this camp we are seeing the emergence of white boxes for Network Function Virtualization (NFV) as part of a virtual CPE device. And of course, white box switches aggregate these vCPEs. Companies like Viptela, Versa and Nuage (ALU) are seeing success. We also see SDN-WAN companies like Glue and Silver Peak jumping in.

These four camps will grow, morph and hopefully we can see some lessons learned shared between the camps. So this summer, make sure you pick your camp, get started and get into some water balloon tosses to keep cool! SDN is heating up in a camp near you!

Jun
23

June usually signals two things in my household: the end of the school year, and the beginning of the trips to the multiplex for the latest family-friendly animated movie. This year is no different, and from everything we’ve heard, Disney / Pixar’s latest entrant, Inside Out, is a winner.

While animated and emotion-based avatars are cute and funny, it’s the reverse concept that’s driving a lot of service provider thinking. And that is, thinking from the ‘Outside In’.

What do I mean by this? It all depends on the point of view. For a service provider that’s managing a network, be it global, regional, or metro, there’s a natural tendency to start from the core and extend out to the edge. For this network, it’s important to have a reliable, super-fast core – big, fast iron that can process packets and bandwidth at very high rates.

This is certainly important, but in order to differentiate and add value to their customers, service providers are investing more at the edge. They are thinking about how to wrap up and package network functionality, offer these up as monetized services, and distribute these all the way to the customer premises. They have to do this, and the requirements are very different from the core:

  • Disaggregation – Having cost-effective, reliable hardware platforms to deliver services to the customer is the first consideration. Disaggregation of software from the network hardware has many advantages in this regard. The white box trend enables providers to objectively look at the bandwidth, performance, and scale requirements, choose the hardware accordingly, and not have to worry about rip and replace upgrades and truck rolls down the road.
  • Software-defined networking (SDN) – When it comes to flexibility and dynamic provisioning, it’s all about the software, and it’s no different in this scenario. Service providers are looking for a sustainable OpEx model when it comes to provisioning and scaling their services. SDN provides a way for them to turn these services on (or off as the case may be), without having to manually touch each and every box out at the customer premises.
  • APIs and automation – More and more, the game is about service differentiation. The conversation has evolved from just bandwidth to bandwidth plus services such as firewall, voice, DPI, application acceleration, and whatever else might be on the horizon. The network devices at the edge need to have the right interfaces and programmability to handle integration with service gateways.

These requirements scream for white box networking with SDN capabilities. It’s what makes the portfolio at Pica8 more and more exciting. Recently, we announced support for two new platforms from Edge-Core, including one that supports Power-over-Ethernet (PoE).

Taking a step back, this completes the picture for service providers, giving them the flexibility to spin up new services all the way out to the customer’s mobile device (in theory). So whether it’s a cloud bursting from a private to a public cloud, or instantiating a data center instance for disaster recovery (DR), or rolling out voice, wireless, or IoT applications to the small business or home, Pica8 has the right software on the right size hardware for those solutions.

When providers factor in the competing drivers of service differentiation vs. cost vs. scale, it’s a no-brainer. In this brave new world, service providers are embracing ‘Outside In’ thinking, and with this comes less Disgust, Anger, Sadness, and Fear. And most definitely, more Joy.


Jun
02

With SDN and white box news flying fast and furious across the Internet, it can be hard to keep up with really great articles. Twitter is a great place to monitor breaking white box and SDN news, but where do you start? This blog presents a list of top White Box SDN Twitter handles you should follow to keep up. The following Tweeters have their fingers on the pulse of white boxes and SDN. See the list below or follow the whole group at Pica8’s SDN 45.

  1. @bigswitch – Big Switch Networks
  2. @BradCasemore – Brad Casemore
  3. @capveg – Rob Sherwood
  4. @CIMICorp – Tom Nolle
  5. @Cloud_SDN – Cloud SDN
  6. @colin_dixon – Colin Dixon
  7. @craigmatsumoto – Craig Matsumoto
  8. @CumulusNetworks – Cumulus Networks
  9. @DanPittPaloAlto – Dan Pitt
  10. @e_hanselman – Eric Hanselman
  11. @ecbanks – Ethan Banks
  12. @etherealmind – Greg Ferro
  13. @IEEESDN – IEEE SDN
  14. @ioshints – Ivan Pepelnjak
  15. @IPv6Freely – Chris Jones
  16. @jonisick – Joe Onisick
  17. @JRCumulus – JR Rivers
  18. @martin_casado – Martin Casado
  19. @mbushong – Michael Bushong
  20. @mitchwagner – Mitch Wagner
  21. @NetworkedAlex – Alex Walker
  22. @NickLippis – Nick Lippis
  23. @ONLab_ONOS – Open Networking Lab
  24. @ONUG_ – Open Networking User Group
  25. @OpenDaylightSDN – Open Daylight Project
  26. @openflow – Open Networking Foundation
  27. @OpenSourceSDN – Open Source SDN
  28. @opnfv – Open NFV
  29. @Pica8 – Pica8
  30. @Prajaktaplus – Prajakta Joshi
  31. @ProjectONIE – Project ONIE
  32. @rayno – Scott Raynovich
  33. @SDN_GIRL – Rita Younger
  34. @SDNBeckmann – Curt Beckmann
  35. @SDNBlogs – SDN Blogs
  36. @SDNspace – SDN Space
  37. @sdnworld – SDN World News
  38. @sdxcentral – SDx Central
  39. @sdxtech – SDx Tech
  40. @SearchSDN – Search SDN
  41. @sonoble – Steven Noble
  42. @SunayTripathi – Sunay Tripathi
  43. @The_New_IP – The New IP
  44. @vmwarensx – VMware NSX
  45. @WireRoy – Roy Chua
  46. @fast_lerner – Andrew Lerner
  47. @nfv3sdn – NFV & SDN
  48. @Anthony_Rocca – Anthony Rocca

Apr
17

I have great respect for my previous company, Cisco Systems, and truly believe that the company has successfully brought a disruptive approach of applying network technologies to answer major business challenges.

Working at Cisco was like being conferred an honorary doctorate from an Ivy League school in engineering, management, leadership and entrepreneurship simultaneously. The experience of working in multiple lines of business helped shape my mindset on how best to manage innovations and productize them so that they were mutually beneficial to the customers and the company. This productization often required an intense validation process, which occasionally resulted in some really cool technology ideas never seeing the light of day. The thoughts presented in the rest of this blog are an attempt to share my experience and possibly dispel some myths in the industry.

Myth – One Vendor Can Answer All Networking Requirements

Network vendors have, for the longest time, enjoyed a monopoly (or duopoly). If an organization had IT infrastructure requirements, there were a handful of vendors that would satisfy all their needs. This was great for everyone! As a measure of risk mitigation, a famous unwritten policy surfaced that “you would not lose your job if you chose vendor C (or A, H, J).”

This is because the network is a special function that requires special skills, and the vendors provide the organization with all the knowledge needed to operate their equipment. As customers adopted the unwritten policy and filled vendors’ coffers globally, the vendors faced tremendous pressure to continue to accelerate their business. While early competitors provided more focused solutions, rather than simplifying the network layer, networking increased in complexity due to multi-vendor deployments and operational nuances. This brought the standards bodies to the fore, which caused further delays in delivering solutions or features. The standards bodies became a battleground for developers, since every expert had a unique way to solve the problem at hand and a laborious process of converging on an acceptable solution ensued. The standards became so vague that vendor implementations would not even interoperate. The customer tended to lose out in this battle, since they did not have the control they needed over the infrastructure they owned.

On the other hand, the revenue-generating infrastructure, such as the server and the software layers, was fast evolving. An evolution toward simplification and accelerated application development occurred because the open source community empowered the software developer. With that, developers built customized, powerful, yet simple software stacks that tackled some of the most complex issues, such as server scaling, web experience improvement and acceleration, security and many others.

Frameworks quickly emerged that catered to every environment and solved most issues. The hardware differences and related issues quickly evaporated, enabling organizations to focus and deliver revenue-generating services more quickly and with higher success. The open-source community was the major force behind making this transition a resounding success. Powered by a collaborative open-source environment, developers leveraged the Linux operating system to integrate software components and create turnkey systems. The LAMP stack, mashups and open APIs were all instrumental in transitioning to next-generation web architectures and services. The influence spread across all segments of business and consumer markets and considerably changed the way business was done, whether that meant the emergence of social networking, Web 2.0 interaction with end customers, self-service models, and so on.

Clearly, the network layer lacked the agility, evolution and acceleration that the software layer perfected to adapt to the changes in the industry, which prompted an industry-wide question: “Can my software define my network?”

As you might have realized, this is a loaded question. Software-defined networking (SDN) is a different end game for different vendors. Traditional vendors view the SDN concept primarily as a network and element management solution, or a normalized way to communicate with the software on the equipment. Combining analytics with some auto-configurability and visibility into the network layer creates a sense of control. While this does provide some answers to the question above, it is certainly not complete. Ultimately, the point of networks being defined by software is to gain control over the network layer and customize it so that business needs are addressed without spending an arm and a leg.

The biggest technical hurdle is to “un-learn” what we know, perceive and understand about networking and re-think how to evolve networks to suit specific needs. This does not mean routing or switching are forgotten; more importantly, it means making the network an agile and innovative platform that is conducive to rapid application development. Having an API is a critical and significant first step toward creating an open network platform. Further development by leveraging the open-source community is critical in matching the benefits realized by the server and software layers.

For a single vendor to provide for all network requirements is almost impossible, since no single vendor has the expertise to build every software stack (see ecosystem). The ecosystem is essentially aimed at augmenting traditional networks with new capabilities as a first step; nirvana is a state where the software stack can seamlessly program network services the same way it programs servers. So notions of “next-generation SDN” and “more than SDN” are really a rush by vendors to adopt SDN rather than a realistic attempt to enable the underlying intent: the network as a platform.

Clearly, SDN needs much more effort than a marketing gig to re-package decade-old features in a trendy new way. Watch out for those sharks! This is not a race to who provides the best definition of the network as a platform or SDN. Instead, it’s an approach that opens up the network layer to bring in more network control and efficiency in order to answer those critical business challenges.

Apr
16

The Flip Side of Overlays

Why Labeled BGP on White Box Will Disrupt How We Buy Routers

For those of us who are old enough to have, or remember, a record collection, there is familiarity (and probably a little nostalgia) with the term “flip side.” In this context, the flip side is the B-side of a standard vinyl record: the secondary recordings or bonus tracks that weren’t as heavily marketed as their A-side counterparts.

Why am I writing about an antiquated music medium? And what does this have to do with networking? I bring this up because it’s an interesting parallel with what’s happening with network overlays – and specifically, how these are viewed from the “flip side,” or in other words, the different points of view from the consumer and the provider.

First off, some background. In the simplest terms, an overlay is a logical network that enables you to create paths and connections on top of (and in many cases, regardless of) the physical connections between the end points. More importantly, overlays are a critical construct because they enable network operators to create more virtual subnets – which in turn support multi-tenancy, VM mobility, and service differentiation.

These are all interesting for many different audiences:

  • For enterprises, they want to be able to leverage their IT efficiently (read: elastic and self service) across a spectrum of on-premise and in-the-cloud services. In this hybrid cloud model, they want to be able to create logical networks, share data and information easily and securely across geographies, and get access to differentiated services when they need them (e.g. traffic engineering, application acceleration, monitoring, and security).
  • For providers, they want all of the same things that the enterprises do, with the ability to monetize, and without any additional burden on their existing IT operations, staff, and budget.

Enter overlay technologies.

One approach that we’ve been hearing about a lot is VXLAN.

A big reason for this is the laundry list of vendors that have backed it – Cisco, Arista, Broadcom, and of course, VMware (based on the capabilities of their NSX controller) just to name a few. One of the reasons VXLAN was introduced was to address the problem of limited logical scale and to create layer 2 adjacencies across different IP networks. It all sounds great – particularly if you have infrastructure that understands VXLAN and can behave as a VXLAN Tunnel End Point (VTEP).
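On the scale point, a quick sketch helps show why VXLAN was attractive in the first place. This is not from the original post; it is a minimal Python example that packs the 8-byte VXLAN header defined in RFC 7348, whose 24-bit VNI field allows roughly 16 million logical segments versus the 4,094 usable IDs of a 12-bit 802.1Q VLAN tag.

import struct

def vxlan_header(vni):
    # 8-byte VXLAN header (RFC 7348): flags byte with the I bit set,
    # 3 reserved bytes, a 24-bit VNI, and a final reserved byte.
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08                      # I flag: VNI is valid
    return struct.pack("!B3xI", flags, vni << 8)

print("header length: %d bytes" % len(vxlan_header(5000)))
print("%d possible VNIs vs %d usable VLAN IDs" % (2 ** 24, 2 ** 12 - 2))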

So what’s the flip side? Labeled BGP of course.

For providers, VXLAN is an option, but the downside of this is that it’s a relatively new protocol. It might require new equipment to support VTEP functionality, and it will definitely require education and training on how to build networks with VXLAN.

Combine this with the fact that if you’re a service provider, you’ve been using overlay technologies for decades. You have built up an infrastructure based on MPLS and BGP, and have used these protocols and technologies to develop a rich mix of services within your own networks, and between peer networks, to stitch together the services that your customers need.

A Comparison of VXLAN and Labeled BGP

In this instance, Labeled BGP is a perfectly viable solution. Service providers have extensive experience and tools to solve these problems across the WAN.  They can use MPLS to establish tunnels within and between datacenters, and Labeled BGP as the signaling mechanism to exchange the MPLS labels between BGP peers. Naturally, the providers are going to gravitate to the technology that is more familiar to them. The challenge here is that this feature has traditionally only been made available on higher-end routing platforms – where interfaces, ports, and bandwidth tend to be much more expensive.

Up to this point, white box conversations have centered in the data center – where commodity hardware, merchant silicon, and a growing number of hardware OEMs and ODMs have made it an easy proposition for top of rack switches.

But that’s just the beginning. As more and more functionality moves to software, the white box model is going to continue to disrupt the networking world in new and interesting ways. Labeled BGP and edge routing is just one such example. To date, Pica8 is the first vendor to offer this functionality as a software license that can be ported onto commodity white box hardware.

This means that providers building MPLS tunnels with protocols like Labeled BGP can do so with greater operational freedom and flexibility. They can deploy hybrid cloud services for their enterprise clients, easily manage the tunnels required for multi-tenant environments, and rapidly deploy new and differentiated services with a more familiar tool set. One, they don’t need to implement VXLAN to do this – a newer, less familiar protocol that requires (potentially new) hardware VTEPs. And two, they don’t need additional investment in a much more expensive edge routing solution.

At the end of the day, there really isn’t a right or wrong answer to this. An enterprise might choose VXLAN because of what they are doing with VMware NSX or their server VM infrastructure. But a provider might look at the same challenge and come up with a very different solution. Remember, much like a classic record on vinyl, don’t forget to take a listen to the flip side. You never know what types of gems you may find.

Apr
01

White Box Acronym Soup

The LightReading blog post, Open Networking Acronym Soup, covers all the interest groups, communities and standards bodies that are driving this idea of Open Networking, which in itself is a grab bag of topics around SDN, NFV and of course white box/bare metal switches. The post struck a chord with me, at first because the author, Marc Cohn, is a good guy and a friend.

But secondly, and more importantly to everyone else, it points out his astute observation that “we” (people, users and vendors) try to simplify stuff by using acronyms. I agree. In my past job at Infoblox, people always wanted to know what DDI meant; I would reply in my standard excited way, “DNS, DHCP and IPAM,” and most would agree that DDI was easier to say. So let’s take a look at the acronym soup and examine several key terms you should know about white boxes. I will lay them out here, keep it simple, and break the list into two sections: what you should know now, and what you need to keep an eye on…for now.

OCP – Open Compute Project – This is an organization driven by Facebook. The end game is to foster a community that uses all the same tools and tricks to make any switch operating system (OS) operate with any bare metal switch. While certainly a lofty goal, the last OCP event was the best-attended ever with a host of startups and many key players involved, including Dell, HP and Juniper.  The objective is to create a plug-and-play ecosystem, where you buy an OCP switch, and load on an OCP operating system—and bam—it just works.

ONIE – Open Network Install Environment – ONIE is an open source “install environment” that acts as an enhanced boot loader utilizing facilities in a Linux/BusyBox environment and was officially adopted by OCP in 2014. This small Linux OS enables end users and channel partners to install the target network OS as part of data center provisioning in the fashion that servers are provisioned. Most, if not all, of the white box makers are adopting ONIE. You should make sure you have ONIE on board the bare metal switch you buy if you want to try more than one OS.

ASIC – Application Specific Integrated Circuit – Sure, I bet you all know this one… This is one of the key components that makes a switch a switch and different from a CPU-driven server (switches have CPUs as well, of course). The ASIC has the hardware features that drive functionality at scale. For example, you don’t just want a line-rate Gigabit Ethernet port, you also need a line-rate port with wire-speed access control lists (ACLs) or quality of service (QoS) marking functionality, and that functionality is baked into the ASIC.

ZTP – Zero Touch Provisioning – ZTP has been particularly useful for data center servers, where scale and configuration similarities across systems make automation a necessity. In the server world, the Linux-based OS has revolutionized on-boarding and provisioning. Rather than using command-line interfaces (CLI) to configure these systems individually, administrators can use automation tools to roll out the OS software, patches and packages on new servers with a single command, or the click of a mouse. Now you have ZTP on many switch platforms.

WB or BMX – Yet more acronyms, for white box and bare metal switches.

Developments to Watch over the Next Year

ONL – Open Network Linux – ONL was recently adopted by OCP as a standard Linux distribution for all bare metal switches, with apparent support from many white box makers. With the rise of cloud and DevOps methodologies, we’re seeing increased interest in network disaggregation. End users, especially organizations where Linux has been widely adopted, can derive significant operational efficiencies by running Linux on a variety of network hardware. Supporters of ONL argue that by standardizing on one Linux distribution, the open networking community can easily leverage bare-metal switches and explore the range of benefits that network disaggregation can offer. I agree; it keeps things simple, and ONL is exactly that idea.

ACPI – Advanced Configuration & Power Interface – Derived from the PC industry, this approach is currently being fostered in the OCP and is widely used in the server/desktop/laptop industry. The idea here is that even if you have the hooks to the CPU and the ASIC, you still need to make sure the fans, sensors and lights on the box are functioning as expected after you port a new OS to your device. So there is considerable action behind the scenes to port to a new “box,” even if the OS works on another box with the same exact ASIC and CPU. Advocates maintain that eventually hardware compatibility lists will go away, and when you put an OCP OS on an OCP bare metal switch it consistently works without much fanfare.

SAI – Switch Abstraction Interface – This is a recently initiated OCP project to drive a consistent API framework across different ASICs. Today each ASIC manufacturer has its own API set, which makes it difficult for end users to adopt them in a meaningful way. Of course you don’t want to be an ASIC engineer or have to build your own entire switch, but you may want enough functionality to adjust aspects, such as counters or the packet processing pipeline for unique traffic patterns that are indicative of your environment.
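The idea is easier to see in code. Here is a toy Python sketch in the spirit of what such an abstraction layer does (it is not the actual SAI specification or any ASIC vendor’s SDK): applications program one common interface, and each ASIC’s quirks live behind it.

# Illustrative only: a toy abstraction in the spirit of SAI, not the actual
# OCP SAI specification or any vendor SDK.

class SwitchASIC(object):
    """Common interface the application codes against."""
    def create_vlan(self, vlan_id):
        raise NotImplementedError
    def read_counter(self, port):
        raise NotImplementedError

class VendorA(SwitchASIC):
    def create_vlan(self, vlan_id):
        print("vendor A SDK call: add_vlan(%d)" % vlan_id)
    def read_counter(self, port):
        return {"port": port, "rx_bytes": 0}

class VendorB(SwitchASIC):
    def create_vlan(self, vlan_id):
        print("vendor B SDK call: vlan_table_insert(%d)" % vlan_id)
    def read_counter(self, port):
        return {"port": port, "rx_bytes": 0}

def provision(asic):
    # The caller never touches a vendor SDK directly.
    asic.create_vlan(600)
    print(asic.read_counter(48))

provision(VendorA())
provision(VendorB())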

OK, that is a decent list of key acronyms. Share some in the office with your teammates, or impress your friends at your next cocktail hour! In the meantime, stand by for more blogs to come on white boxes.

Feb
24

Establishing the Big Data Connection

Many network vendors will tell you that their network equipment is built for Big Data. However, once deployed, do you have enough Big Data context to effectively monitor, troubleshoot, triage and tune your network? In most cases the answer is no! When designing and deploying a network, administrators must consider whether the network will provide enough Big Data context.

Before we go any further, let’s define Big Data context.

Big Data context is the ability to correlate Big Data events and protocols back to network events and protocols, and to classify Big Data network flows correctly. To establish the Big Data connection, we’re going to discuss the requirements for a network to be in the class of networks that have Big Data context, how administrators can achieve this, and the role network programmability and agility play in this discussion.

Now let us see how we can build Big Data context and act on it.

Building Big Data Context
Network monitoring, tracing, visibility and reporting with Big Data context is accomplished with network equipment that can export flow statistics, counters and flow databases, and with open systems that classify such flows using Big Data heuristics. Pica8 can easily export flow statistics with sophisticated match rules, and since all of its solutions come prepackaged on open, Linux-based platforms built on Broadcom chipsets, those communities can be leveraged for best-of-breed flow classification applications that operate on flow data, statistics and counters.

Once we have built Big Data context, it becomes easier to tackle network programmability and agility so that network actions can be more proactive and adaptive and less reactive to Big Data events.

Network Programmability
Network programmability is a much-used but often misunderstood term; for example, programmability is NOT configuring the network with automation tools, as some people think. For organizations running Big Data workloads, network programmability is the ability to recognize Big Data flows and specify policies at different points in the network to enable:

  • Re-routing of flows
  • Rate-limiting or throttling of flows
  • Movement of flows permanently or temporarily via access control

Programming these tasks is easy to do with network controllers, such as OpenDaylight, and Pica8 switches. These can be deployed in different parts of your network and can quickly provision policies to adapt and react in real time to Big Data events, such as replication, node addition and node deletion.

Sample Use Case
Suppose you want to monitor the data flow between two vnodes in a RIAK cluster and move it to a less-preferred path if the data volume goes over a certain threshold (~1 GByte in this example). The sample code below, in Python using a RESTful API, implements this use case.

import json
import urllib   # Python 2 urllib, matching the original example


class ControllerClient:

    default_params = {'format': 'json', 'v': '1.0'}

    # connect to server - on localhost for illustration
    server = 'http://127.0.0.1:28546'

    def __init__(self, server=''):
        if server != '':
            self.server = server

    def getRequestURL(self, method, params={}):
        requestURL = self.server + '/' + method
        call_params = self.default_params.copy()
        call_params.update(params)
        requestURL += '?' + urllib.urlencode(call_params)
        return requestURL

    def sendAPIRequest(self, method, params={}):
        data = {'bytes': 0, 'flow_id': 0}
        url = self.getRequestURL(method, params)
        f = urllib.urlopen(url)
        data = json.load(f)
        return data


def main():
    client = ControllerClient()

    # read the byte and flow counters for the monitored flow
    data = client.sendAPIRequest('network/analytics/flow_counter_get',
                                 {'switch_id': 'sw_id',
                                  'eth_type': '0x800',
                                  'src_ip': '1.2.1.4',
                                  'dst_ip': '1.2.1.3',
                                  'dst_port': 5,
                                  'dst_vlan': 600})

    if data['bytes'] > 1000000000:
        flow_id = data['flow_id']
        # reroute the flow onto the less-preferred path
        data = client.sendAPIRequest('network/router/setflow',
                                     {'flow_id': flow_id,
                                      'eth_type': '0x800',
                                      'src_ip': '1.2.1.4',
                                      'dst_ip': '1.2.1.3',
                                      'dst_port': 6,
                                      'dst_vlan': 700})


if __name__ == '__main__':
    main()

For the curious, there are plenty of online resources for programmatically adding and editing flows using controllers such as OpenDaylight. One example for reference is https://github.com/fredhsu/odl-scripts/blob/master/python/addflow/odl-addflow.py, from which I quote:

def push_path(path, odlEdges, srcIP, dstIP, baseUrl):
    for i, node in enumerate(path[1:-1]):
        flowName = "fromIP" + srcIP[-1:] + "Po" + str(i)
        ingressEdge = find_edge(odlEdges, shortest_path[i], node)
        egressEdge = find_edge(odlEdges, node, shortest_path[i + 2])
        newFlow = build_flow_entry(flowName, ingressEdge, egressEdge, node, srcIP, dstIP)
        switchType = newFlow['node']['@type']
        postUrl = build_flow_url(baseUrl, 'default', switchType, node, flowName)
        # post the flow to the controller
        resp, content = post_dict(h, postUrl, newFlow)

def build_flow_entry(flowName, ingressEdge, egressEdge, node, srcIP, dstIP):
    # Alternatively I could add a second flow with 0x806 for ARP then 0x800 for IP
    defaultPriority = "500"
    newFlow = {"installInHw": "false"}
    ingressPort = ingressEdge['edge']['tailNodeConnector']['@id']
    egressPort = egressEdge['edge']['headNodeConnector']['@id']
    switchType = egressEdge['edge']['headNodeConnector']['node']['@type']
    newFlow.update({"name": flowName})
    newFlow.update({"node": ingressEdge['edge']['tailNodeConnector']['node']})
    newFlow.update({"ingressPort": ingressPort, "priority": defaultPriority})
    newFlow.update({"nwSrc": srcIP, "nwDst": dstIP})
    newFlow.update({"actions": "OUTPUT=" + egressPort})
    return newFlow

def post_dict(h, url, d):
    resp, content = h.request(
        uri=url,
        method='POST',
        headers={'Content-Type': 'application/json'},
        body=json.dumps(d),
    )
    return resp, content

Pica8 offers open systems designed to ease and accelerate network development and deployment for a new kind of smart, programmable, agile network. Customers can leverage an extensible ecosystem that helps them build a network programmability framework, along with sample applications to get started. This includes a Linux platform for network control plane and data plane equipment, with well-defined APIs to provision, configure and manage all crucial network elements, such as routing, switching and policy.

Network Agility and Elasticity
Maintaining network agility enables a business to seamlessly adapt and react to dynamic Big Data workloads. To accomplish this, these same businesses will deploy network equipment (such as routers, switches and links) with minimal manual intervention. How? Simple. By using routers and switches that have been virtualized or containerized, or hardware routers and switches that can be brought up easily without manual intervention.

With automated network programmability, once new network paths are created, further activation, port upgrades and remote switch provisioning can enable a topology to be changed on the fly to smoothly react to changes in the Big Data ecosystem. One can draw network equipment from pre-provisioned hardware or container/VM factories, with sizes and form factors for each segment of the network, whether it is ToR, aggregation/core or access. Conversely, old-world network vendors who supply closed systems can take years to develop new features.

Network administrators have for years been running static topologies rather than network topologies that are context-sensitive and hence dynamic. With SDN and solutions from vendors like Pica8, however, that need not be the case anymore. By building Big Data context and infusing network programmability, administrators now have the tools needed to maintain agility and resilience and, for once, be in charge of their own destiny.


Jan
12

Who doesn’t like automation?  If you’re speaking to somebody in IT, then the short answer is “nobody”.

While the term Zero Touch Provisioning (ZTP) might be increasingly common in networking, the concept of automation has existed for years in IT.  At its core, ZTP is an automation solution that’s designed to reduce errors and save time when an IT administrator needs to bring new infrastructure online.

This is particularly useful for data center servers, where scale and configuration similarities across systems make automation a necessity.  In the server world, the Linux-based operating system has revolutionized onboarding and provisioning.  Rather than using command-line interfaces (CLI) to configure these systems one at a time, administrators can use automation tools to roll out the operating system software, patches, and packages on new servers with a single command, or the click of a mouse.

Advanced scripting capabilities also allow administrators to tailor the boot configuration of these systems with profiles for specific applications.  So for example, if you need ten servers for a new Hadoop cluster, you can load this with one profile, but if you need six new servers for a new web application, you can roll that out using a different profile.

Essentially, automation drastically reduces the time from when you take a server out of the box to when it’s functioning in a production environment – all while minimizing the risks of manual configuration errors and missed keystrokes, or the additional challenge of knowing which driver or library is the correct one.

What about the network world?

The basic question here is why should it be any different?  Much like servers, network devices have traditionally been managed via the CLI.  What’s more, network administrators need to do this manually on each individual device.

Consider the typical onboarding and provisioning process for a network switch.  A network switch has traditionally been coupled with a pre-loaded proprietary network operating system.  Technicians must use the CLI or the manufacturer’s own tools to provision a switch.  This can be broken down into three basic steps:

  1. When the new device arrives, it already has an OS to help bootstrap the device.  It is removed from the box and goes to a staging area. Here the administrator checks the operating system version, and makes any updates – for patches, bug fixes, or any new feature updates as necessary.
  2. An initial configuration is made to establish basic network connectivity.  Parameters such as administrator and user authentication information, the management IP address and default gateway, basic network services (DHCP, NTP, etc.), and enabling the right L2 and L3 network protocols are all examples of the bootstrap process.
  3. Once the initial OS and configuration has been verified, the device can be installed into the environment (racked and cabled), where further customized configuration can be made (either locally via the console or through a remote access protocol) that is specific to the application and location within the network.

On Boarding a New Switch

The details may vary slightly for each environment, but the basics remain the same.  This can be a very time-consuming process.  Now extrapolate this model to ten network switches.  Or twenty.  Or one hundred.  And when you consider that for each individual switch, there’s an opportunity for a configuration error that can bring down the network or create exposure and a security risk, the conclusion is obvious: there has to be a better way.

How does ZTP help with this process for the network?  Remove all the manual configuration and steps listed above, and what you have left is ZTP.  In this model, the network administrator receives the new hardware and the first thing they do is to physically install the device – rack and cable the switch.   Once these physical connections are made, the technician no longer has to touch the box – hence the name, “zero touch”.

With the ZTP system in place, once the switch is powered on, it uses standard network protocols to fetch everything it needs for provisioning.  It can send a DHCP query to get the proper IP address for connectivity and management.  It can then use BootP/TFTP to get the right operating system image.  And then another TFTP request to get the right configuration file based on the application profile.
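The fetch sequence itself is simple enough to sketch. The toy Python below is purely illustrative: real ZTP agents differ in which DHCP options they read (options 66 and 67 are common carriers for the TFTP server and boot file), and the tftp_get helper and file names here are stand-ins rather than any vendor’s actual implementation.

# Illustrative only: a toy ZTP client flow. Real switch agents differ in
# which DHCP options they read and how they fetch files; tftp_get below
# is a stand-in helper, not a real library call.

def dhcp_discover():
    # Pretend response: in practice the switch's DHCP client fills these in.
    return {
        "ip_address": "10.0.0.21",
        "tftp_server": "10.0.0.5",      # DHCP option 66 (TFTP server)
        "bootfile": "picos-image.bin",  # DHCP option 67 (boot/image file), hypothetical name
        "config_file": "tor-21.conf",   # often carried in a vendor-specific option
    }

def tftp_get(server, filename):
    # Stand-in for a TFTP transfer; here we just report what would be fetched.
    print("TFTP GET %s from %s" % (filename, server))
    return b""

def ztp_boot():
    lease = dhcp_discover()                               # step 1: get IP plus pointers
    tftp_get(lease["tftp_server"], lease["bootfile"])     # step 2: fetch the OS image
    tftp_get(lease["tftp_server"], lease["config_file"])  # step 3: fetch the config profile
    print("switch %s provisioned with zero touch" % lease["ip_address"])

if __name__ == "__main__":
    ztp_boot()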

In this model, once the network administrator sets up the IP address scheme via the DHCP server, and the OS and configuration files on the TFTP server, they can effectively roll out tens, hundreds, and thousands of switches in this way – all fully customizable and without the time consuming and error prone manual configuration process.

Sounds like a no brainer right?  Now juxtapose this with some mega trends that are happening in the data center today.

The first of these is that the data center is increasingly becoming an application-driven economy, which is fueling data center growth and virtualization.  Bringing applications to market faster is the key to gaining a competitive advantage.  Therefore, the faster IT teams are able to bring infrastructure online to support these applications, the better.  With ZTP and server virtualization prevalent in the server world, it’s become extremely important to automate the network processes as well.  Ask any network administrator, and they clearly don’t want to be viewed as the long pole in the tent.

The second is bare-metal switching.  If the applications are driving the top line, then it’s the hardware that’s going to help with the bottom line.  Commoditization of network hardware is the next logical evolution, with the rapid adoption of merchant silicon.  More and more customers are seeing less differentiation in the hardware, and more differentiation in the speed, features, and operational simplicity that the software can provide. Today, three manufacturers (Big Switch, Cumulus, and Pica8) are offering Linux-based OSs for bare-metal switches – effectively bringing the efficiency and familiarity of Linux to the network world.

In the context of these trends, it’s even more important to implement ZTP and automation practices in the network.  As more applications come online, IT teams are being taxed to keep the infrastructure up to date – including provisioning, scaling, troubleshooting, and maintenance.  This is not sustainable with any manual process.

And as hardware and software continue to be decoupled, it’s critical to find a way to automate the new operational model.  If you can purchase hundreds of switches from an OEM or ODM and rack these devices, would you rather install the OS and configure each of them individually, or do it through an efficient methodology using well-known, reliable network protocols?

Much like the server world before it, the network world is seeing some significant technology shifts.  Automation, software defined devices, and bare metal switches are all contributing to a fast-paced and dynamic environment in the data center.  With ZTP, the network is leveraging best practices from the server world to drive greater speed and operational efficiency.

In short, it’s become an essential way to automate the network.  Now who wouldn’t like that?


Nov
24

Pica8 Says ‘Yes’ and Challenges the FUD

Up to this point, OpenFlow has mostly been deployed in research and higher-education environments.  These early trials have shed some light on interesting use cases, what OpenFlow is good for, and of course, what OpenFlow might not be so good for.

This is important because OpenFlow and SDN adoption is only going to grow.  It’s imperative that we understand these limitations – specifically, what’s real and what’s FUD.

One of these is scale.

If you’ve kicked the tires on OpenFlow, one question you may have heard is “How many flows does that switch support?”  However, this question is only part of the story.  It’s like asking only about a car’s top speed when you should be thinking about other things too – such as fuel efficiency and maintenance.  So to figure out the right questions, we first need to go over a bit of background.

In its most basic terms, any network traffic, whether it’s Layer 2, Layer 3, or something else, is governed by a set of forwarding rules as defined by a series of protocols.  If it’s this MAC, do this.  If it’s that IP, go there.  Each of these “rules” is stored on the switch, in memory, in something called the Forwarding Information Base (FIB) or Routing Information Base (RIB).

OpenFlow is a little different.  It allows for more point-to-point connections that you can base on business logic “rules.”  These rules, or flows, are implemented in a different way: if I see packet type x, perform action y.  It doesn’t have to follow the OSI networking model, and as such, gives users added flexibility to govern how traffic behaves. This works great for policy-based networking and driving business logic into the network.  But at its heart, it’s another networking protocol, and the basic concept is the same.  The key difference with this approach is that these flows are not stored in, say, the FIB, but in the switch’s Ternary Content Addressable Memory (TCAM).

Now here is where things get interesting.

Let’s look at a switch with the Broadcom Trident II ASIC. Pretty much every major switch vendor has a switch with this ASIC, including us, with both pre-loaded and bare metal switch options through our Hardware Ecosystem.  Trident II enables you to store up to two thousand entries in the TCAM.  Almost every other switch vendor has used this data point to drive the perception that OpenFlow will not scale.

Well, we at Pica8 agree – to an extent.  Two thousand flows are not enough: if you have 400 network nodes and each of those speaks to five other nodes, that already maxes out your TCAM.  So what did Pica8 do to solve this?  Two things:

  1. First, we made the TCAM table much more efficient.  Instead of treating this as a hard limit of two thousand flow entries, we chose to slice and dice that memory into smaller chunks.  Each flow entry is designed to hold a full packet header, but in many cases you don’t need to inspect the entire header to determine the right action.  Having three smaller tables can go a long way – one table for port, another for MAC, and another for signature can increase the total number of rules.  In some cases you might just need to match the port, or the destination IP, or the MAC, or a combination of the above.  Think of this as a 2,000-page book, with each page having just enough room for one packet header.  If your flow doesn’t need to match the entire header, you’ll have lots of whitespace; we’ve filled up every page to the margin.   In addition to that, we implemented the use of wildcards, aggregation, de-duplication and other techniques to optimize the table.  With all these enhancements, we’ve managed to effectively double the capacity of flows in the TCAM.  But that’s still not enough to make an appreciable difference, right?  So…
  2. Second, we attached the FIB table to the TCAM.  This is a capability that we have leveraged on the Trident II ASIC.  In this way, we’ve vastly expanded the number of entries that use a standard IP longest prefix match algorithm, while also freeing up even more space in the TCAM by eliminating the need for IP lookups.

Both of these innovations contribute to an OpenFlow implementation that supports over two hundred THOUSAND flows – all on the exact same hardware that the other guys use.  And this number makes a lot more sense as we expect more customers to roll out larger OpenFlow networks into production.
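To see how these two steps compound, here is a back-of-the-envelope sketch in Python. The FIB capacity and the 90% split used below are assumptions for the arithmetic, not published Trident II specifications; the point is simply that offloading plain destination-IP rules to the FIB leaves the TCAM for the flows that genuinely need full multi-field matching.

# Illustrative arithmetic only; the FIB capacity and split are assumptions, not ASIC specs.

TCAM_FLOW_ENTRIES = 2000        # the "two thousand flows" figure quoted above
FIB_ENTRIES = 200000            # assumed FIB capacity for this sketch

# The example from the post: 400 nodes, each talking to 5 peers.
nodes, peers_per_node = 400, 5
total_flows = nodes * peers_per_node
print("naive TCAM demand: %d of %d entries" % (total_flows, TCAM_FLOW_ENTRIES))

# Suppose 90% of those flows only need a destination-IP match; those can be
# pushed into the FIB (standard longest-prefix match), leaving the TCAM for
# flows that really need full multi-field matching.
ip_only = int(total_flows * 0.9)
complex_flows = total_flows - ip_only
print("after FIB offload: TCAM %d / %d entries, FIB %d / %d entries"
      % (complex_flows, TCAM_FLOW_ENTRIES, ip_only, FIB_ENTRIES))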

So, when you want to give OpenFlow a try – make sure you ask the right questions about the limitations you’ve been presented with.  You might be surprised at the answer.  To learn more about SDN scaling and Pica8, send us a note at sales@pica8.com. We would love to hear your comments.


Nov
10

CrossFlow Networking

When Worlds Colliding is Not Such a Bad Thing

If you’re a fan of the 90’s sitcom Seinfeld, you’re undoubtedly familiar with more than a few Seinfeld-isms – terms originated on the show that have made their way into our daily vernacular.  One such term, “worlds colliding,” describes a theory in which it’s best to keep your different worlds (as defined by social spheres, e.g. friends, family, colleagues, etc.) separate.


How does this relate to networking you ask?

Well let’s look at one world – Layer-2/Layer-3 networks.  These are the networks that people have been building for decades.  They consist of switches and routers and leverage protocols and technologies that networking gurus are familiar with such as Ethernet, VLANs, trunking, BGP, OSPF and more.   These protocols govern how traffic is forwarded and are built upon the 7-layer OSI model.  And because this model is (relatively) mature, there’s an inherent reliability, and a clear understanding of how these networks are built, how they work, and how they are maintained.

Then there’s the second world – the world of SDN and, in this example, OpenFlow.  With OpenFlow, you can do some interesting things, such as using a centralized controller to create rules and policies that dictate where traffic needs to go.  In theory, this approach is more flexible and dynamic, and gives users the ability to drive business logic into the network.  If you want to trigger traffic monitoring, network tapping, or bandwidth calendaring based on users, times, or geographies, you can do that with OpenFlow.


The problem today is that these worlds remain separate.  And this creates added costs and complexity for users because of the necessity to build, operate, maintain, and troubleshoot separate networks.  Wouldn’t it be better if you could have the flexibility of OpenFlow for policy-based networking with the efficiency of Layer-2/Layer-3 for traffic forwarding?  Enter CrossFlow Networking.

CrossFlow Networking is a unique capability delivered on PicOS 2.4.  It allows these worlds to “collide” (in a good way).  With CrossFlow, users can selectively integrate OpenFlow into certain parts of their network for specific applications, while maintaining the efficiency and performance of the tried and true Layer-2/Layer-3 protocols.

How does this work?  OpenFlow allows users to stitch in a unique path for a specific application.  We do this by allowing OpenFlow to fine-tune or override the switching (FIB) or routing (RIB) tables in the switch.  These tables are “wired” by how Layer-2 and Layer-3 protocols converge to a best path for the traffic.  In some cases, that path may not be ideal for the application (for example, you may want a specific application to access data or a network service that resides somewhere else in the network).  One possible solution would be to adjust the switching and routing topology to get the desired behavior, but that takes time and is disruptive. CrossFlow Networking solves this by allowing an OpenFlow rule to trigger specific behavior, and then modifying the packet appropriately to use the existing FIB and RIB tables.  This gives users granular control to allow a specific policy to change behavior without disrupting the topology of the existing network.
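As a thought experiment, the policy side of this can be sketched in a few lines of Python. The rule structure and field values below are hypothetical (this is not the PicOS API or the OpenFlow wire format); it simply illustrates the intent of CrossFlow: one narrow OpenFlow-style match steers a specific application, and everything else falls through to the normal Layer-2/Layer-3 tables.

# Hypothetical rule structure and example values, for illustration only;
# not the PicOS API or the OpenFlow wire format.

openflow_rules = [
    {
        "match": {"eth_type": 0x0800, "ip_proto": 6, "tcp_dst": 9200},
        "action": {"output_port": 48},   # steer this one app toward a tap/service
        "priority": 100,
    },
]

def forward(packet):
    """Check the OpenFlow rules first; otherwise fall back to normal L2/L3 lookup."""
    for rule in sorted(openflow_rules, key=lambda r: -r["priority"]):
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]
    return {"lookup": "FIB/RIB"}          # untouched traffic uses the usual tables

print(forward({"eth_type": 0x0800, "ip_proto": 6, "tcp_dst": 9200}))  # steered
print(forward({"eth_type": 0x0800, "ip_proto": 17, "udp_dst": 53}))   # normal path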


Ultimately, CrossFlow Networking simplifies the process of integrating SDN into today’s networks.  It bridges the operational gap between traditional networking and SDN, while also reducing CapEx for customers.

To borrow one more Seinfeld-ism, with CrossFlow, network operators can achieve a little bit more “Serenity Now”.

To learn more about CrossFlow Networking and Pica8, send us a note at sales@pica8.com. We would love to hear your comments.

 

 
