[SCRAP] 2017: The year of widespread SDN adoption and DDoS attack mitigation

Earlier this year, IDC published a study predicting a 53.9% CAGR (compound annual growth rate) for the SDN market from 2014 to 2020, by which point it will be valued at $12.5 billion. Working backward from that figure and the CAGR puts the current valuation of the SDN market at roughly $610 million. That’s nothing to scoff at, but it suggests that SDN deployments are still fairly uncommon. That will change in 2017.

A universal SDN protocol or standard will accelerate adoption

As SDN proliferates, more potential standards will present themselves, but we could all save a lot of time if we could agree on an interoperable protocol early on. That agreement will come in 2017 and be the catalyst for the rate of growth IDC has predicted. The merger of the Open Networking Foundation and ON.Lab (under the ONF name) makes ONF a candidate to be the standard, but there’s some competition, as well.

ECOMP is an open source project from AT&T that the company believes will rapidly accelerate innovation in the SDN space. AT&T is tapping the Linux Foundation to help with the structure of the initiative and has implied that anyone using ECOMP won’t be limited to AT&T for support. If the project catches on, it could end up as the standard for SDN. 2017 will be the year the SDN community gets behind a single protocol or standard, allowing for dramatic growth in the years to follow.

The concrete, fully-deployed SDN use case we’ve been waiting for will appear

Everyone knows that SDN works, but it’s the lack of a concrete, fully-deployed use case that’s impeding its proliferation. Companies are waiting for a real-life deployment they can point to as a success. In 2017 SDN will move from PoC (proof of concept) trials to having notable commercial use cases that will accelerate adoption. Furthermore, a testing environment will reveal that SDN quantifiably reduces costs and increases revenue, which will spur adoption on a commercial scale, widening the scope of PoC trials as well as inspiring some cutting edge companies to dive into full adoption.

Network operators are having trouble rolling out SDN services at scale, but there’s a new alliance on the prowl that offers global telecommunications services to other carriers. Ngena is an alliance of four major operators (CenturyLink, Deutsche Telekom, Reliance, and SK Telecom) whose global network might push SDN into the quickly-deployed, immediately-profitable territory it needs to be in. Ngena offers a faster way to roll out SDN services and lets gigantic companies purchase all their telecommunications services from a single source. The appearance of this alliance might be the final strike to break the barrier impeding widespread adoption.

DDoS attacks will be increasingly focused on DNS vendors

According to researchers with NexusGuard, there was an 83% increase in DDoS attacks in the second quarter of 2016 compared to the first quarter. That trend will continue, and since DNS is gaining favor as a primary attack target, DDoS attacks will increasingly target DNS vendors. Look for a serious uptick in DDoS attacks against DNS vendors in 2017.

Hackers use weaknesses in DNS itself to build botnets like the Mirai botnet. An effective way to hide malware behavior is to compromise DNS, so directly targeting a DNS vendor makes sense. Compromising a DNS vendor also lets an attacker affect a huge number of properties at once. The attack against Dyn, for example, took Twitter, Netflix, Slack, and dozens of other services offline. Companies need to take a look at the properties that are crucial to their business and start demanding security improvements.

Recent DDoS attacks were a warmup for the 2017 global DDoS attack

In the first half of 2016, there were 274 attacks over 100 Gbps, compared to 223 attacks of that size in all of 2015. As for attacks over 200 Gbps: there were 46 in the first half of 2016 versus 16 in all of 2015. The average DDoS attack size also increased 30% over the same period. A 1 Gbps DDoS attack is enough to take most organizations completely offline.

With those numbers in mind, it’s clear that DDoS attacks are ramping up for something big, perhaps an attempt to take the entire public internet offline. Efforts to protect DNS vendors are in the works, but don’t measure up favorably to the problem, so far. DNS security won’t be strong enough by the time the global DDoS attack hits in 2017.

Security companies will use SDN to secure networks after severe botnet attack

Security is an ongoing concern for network managers and SDN can be used to make networks more compartmentalized and centrally manageable. SDN firewalls have the ability to see and filter internal traffic and the firewall policy can be defined centrally, allowing for better visibility into — and control over — the network. Micro-segmentation of the network through SDN allows portions of the network to be automatically isolated if certain red flags are raised.
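As a hedged illustration of the micro-segmentation idea, the sketch below keeps the segment policy in one central table and decides per flow whether to forward or drop, with a quarantine hook for red flags. All hosts, segment names, and rules here are invented for illustration, not taken from any real deployment or controller API.

```python
# Minimal sketch of SDN micro-segmentation: a central policy maps hosts
# to segments, and the controller only permits flows that stay within a
# segment or cross segments on an explicit allow-list.

SEGMENTS = {
    "10.0.1.10": "web",
    "10.0.1.11": "web",
    "10.0.2.20": "db",
}

# Cross-segment flows that the centrally defined firewall policy permits.
ALLOWED_CROSS = {("web", "db")}

def decide(src_ip: str, dst_ip: str) -> str:
    """Return 'forward' or 'drop' for a new flow, per central policy."""
    src_seg = SEGMENTS.get(src_ip)
    dst_seg = SEGMENTS.get(dst_ip)
    if src_seg is None or dst_seg is None:
        return "drop"                      # unknown host: isolate by default
    if src_seg == dst_seg:
        return "forward"                   # intra-segment traffic
    return "forward" if (src_seg, dst_seg) in ALLOWED_CROSS else "drop"

def quarantine(ip: str) -> None:
    """Red flag raised: move the host into its own isolated segment."""
    SEGMENTS[ip] = f"quarantine-{ip}"
```

Because the policy lives in one place, isolating a suspicious host is a single table update rather than a reconfiguration of every switch it touches.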

Endpoints are ubiquitous now. Instead of trying to protect a nonexistent perimeter with a traditional firewall setup, network managers need to reconsider their approach by using SDN. With SDN they can define a particular set of behaviors for each application. Expect to see more of this approach after the global DDoS attack that will happen in 2017.



Resource: http://www.infoworld.com/article/3156344/internet/2017-widespread-sdn-adoption-and-ddos-attack-mitigation.html

Network Management Introduction (Second)

Network performance and traffic analysis supports failure analysis and prevention, which improves reliability, and its results serve as baseline data for improving network processing speed, adjusting network bandwidth, and planning network expansion.


Main Features


1. Support for various traffic collection targets and traffic tracking management

It supports a wide range of traffic collection targets, from Flow-capable network equipment to all versions of SNMP and various Flow formats. By cutting off unneeded traffic and providing a five-step in-depth traffic tracing function, it reduces unnecessary bandwidth expansion costs and keeps the network running smoothly during critical business hours.



2. Traffic management for DoS/DDoS and other attacks, and abnormal flow detection

Abnormal traffic can be viewed as a list or as a distribution. From the list you can verify the target and type of each abnormal flow, along with the victim, flow, packet, and byte counts and the time, and use that information to plan detailed countermeasures.



3. Traffic pattern analysis per IP and service

You can analyze Top N, byte, BPS, and PPS figures and their history per interface and per transmitting IP address, and compare Top N, history, and increases/decreases per service.



4. Detailed analysis of protocol and conversation traffic

You can analyze Top N, byte, packet, BPS, and PPS figures and their history per transmitting interface, per protocol, and per conversation, and compare Top N and history per protocol.



5. Detailed end-to-end analysis per traffic flow

End-to-end traffic analysis improves reliability by identifying the causes of failures and preventing them, and serves as baseline data for improving network processing speed, adjusting network bandwidth (it provides the basis for QoS settings), and establishing a network expansion plan.
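The Top-N analysis in feature 3 can be sketched as a simple aggregation over flow records; the records, field names, and totals below are made up for illustration and don’t reflect any particular product’s data model.

```python
from collections import defaultdict

# Hypothetical flow records as a traffic collector might export them.
flows = [
    {"src": "10.0.0.1", "service": "http",  "bytes": 5000, "packets": 40},
    {"src": "10.0.0.2", "service": "dns",   "bytes": 300,  "packets": 3},
    {"src": "10.0.0.1", "service": "https", "bytes": 7000, "packets": 55},
    {"src": "10.0.0.3", "service": "http",  "bytes": 1200, "packets": 10},
]

def top_n(records, key, metric="bytes", n=3):
    """Sum `metric` grouped by `key` and return the top-n (key, total) pairs."""
    totals = defaultdict(int)
    for r in records:
        totals[r[key]] += r[metric]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

top_talkers = top_n(flows, "src")            # heaviest source IPs by bytes
top_services = top_n(flows, "service", n=1)  # heaviest service by bytes
```

The same grouping works per interface, per protocol, or per conversation; only the `key` changes.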



※ Please contact me by email if you want to get more information

The following news will be updated in June

Thank you

Let’s find out about OpenFlow

1) Overview

The OpenFlow architecture consists of three basic concepts. (1) The network is built up by OpenFlow-compliant switches that compose the data plane; (2) the control plane consists of one or more OpenFlow controllers; (3) a secure control channel connects the switches with the control plane. In the following, we discuss OpenFlow switches and controllers and the interactions among them.

An OpenFlow-compliant switch is a basic forwarding device that forwards packets according to its flow table. This table holds a set of flow table entries, each of which consists of match fields, counters and instructions, as illustrated in Figure 2. Flow table entries are also called flow rules or flow entries.

2) OpenFlow 1.0 (First Version)

The OpenFlow 1.0 specification was released in December 2009. As of this writing, it is the most commonly deployed version of OpenFlow. Ethernet and IP packets can be matched based on the source and destination address. In addition, the Ethernet type and VLAN fields can be matched for Ethernet, while the Differentiated Services (DS), Explicit Congestion Notification (ECN), and protocol fields can be matched for IP. Moreover, matching on TCP or UDP source and destination port numbers is possible.

Figure 3 illustrates the packet handling mechanism of OpenFlow 1.0 as described in Section 3.1. The OpenFlow standard exactly specifies the packet parsing and matching algorithm. The packet matching algorithm starts with a comparison of the Ethernet and VLAN fields and continues if necessary with IP header fields. If the IP type signals TCP or UDP, the corresponding transport layer header fields are considered.

Several actions can be set per flow. The most important action is the forwarding action. This action forwards the packet to a specific port or floods it to all ports. In addition, the controller can instruct the switch to encapsulate all packets of a flow and send them to the controller. An action to drop packets is also available. This action enables the implementation of network access control with OpenFlow. Another action allows modifying the header fields of the packet, e.g., modification of the VLAN tag, IP source, destination addresses, etc.
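The match-then-act behavior described above can be modeled in a few lines. This is a toy model only: packets are plain dicts, the table entries are invented, and real switches parse headers and prioritize entries per the specification rather than scanning in list order.

```python
# Toy model of OpenFlow 1.0 packet handling: scan the flow table for the
# first matching entry, update its counters, and apply its action; a
# table miss sends the packet to the controller.

table = [
    # (match fields, action): a match succeeds when every listed field agrees.
    ({"ip_dst": "10.0.0.5", "tcp_dst": 80}, ("output", 2)),
    ({"eth_type": 0x0806},                  ("flood", None)),   # ARP
    ({"ip_dst": "10.9.9.9"},                ("drop", None)),    # access control
]

counters = [[0, 0] for _ in table]          # per-entry [packets, bytes]

def handle_packet(pkt: dict) -> tuple:
    """Return the action applied to `pkt` and update the matched counters."""
    for i, (match, action) in enumerate(table):
        if all(pkt.get(k) == v for k, v in match.items()):
            counters[i][0] += 1
            counters[i][1] += pkt.get("len", 0)
            return action
    return ("to_controller", None)          # table miss
```

Note how the drop entry implements access control and the wildcard semantics (only listed fields must agree) fall out of the match loop.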

Statistics can be gathered using various counters in the switch. They may be queried by the controller. It can query table statistics that contain the number of active entries and processed packets. Statistics about flows are stored per flow inside the flow table entries. In addition, statistics per port and per queue are also available.

OpenFlow 1.0 provides basic quality of service (QoS) support using queues, though it only supports minimum-rate queues. An OpenFlow-compliant switch can contain one or more queues, and each queue is attached to a port. An OpenFlow controller can query the information about the queues of a switch.

3) OpenFlow 1.4 (Newest Version)

OpenFlow 1.4 was released in October 2013. The ONF improved the support for the OpenFlow Extensible Match (OXM). TLV structures for ports, tables and queues are added to the protocol, and hard-coded parts from earlier specifications are now replaced by the new TLV structures. The configuration of optical ports is now possible. In addition, controllers can send control messages in a single message bundle to switches. Minor improvements of group tables, flow eviction on full tables and monitoring features are also included.

Resource: Future Internet (Wolfgang Braun and Michael Menth)

The Knowledge Plane: Five Reasoning Components

  1. Ontology

An ontology not only allows for syntax and semantics of the information, but enables or constrains the scope of reasoning that can be performed on the entities defined by the ontology. An ontology in the KP must support extensibility, locally independent definition, some reasonable amount of convergence, and global discovery when needed. Our current ontology language of choice is OWL although it is not the only possibility.


  2. Function library and definitions

There are two reasons that a library or catalog of network management tool definitions and implementations is important to the architecture. First, because the management target is any part of the broad network where management is desired, tools may be needed in a wide variety of locations. Perhaps even more importantly, in order for the KP to improve and evolve, it will be important to incorporate new tools with new capabilities into existing toolkits. This will require both a definition of each tool and implementations. If each inclusion of a new capability needs to be handled through manual intervention, improved and evolving behaviors are unlikely to succeed.


  3. Probabilistic programming

The significant majority of computation that will occur in the KP will be statistical or probabilistic. Lee in his thesis took a preliminary step in specifying probabilistic knowledge. Beverly in his thesis concentrated exclusively on statistical analysis of network information, because essentially all information that is collected from measuring and monitoring is sampled, incomplete, only partially accessible, intentionally incorrect, or some combination of these. The information as a whole is statistical.


  4. Agent system



  5. Reasoning organization framework

One of the core challenges in a design that requires as much distribution, coordination, extensibility, and policy control as the KP is that the questions of (1) how to decompose functionality in order to distribute it, (2) how to re-organize functionality under changing conditions, and (3) how to understand the effectiveness of an organization or re-organization of functionality, will require at least automated assistance for the human programmer or network manager. We suspect that integration across many factors, locations, and policies is not something that human intelligence is best suited for. Humans will definitely be the sources of policy definitions and choices, and they may oversee and supervise the organization and re-organization of functionality.


Resource: MIT Computer Science and Artificial Intelligence Laboratory


Due to the inflexible, closed software installed in today’s common home gateways, it is extremely challenging to introduce new features into the home network. SDN makes it much easier to introduce new functions, such as traffic management, into these environments. It is possible to combine BISmark’s measurement data and Procera to build a management system that reacts to various conditions of the home network.

For example, one possible direction is to perform reactive traffic shaping at the home gateway based on performance measurement results of the home network.
Another example is proactively prefetching and caching content from the Internet into the home gateway, even before the last mile.

Following the SDN paradigm, enabling a central controller to make various kinds of traffic engineering decisions, and pushing rules to home gateways to enforce those policies, greatly increases the flexibility of home network management.
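A hedged sketch of the reactive-shaping example: a controller consumes periodic measurement reports from the home gateway (in the spirit of BISmark) and emits a rate-limit rule when the uplink nears saturation. The report fields, thresholds, and rule format are assumptions for illustration, not the actual BISmark or Procera interfaces.

```python
# Reactive traffic shaping at the home gateway, controller side:
# decide from a measurement report whether to push a shaping rule.

UPLINK_CAPACITY_KBPS = 5000   # assumed uplink capacity of the home link

def shaping_decision(report):
    """Given {'uplink_kbps': ..., 'top_talker': ip}, return a rule or None."""
    if report["uplink_kbps"] > 0.9 * UPLINK_CAPACITY_KBPS:
        # Uplink near saturation: cap the heaviest sender to a fraction
        # of capacity so interactive traffic stays responsive.
        return {"match": {"src": report["top_talker"]},
                "action": "rate_limit",
                "rate_kbps": UPLINK_CAPACITY_KBPS // 4}
    return None   # headroom available: no rule pushed

rule = shaping_decision({"uplink_kbps": 4800, "top_talker": "192.168.1.20"})
```

In a real deployment the returned rule would be translated into flow-table entries and pushed to the gateway by the controller.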

※ Figures: Network Controller Architecture; Procera Architecture

Resource: Improving Network Management with Software Defined Networking(Hyojoon Kim and Nick Feamster, Georgia Institute of Technology)


What Is Network Virtualization?

SDN decouples the software that controls the network from the underlying forwarding elements. But it does not decouple the forwarding logic from the underlying physical network topology. This means that a program that implements shortest-path routing must maintain a complete representation of the topology and it must recompute paths whenever the topology changes. To address this issue, some SDN controllers now provide primitives for writing applications in terms of virtual network elements. Decoupling programs from topology also creates opportunities for making SDN applications more scalable and fault tolerant.

SDN Virtualization

1) Access control

Access control is typically implemented by encoding information such as MAC or IP addresses into configurations. Unfortunately, this means that topology changes, such as a host moving from one location to another, can undermine security. If access control lists are instead configured in terms of a virtual switch that is connected to each host, then the policy remains stable even if the topology changes.
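The contrast above can be made concrete with a small sketch: the ACL is written once against a host’s fixed virtual port, so a change in the host’s physical location never touches the policy. All names, ports, and rules here are illustrative.

```python
# Physical view: host -> current (switch, port); this changes when hosts move.
physical_location = {"hostA": ("sw1", 3)}

# Virtual view: each host keeps a fixed port on a virtual switch.
virtual_port = {"hostA": "vport-hostA"}

# Access control expressed at the virtual level, defined once.
acl = {"vport-hostA": {"allow_tcp_dst": {22, 443}}}

def allowed(host, tcp_dst):
    """Check the host's traffic against the ACL on its virtual port."""
    rules = acl.get(virtual_port[host], {})
    return tcp_dst in rules.get("allow_tcp_dst", set())

# The host moves to a different physical switch; the ACL is untouched.
physical_location["hostA"] = ("sw7", 1)
```

Had the ACL been keyed on `("sw1", 3)`, the move would have silently stopped enforcing it.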


2) Multi-tenant datacenter

In datacenters, one often wants to allow multiple tenants to impose different policies on devices in a shared physical network. However, overlapping addresses and services (Ethernet vs. IP) lead to complicated forwarding tables, and it is hard to guarantee that traffic generated by one tenant will be isolated from other tenants. Using virtual switches, each tenant can be provided with a virtual network that they can configure however they like without interfering with other tenants.


3) Scale-out router

In large networks, it can be necessary to make a collection of physical switches behave like a single logical switch. For example, a large set of low-cost commodity switches could be assembled into a single carrier-grade router. Besides simplifying the forwarding logic for individual applications, this approach can also be used to obtain scalability: because such a router only exists at the logical level, it can be dynamically augmented with additional physical switches as needed.


4) Virtualization Abstractions.

To define a virtual network, the programmer specifies a mapping between the elements in the logical network and the elements in the physical network. For example, to create a single “big switch” out of an arbitrary topology, they would map all of the switches in the physical network onto the single virtual switch and hide all internal links.
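The “big switch” mapping can be sketched as a function from host-facing physical ports to ports of one logical switch, with inter-switch links simply absent from the logical view. The topology and naming scheme below are made up.

```python
# Physical topology: host-facing ports per switch (internal links omitted,
# since they are hidden at the logical level).
edge_ports = {
    "sw1": ["sw1-p1", "sw1-p2"],
    "sw2": ["sw2-p1"],
}

def build_big_switch(edge_ports):
    """Map every host-facing physical port onto a port of one virtual switch."""
    mapping = {}
    logical = []
    for sw, ports in sorted(edge_ports.items()):
        for p in ports:
            lp = f"big-p{len(logical) + 1}"
            logical.append(lp)
            mapping[(sw, p)] = lp        # physical port -> logical port
    return {"ports": logical, "map": mapping}

big = build_big_switch(edge_ports)
```

Adding a physical switch to grow capacity is just another entry in `edge_ports`; the logical switch the applications program against is unchanged, which is where the scalability claim comes from.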


5) Virtualization Mechanisms.

Virtualization abstractions are easy to describe, but their implementations are far from simple. Platforms such as NSX are based on a controller hypervisor that maps events and control messages at the logical level down to the physical level, and vice versa. To streamline the bookkeeping needed to implement virtualization, most platforms stamp incoming packets with a tag (e.g., a VLAN tag or MPLS label) that explicitly associates them with one or more virtual networks.
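The tagging mechanism reduces the hypervisor’s per-packet bookkeeping to a table lookup, which a few lines can illustrate. A VLAN tag stands in for whatever encapsulation a real platform uses; the tag values and tenant names are invented.

```python
# Tag -> virtual network: the stamp on a packet selects which virtual
# network's logical rules should process it.
vnet_by_tag = {100: "tenant-red", 200: "tenant-blue"}

def stamp(pkt, tag):
    """Stamp an incoming packet with a tag (here, a VLAN tag stand-in)."""
    pkt = dict(pkt)          # copy so the original packet is untouched
    pkt["vlan"] = tag
    return pkt

def classify(pkt):
    """Return the virtual network a stamped packet belongs to."""
    return vnet_by_tag.get(pkt.get("vlan"), "unassigned")
```

This is also what keeps tenants isolated in the multi-tenant case: a packet stamped for one tenant can only ever be matched against that tenant’s logical rules.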



Current implementations of SDN virtualization provide the same programming interface at the logical and physical levels, eliding resources such as link capacities, queues, and local switch capacity. Another question is how to combine virtualization with other abstractions such as consistent updates.

Resource: Abstractions for Software-Defined Networks


Network Management Introduction (First)

Systematic management that collects and analyzes network operational information in real time improves processing speed, while recording and managing failure information enables rapid recovery when failures occur and establishes a failure prevention system.


Main Features

1. Prevent failures by managing the performance of network equipment

You can detect the warning signs that precede failures by monitoring, in real time, traffic-related information such as traffic usage, utilization rate, and response time, together with performance-related information such as CPU, memory, and buffer usage, to get a comprehensive view of operational status. You can also verify various data through user-defined MIB information, depending on the purpose of management.


2. Provide traffic history information per node/interface

You can check traffic flows per node or port, and collected data can be viewed per day, week, month, or year, or for a specific period. By analyzing data over a long period of time, you can see network usage trends and find the cause of bottlenecks that occur at specific times.


3. Predicting traffic for specific equipment over time

Managers can analyze how the overall traffic of the network is changing and which equipment or interface is generating heavy traffic at a specific time and overloading the network; furthermore, the traffic situation can be analyzed per packet size.


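The threshold-based early warning described in feature 1 can be sketched as a check of sampled metrics against limits. The metric names, thresholds, and sample values below are assumptions for illustration, not the product’s actual configuration.

```python
# Flag the monitored metrics that have crossed their warning thresholds.
THRESHOLDS = {"cpu_pct": 85, "mem_pct": 90, "response_ms": 500}

def warnings(sample):
    """Return, sorted, the metrics in `sample` that exceed their limits."""
    return sorted(k for k, limit in THRESHOLDS.items()
                  if sample.get(k, 0) > limit)

# One sampled reading from a monitored device.
alerts = warnings({"cpu_pct": 91, "mem_pct": 40, "response_ms": 620})
```

A real system would trend these samples over time rather than alert on single readings, but the per-sample check is the core of detecting failure precursors.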


※ Please contact me by email if you want to get more information

The following news will be updated in May

Thank you

[SCRAP] Software-defined networking: how to examine and determine how likely is it that a Software-defined networking customer would recommend our company to a friend or colleague

About Software-defined networking: Software-defined networking numbers show that in the Software-Defined Storage professional areas there is vibrant interest in Software-defined networking:
– Interest and popularity (100 is peak interest): 17
– Employment demand (current open vacancies asking for this qualification): 595
– Active practitioners (current number of Software-defined networking professionals active): 13,158
– Patents […]

via Software-defined networking: how to examine and determine how likely is it that a Software-defined networking customer would recommend our company to a friend or colleague — Autoscaling

[SCRAP] Report: Evolving SDN: Tackling challenges for web-scale deployments

Our library of 1700 research reports is available only to our subscribers. We occasionally release ones for our larger audience to benefit from. This is one such report. If you would like access to our entire library, please subscribe here. Subscribers will have access to our 2017 editorial calendar, archived reports and video coverage from our 2016 and 2017 events. Evolving SDN: […]

via Report: Evolving SDN: Tackling challenges for web-scale deployments — Grouvy Today

Introduction to the WITO Service

1. Service Capacity

1-1 Best-in-class manpower in the integrated IT operations management field

1-2 Establishment and operation of the best integrated operation management system in Korea

1-3 Performance and management methodology verified through projects (WatchAll methodology)

1-4 Large-scale operation and management know-how


2. WITO Service (Watchtek Information Technology Outsourcing)

2-1 IT Service management

– Establishment and application of ITIL-based standard operating procedures
– Improvement of service quality by setting SLA indicators and continuously improving them
– Stable operation support and service history management through operation of a service desk

2-2 Integrated operation management

– Operation, maintenance, failure handling, history management, and operating-personnel support for systems and networks
– Preemptive failure handling through regular and ad-hoc preventive inspections
– Improved operational reliability through standardized management processes and methodology

2-3 Integrated control(monitoring)

– Integrated monitoring with best-in-class products such as SMS, NMS, and FMS
– Performance and storage management, regular inspections, and reporting through automated systems
– Analysis and improvement through accumulated operational data




※ Please contact me by email if you want to get more information


The following news will be updated in April.

Thank you