Artificial intelligence (AI) applications are quickly finding their way into everyday life – whether it’s traffic data for Waze maps, sensor data from self-driving cars, or Netflix entertainment recommendations. All of these apps generate enormous volumes of data that must be collected and processed in real time. Networks built as recently as 10 years ago weren’t designed to collect, route, and process this much data at real-time speeds. A typical network was a web of hardware and cabling offering one-size-fits-all bandwidth and throughput, far too cumbersome to handle today’s AI and machine learning applications.

Not surprisingly, modern networking is based on a very different design. Innovations in software-defined technologies allow for scale-out infrastructures that can be added incrementally to meet business needs. Let’s take a look at why and how.

AI: Not Your Grandfather’s Workload

Before AI and machine learning applications became commonplace, most network traffic came from conventional application workloads: SQL and other structured databases, plus everyday office applications. Businesses handled data processing, the compute component, on-premises in data centers, and managed storage both onsite and offsite through a mix of disk and tape libraries. Companies gleaned business intelligence by funneling data into a data warehouse and running a batch analysis or data mining program against the entire data set. The result was static data, time-consuming analysis, and information that was outdated the moment it was published.

Now, businesses process “Big Data” using parallel processing technologies such as Hadoop, one of the first examples of a scale-out application. Compute capacity is available on the fly, and processing can run on-premises, in rented public cloud capacity, or in a hybrid of the two environments.
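To make “scale-out” concrete, here is a minimal sketch of the map-and-reduce pattern that Hadoop popularized, using a local Python process pool to stand in for a cluster of nodes; the sample data shards are invented for illustration.

```python
# Minimal sketch of the MapReduce pattern Hadoop popularized,
# using a local process pool to stand in for cluster nodes.
from collections import Counter
from multiprocessing import Pool

def map_phase(chunk: str) -> Counter:
    # Each "node" counts words in its own shard of the data.
    return Counter(chunk.split())

def reduce_phase(partials: list) -> Counter:
    # Merge the per-shard counts into one result.
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

if __name__ == "__main__":
    # Hypothetical data set, pre-split into shards as a cluster would do.
    shards = ["big data big compute", "scale out scale up", "big scale"]
    with Pool() as pool:
        partials = pool.map(map_phase, shards)
    print(reduce_phase(partials))  # Counter({'big': 3, 'scale': 3, ...})
```

Adding capacity means adding shards and workers, not replacing the machine, which is exactly the property scale-out infrastructure is built around.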

Data for AI and machine learning is a specialized segment of Big Data. These workloads are extremely data intensive and often require real-time transit and processing. Think of internet-of-things (IoT) sensors that collect dozens of data points per minute: real-time analysis catches any reading that meets or exceeds a threshold and transmits the anomaly for immediate action. To handle the speeds and unpredictable data volumes, processing is generally routed to a hybrid cloud environment set up to support the need for agility and shared resources. The communication, or networking, between servers and storage is mission critical for many of these AI and machine learning applications.
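The threshold pattern described above is simple to sketch. The threshold value and the sensor reading format below are illustrative assumptions, not from any particular product.

```python
# Minimal sketch of threshold-based anomaly detection on a sensor stream.
from typing import Iterable, Iterator

THRESHOLD = 85.0  # hypothetical ceiling, e.g. degrees Celsius

def detect_anomalies(readings: Iterable[dict]) -> Iterator[dict]:
    """Yield only readings that meet or exceed the threshold, so
    downstream systems act on anomalies instead of raw volume."""
    for reading in readings:
        if reading["value"] >= THRESHOLD:
            yield reading  # transmit for immediate action

# Usage: a simulated stream from a hypothetical sensor.
stream = [
    {"sensor": "pump-7", "value": 72.4},
    {"sensor": "pump-7", "value": 91.2},  # anomaly
    {"sensor": "pump-7", "value": 68.0},
]
for anomaly in detect_anomalies(stream):
    print("alert:", anomaly)
```

The logic is trivial; the hard part is the volume and the deadline, which is why the network path between sensors, servers, and storage matters so much.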

Scale-Out, Hyperconverged Capabilities Extend Beyond the Compute Layer

Until now, scale-out innovation happened more slowly on the storage and networking sides than on the compute side, but storage has been catching up. Converged and hyperconverged storage systems unify data processing and data accessibility. HPE SimpliVity powered by Intel® is a good example: a simply scalable technology that converges compute, storage, firmware, hypervisor, and data virtualization software into a single, integrated node. The HPE hyperconverged solution also offers deduplication, compression, and data protection in one system.
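For readers new to deduplication and compression, here is a minimal sketch of the general technique of content-addressed, block-level dedupe with inline compression; the block size and in-memory store are illustrative assumptions, not how any specific HPE product is implemented.

```python
# Minimal sketch of block-level deduplication with inline compression.
import hashlib
import zlib

BLOCK_SIZE = 4096          # illustrative block size
store = {}                 # content hash -> compressed block

def write(data: bytes) -> list:
    """Split data into blocks; store each unique block once,
    compressed, and return the list of block fingerprints."""
    fingerprints = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:                   # deduplicate
            store[digest] = zlib.compress(block)  # compress
        fingerprints.append(digest)
    return fingerprints

def read(fingerprints: list) -> bytes:
    return b"".join(zlib.decompress(store[d]) for d in fingerprints)

# Usage: writing the same data twice stores no new blocks.
payload = b"hello world" * 1000
fp1 = write(payload)
fp2 = write(payload)
assert read(fp1) == payload and fp1 == fp2
```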

Now it’s networking’s turn. While hyperconverged architecture pushed the envelope for both servers and storage, networking lagged behind its infrastructure brethren, waiting for an opportune moment in technology innovation: the intersection of workload-aware software and software-defined infrastructure.

Through tight software integration, HPE Composable Fabric becomes aware of its HPE SimpliVity hyperconverged environment and automates many routine network configuration and management tasks. For example, the software-defined network automatically discovers hyperconverged nodes, virtual controllers, and hypervisor guest VMs, and can dynamically provision the network fabric in response to real-time compute and storage events, such as the addition of a new node. The highly adaptable data center network fabric delivers high performance and service quality for diverse applications and workloads while making better use of network capacity.
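The underlying pattern is event-driven automation: infrastructure events trigger network provisioning actions. The sketch below is hypothetical; the FabricController class and event names are illustrative assumptions, not HPE Composable Fabric’s actual API.

```python
# Hypothetical sketch of event-driven fabric automation: compute and
# storage events trigger network provisioning. Not HPE's actual API.

class FabricController:
    def attach_node(self, node_id: str) -> None:
        print(f"fabric: ports attached for {node_id}")

    def provision_vm_path(self, vm_id: str, host_id: str) -> None:
        print(f"fabric: path provisioned for {vm_id} on {host_id}")

def on_event(event: dict, fabric: FabricController) -> None:
    """Map infrastructure events to network provisioning actions."""
    if event["type"] == "node_added":
        # A new hyperconverged node was discovered: wire it in.
        fabric.attach_node(event["node_id"])
    elif event["type"] == "vm_started":
        # A guest VM came up: provision its network path end to end.
        fabric.provision_vm_path(event["vm_id"], event["host_id"])

# Usage: simulate the discovery of a new node.
on_event({"type": "node_added", "node_id": "simplivity-03"}, FabricController())
```

The point of the pattern is that the network reconfigures itself in step with compute and storage, rather than waiting for a manual change ticket.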
