Google is known to fiercely guard its data center secrets, but not Facebook. The social media giant has released, as open source, two significant tools it uses internally to operate its massive social network. The company has released Katran, the load balancer that keeps its data centers from overloading, under the GNU General Public License v2.0; the code is available on GitHub. In addition to Katran, the company is offering details on its Zero Touch Provisioning tool, which it uses to help engineers automate much of the work required to build its backbone networks.
Applications have become a key driver of revenue rather than merely a tool to support business processes, and the network that provides their connection points acts as the heart of every application. Given the new, critical importance of the application layer, IT professionals are looking for ways to improve their network architecture. A new era of campus network design is required, one that enforces policy-based automation from the edge of the network to public and private clouds using an intent-based paradigm.
The race to automate
An autonomous network was once seen as part of a utopian ideal, albeit one far off in the future. It would form the backbone of everything we did, managing a hyper-connected world in which everything, from the minutiae of knowing when the milk in the fridge needed replacing to ramping network services up or down, would happen without the need for human intervention. In a recent ACG Research survey of network service providers, internet content providers, cloud service providers and large enterprises, 100 percent of respondents said they felt the need to pursue automation, and 100 percent are optimistic about automation's future. Additionally, 75 percent of respondents indicated that they'll have full or significant network automation within the next five years.
ORLANDO – Cisco made a bold move this week to broaden the use of its DNA Center by opening up the network controller, assurance, automation and analytics system to the community of developers looking to take the next step in network programming. Introduced last summer as the heart of its Intent-Based Networking initiative, Cisco DNA Center features automation capabilities, assurance settings, fabric provisioning and policy-based segmentation for enterprise networks.
David Goeckeler, executive vice president and general manager of networking and security at Cisco, told the Cisco Live customer audience here that DNA Center's new open platform capabilities mean all of its powerful, networkwide automation and assurance tools are available to partners and customers. New applications can use the programmable network for better performance, security and business insights, he said.
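To make the idea of applications consuming the programmable network a little more concrete, here is a minimal Python sketch that authenticates to a DNA Center instance and pulls its device inventory over the platform's REST interface. The hostname and credentials are placeholders, and the two endpoint paths follow Cisco's published Intent API conventions; treat them as assumptions and verify them against the documentation for your DNA Center release.

```python
# Minimal sketch: authenticate to a DNA Center instance and list its managed
# network devices through the Intent API. Hostname and credentials are
# placeholders; endpoint paths should be checked against your release's docs.
import requests

DNAC = "https://dnac.example.com"    # hypothetical DNA Center address
AUTH = ("devnetuser", "devnetpass")  # hypothetical credentials

def get_token():
    # Token request uses HTTP basic auth and returns a JSON body with "Token".
    resp = requests.post(f"{DNAC}/dna/system/api/v1/auth/token",
                         auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()["Token"]

def list_devices(token):
    # Device inventory call; subsequent requests carry the token in a header.
    resp = requests.get(f"{DNAC}/dna/intent/api/v1/network-device",
                        headers={"X-Auth-Token": token}, verify=False)
    resp.raise_for_status()
    return resp.json().get("response", [])

if __name__ == "__main__":
    for device in list_devices(get_token()):
        print(device.get("hostname"),
              device.get("managementIpAddress"),
              device.get("softwareVersion"))
```

A partner application would build on calls like these, layering its own analytics, security checks or business logic on top of the inventory, assurance and policy data the controller exposes.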
Although much of the initial excitement around blockchain technology centered on bitcoin and financial services, the technology is quickly showing its applicability to other areas of business value. For example, Walmart has invested in blockchain to improve its supply chain operations. In initial tests, the retailer says, the technology reduced the time needed to track food as it moves from farms to stores, from six days down to two seconds. That's all due to the decentralized nature of blockchain. It is often referred to as a distributed consensus model: a network of nodes, each holding the same chain of blocks. Each node contains the exact same data and transaction history, and that information is secured with cryptographic hashes and digital signatures.
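The chained, tamper-evident structure behind that claim is easy to show in code. The sketch below is a generic hash chain in Python, not Walmart's system or any vendor's product: each block commits to the previous block's hash, so rewriting any historical record breaks verification of every block that follows.

```python
# Minimal sketch of a hash chain, the core data structure behind a blockchain.
# Generic illustration only; records and contents here are hypothetical.
import hashlib
import json
import time

def make_block(data, prev_hash):
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    # The block's identity is a hash over its full contents, including the
    # previous block's hash, which is what chains the records together.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    # Recompute every hash; any tampered record invalidates the chain.
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("pallet 42 left farm A", chain[-1]["hash"]))
chain.append(make_block("pallet 42 arrived at store 7", chain[-1]["hash"]))
print(verify(chain))            # True
chain[1]["data"] = "tampered"   # rewrite history...
print(verify(chain))            # False: the chain no longer verifies
```

In a real distributed ledger, every participating node holds a copy of a chain like this, and a consensus protocol decides which block gets appended next; that replication is what is meant by every node holding the same data and transaction history.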
In campus networking, a number of emerging trends are changing the way networks will be modeled in the future. These include mobility, the Internet of Things (IoT), and uniform security across wired and wireless connections. To keep pace with these trends, a new era of networking is required, one that enforces policy-based automation from the edge of the network to public and private clouds using an intent-based paradigm. SD-Access is one example.
Cisco's annual user event, Cisco Live, is being held in Orlando, Florida, this week. While Orlando is home to DisneyWorld, Universal Studios and other places where fantasies come true, the one thing that isn't make-believe is the turnaround of Cisco since Chuck Robbins took over as CEO. When the baton was passed to Robbins in August of 2015, Cisco's stock was trading at about $25/share and had been moving sideways for years. Today, it's trading at about $45/share and at a 17-year high, and the turnaround is well underway.
Cisco goes back to the network
How did Robbins get Cisco's mojo back in such a short period of time? The answer lies in the company's roots and a refocus on the network. In fact, when Robbins took over as CEO, I wrote a post outlining some priorities for him as he stepped into the role. My first point was to approach IT through the lens of the network. In the years leading up to the transition to Robbins, I felt Cisco had tried too hard to prove itself as a server and traditional IT vendor instead of staying true to networking.
There's no doubt about it – today's workers have fully embraced the trend toward remote working. In fact, according to last year's Gallup "State of the American Workplace" survey, roughly 43 percent of employees report they have worked remotely. It would seem that the genie is out of the bottle, and it's not likely to go back in without a fight. This mass migration off premises changes the dynamic between users and IT help desk teams. An operator can no longer run down the hall to ask a user "Can you show me what the problem is with your computer?" More importantly, without having total visibility in the cloud, the operator may be completely unable to 'see' any problems that users are experiencing as they work remotely.
Almost every week we speak with an enterprise that is curious about building its own IoT application enablement platform (AEP) or IoT device management (DM) platform. The idea is straightforward: an enterprise wants total control over the technology it deploys, so it chooses to hire developers to build the perfect, inexpensive platform. Then what happens? Sometimes everything goes exceedingly well and the IoT platform delivers as anticipated. Other times, the enterprise determines that:
it takes more time and money to build the platform than anticipated
it takes more staff than anticipated to support the platform on an ongoing basis
it is very hard to keep the platform's features up to date with the features offered by best-in-class vendors' IoT platforms
the initial in-house platform was great, but scaling and modifying the platform to meet future requirements is exceedingly difficult due to the chosen platform architecture
So, what are the total enterprise costs of building an enterprise-grade IoT platform versus buying IoT platform services from a third-party AEP or DM vendor? It really depends on the IoT solution that the enterprise wants to deploy.
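One way to frame that "it depends" is a simple multi-year cost model. The sketch below is purely illustrative: every figure is a made-up placeholder rather than survey data, but it shows the shape of the comparison, an up-front build cost plus ongoing staffing on one side and per-device platform fees on the other.

```python
# Illustrative build-vs-buy cost model for an IoT platform.
# All figures are hypothetical placeholders; plug in your own estimates.

def build_cost(years, devices):
    initial_dev = 750_000      # assumed in-house development cost
    annual_staff = 300_000     # assumed ongoing engineering and support
    infra_per_device = 4.0     # assumed yearly infrastructure cost per device
    return initial_dev + years * (annual_staff + infra_per_device * devices)

def buy_cost(years, devices):
    integration = 100_000      # assumed one-time integration effort
    fee_per_device = 12.0      # assumed yearly vendor fee per device
    return integration + years * fee_per_device * devices

for devices in (10_000, 100_000, 1_000_000):
    b, v = build_cost(5, devices), buy_cost(5, devices)
    print(f"{devices:>9} devices over 5 years: build ${b:,.0f}  buy ${v:,.0f}")
```

With placeholder numbers like these, buying tends to win for small device fleets while building can amortize at very large scale, which is exactly why the honest answer depends on the solution being deployed.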
ORLANDO – Cisco's developer program, DevNet, is on a hot streak. Speaking at Cisco Live 2018, DevNet CTO Susie Wee said the group, which was founded in 2014, now has 500,000 registered members. "That's a pretty cool milestone, but what does it mean? It means that we've hit critical mass with a developer community who can program the network," Wee said. "Our 500,000 strong community is writing code that can be leveraged and shared by others. DevNet is creating a network innovation ecosystem that will be the hub of the next generation of applications and the next generation of business." At Cisco Live, the company also announced several expansions to the DevNet world.
Red Hat just announced its role in bringing a top scientific supercomputer into service in the U.S. Named "Summit" and housed at the Department of Energy's Oak Ridge National Laboratory, the system, with its 4,608 IBM compute servers, is running — you guessed it — Red Hat Enterprise Linux.
The Summit collaborators
With IBM providing its POWER9 processors, Nvidia contributing its Volta V100 GPUs, Mellanox bringing its InfiniBand interconnect into play, and Red Hat supplying Red Hat Enterprise Linux, the level of inter-vendor collaboration has reached something of an all-time high, and an amazing new supercomputer is now ready for business.
The team designing Oak Ridge National Laboratory's new Summit supercomputer correctly predicted the rise of data-centric computing – but its builders couldn't forecast how bad weather would disrupt the delivery of key components. Nevertheless, almost four years after IBM won the contract to build it, Summit is up and running on schedule. Jack Wells, director of science for the Oak Ridge Leadership Computing Facility (OLCF), expects the 200-petaflop machine to be fully operational by early next year.
"It's the world's most powerful and largest supercomputer for science," he said.To read this article in full, please click here
There are a number of ways to compare files and directories on Linux systems. The diff, colordiff, and wdiff commands are just a sampling of the commands you're likely to run into. Another is comm. The comm command (think "common") lets you compare the contents of two files in side-by-side columns. Where diff shows which lines differ and where the differences occur, using < and > signs to indicate which file each line came from, comm offers some different options with a focus on common content. Let's look at the default output and then some other features.
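Since the original screen captures don't carry over here, a rough Python analogue may help illustrate what comm computes from two sorted inputs: lines unique to the first file, lines unique to the second, and lines common to both, printed in comm's three-column layout. The sample contents are hypothetical, and this is a sketch of the concept, not a substitute for the comm utility.

```python
# Rough Python analogue of comm's three-column view. The two "files" below
# are hypothetical sample contents, already sorted, as comm expects.
file1 = ["apple", "banana", "cherry", "grape"]
file2 = ["banana", "cherry", "kiwi", "mango"]

only_1 = [line for line in file1 if line not in file2]   # column 1
only_2 = [line for line in file2 if line not in file1]   # column 2
common = [line for line in file1 if line in file2]       # column 3

for line in only_1:
    print(line)              # no indent: unique to the first file
for line in only_2:
    print("\t" + line)       # one tab: unique to the second file
for line in common:
    print("\t\t" + line)     # two tabs: common to both files
```

The real comm command interleaves these columns in sorted order as it walks both files, and its -1, -2 and -3 options suppress whichever columns you don't want to see.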
Before autonomous data correction software met the mainframe, a day in my life as a DBA looked like this:
2 a.m. – Diagnose a critical maintenance utility failure for a panicked night operator, re-submit the REORG job, and head back to bed.
8 a.m. – Leverage a database tool to pull pertinent data for an emergency report on an internal customer's sales region.
9 a.m. – Use various database tools and review performance-related data to improve data access for developers alarmed that their application performance is slowly degrading.
12 p.m. – As lunch approaches, identify where I can save data for a scheduled backup, having noticed unforeseen space problems, and successfully capture my backup.
Digital transformation requires companies to be nimbler, more proactive, and more responsive to customers. Our always-on culture has begotten the need for always-available data. Meanwhile, the tolerance for downtime continues to plummet. Whether it's a bank customer conducting a financial transaction or a salesperson submitting an order, a processing delay is no longer acceptable. An interruption like this sets off an IT scramble to determine how to fix the "app-data gap," that is, whatever is causing delays in data delivery to applications. To alleviate the app-data gap and improve data-center operations, many organizations have turned to flash storage, which speeds delivery and improves performance. And while flash provides much better efficiency and speed than traditional hard-disk storage, it alone doesn't solve other problems, such as configuration and interoperability issues, that also cause the app-data gap.
Intel formally introduced the Optane DC persistent memory modules late last week, an entirely new class of memory and storage technology designed to sit between storage and memory, providing expanded memory capacity and faster access to data. Unlike SSDs, which plug into a PCI Express slot, Optane DC is built like a thick memory DIMM and plugs into the DIMM slots. Many server motherboards offer as many as eight DIMM slots per CPU, so some can be allocated to Optane and some to traditional memory. That's important because Optane serves as a cache of sorts, storing frequently accessed data in its memory rather than forcing the server to fetch it from disk. So the server only has to reach out to Optane memory, which is sitting right next to the CPU, rather than to a storage array over Fibre Channel.
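The payoff is easiest to see as a latency calculation. The toy model below is not Intel's software stack, just an illustrative Python sketch with assumed, order-of-magnitude access times, showing how much the average read time drops once a reasonable share of requests is served from a DIMM-attached tier instead of a storage array.

```python
# Toy model of average read latency with and without a DIMM-attached
# persistent-memory tier. All latency figures are illustrative assumptions.

OPTANE_DIMM_NS = 350   # assumed access latency for the DIMM-attached tier (ns)
ARRAY_NS = 200_000     # assumed round trip to a networked storage array (ns)

def avg_latency(hit_rate, fast_ns, slow_ns):
    """Average read time when a fraction hit_rate is served from the fast tier."""
    return hit_rate * fast_ns + (1.0 - hit_rate) * slow_ns

baseline = ARRAY_NS    # every read goes out to the storage array
for hit in (0.5, 0.9, 0.99):
    tiered = avg_latency(hit, OPTANE_DIMM_NS, ARRAY_NS)
    print(f"hit rate {hit:4.0%}: {tiered:9,.1f} ns average "
          f"vs {baseline:,} ns baseline")
```

Even with generous assumptions for the array, the average falls quickly as the hit rate climbs, which is the basic case for putting a large, cheaper-than-DRAM tier right on the memory bus.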
According to the McKinsey Global Institute, IoT will have a total potential impact of up to $11.1 trillion a year by 2025. With so much opportunity, it makes sense that so many companies are looking to connect their devices and enter the IoT arena. But simply adding an internet connection to your widget doesn't mean your business will make immediate profits. IoT products come with significant ongoing costs – web infrastructure, networking, and other connectivity and data-related costs. If you can't justify the additional value to your customers, those costs will eat away at your margins.
IBM continues to mold the Big Iron into a cloud and devops beast. This week IBM and its long-time ally CA teamed up to link the mainframe and IBM's Cloud Managed Services on z Systems (zCloud) software with cloud workload-development tools from CA, with the goal of better-performing applications for private, hybrid or multicloud operations.
IBM says zCloud offers customers a way to move critical workloads into a cloud environment with the flexibility and security of the mainframe. In addition, the company offers the IBM Services Platform with Watson, which provides another level of automation within zCloud to assist clients with their moves to cloud environments.
In a previous blog post, 5 reasons to buy refurbished Cisco equipment, I talked about five facts to keep in mind as you consider how to proceed with your Cisco hardware solutions. Well, my engineering group reminded me of something else to consider for any hardware solution, not just a Cisco solution.
Cabling!
It seems that cabling can be an afterthought. Sure, you just used a blended solution of new and pre-owned hardware, where each makes the most sense in your infrastructure and creates a unique and potentially game-changing opportunity to maximize value in your investments.
Traditional location positioning such as GPS isn't going to be suitable for a Location of Things world filled with Internet of Things (IoT) sensors, experts say. The centralized, anchor-based system we use now, found in GPS, mobile network cell tower positioning services, and Wi-Fi-based location positioning, is going to be a problem. The usual suspects are bandwidth, excessive power use, and cost. The problem is that IoT devices are required to communicate with positioning anchors, whether satellites or radio towers. That's bandwidth-intensive, and it can take a significant amount of power to cover the distances involved, as well as to run the multiple chips needed. The system is also conceivably susceptible to congestion as the number of devices increases; projections are, ultimately, for billions and billions of IoT things worldwide.
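A crude back-of-envelope calculation shows why the power concern is taken seriously: radio transmit energy grows steeply with distance, roughly with distance raised to a path-loss exponent somewhere between 2 and 4, so reporting to a far-away anchor costs far more per message than ranging against nearby devices. The Python sketch below uses made-up distances and a generic exponent; it is an illustration of the scaling, not a calibrated radio model.

```python
# Back-of-envelope comparison of per-message transmit energy versus range.
# Constants are illustrative only; real radios and environments differ widely.

def relative_tx_energy(distance_m, path_loss_exponent=3.0):
    """Energy relative to a 1 m link, assuming power scales roughly with d**n."""
    return distance_m ** path_loss_exponent

nearby_peer = relative_tx_energy(10)     # e.g., a neighboring IoT node
cell_tower = relative_tx_energy(2_000)   # e.g., a distant positioning anchor

print(f"10 m peer link  : {nearby_peer:,.0f}x the 1 m reference energy")
print(f"2 km anchor link: {cell_tower:,.0f}x the 1 m reference energy")
print(f"anchor / peer ratio: {cell_tower / nearby_peer:,.0f}x")
```

That scaling, multiplied across billions of battery-powered sensors, is the core of the argument against purely anchor-based positioning for the Location of Things.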