Enterprise applications are subjected to intense but unpredictable loads. Ensuring consistent application delivery, in line with Quality of Service (QoS) guarantees, requires sophisticated load balancing and related capabilities such as clustering and performance management. Application Delivery Controllers (ADCs) perform these tasks, helping application owners deliver a reliable, fast application user experience.
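To make the core idea concrete, here is a minimal sketch of the simplest scheduling policy an ADC might apply, round-robin load balancing. The class name and backend addresses are invented for illustration; real controllers layer health checks, session persistence, and QoS policies on top of this.

```python
import itertools

# Minimal round-robin load-balancer sketch (illustrative only; not any
# vendor's implementation).
class RoundRobinBalancer:
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)  # rotate through backends forever

    def pick_backend(self):
        return next(self._cycle)  # each request goes to the next server in turn

balancer = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
for _ in range(4):
    print(balancer.pick_backend())  # 10.0.0.1, 10.0.0.2, 10.0.0.3, 10.0.0.1
```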
HPE is rolling out the next generation of its Nimble Storage platform, overhauled to better meet the ever-increasing performance demands of data-center workloads, including real-time web analytics, business intelligence, and mission-critical enterprise resource applications. The new HPE Nimble Storage All Flash arrays, as well as Nimble Adaptive Flash arrays for hybrid implementations (mixing solid-state drives and hard disk drives, for example), are generally available from May 7. Both have been engineered to support NVMe (non-volatile memory express), an extremely fast communications protocol and controller interface designed to move data to and from SSDs over the PCIe bus. NVMe SSDs are expected to offer a speed improvement of up to two orders of magnitude over prior SSDs.
A job listing on Intel’s official webpage for a senior CPU micro-architect and designer to build a revolutionary microprocessor core has fueled speculation that the company is finally going to redesign its Core-branded CPU architecture after more than 12 years. Intel introduced the Core architecture in 2006, and that was itself an iteration of the P6 microarchitecture first introduced with the Pentium Pro in 1995. So, in some ways, Intel in 2018 is running on a 1995 design. Even though its tick/tock model called for a new microarchitecture every other year, each new architecture was, in fact, a tweak of the old one rather than a clean-sheet design. The job is based in Intel’s Hillsboro, Oregon, facility, where all of the company’s major development work is done. The listing initially said “join the Ocean Cove team to deliver Intel’s next-generation core design in Hillsboro, Oregon.” That entry has since been removed from the posting.
A startup funded by Cisco and featuring some big-name talent has come out of stealth mode with the promise of unifying data stored across multiple distributed data centers. RStor is led by Giovanni Coglitore, the former head of the hardware team at Facebook and, before that, CTO at Rackspace. The company’s C-level ranks also include veterans of EMC’s technology venture capital arm, Amazon Web Services, Microsoft, Google, VMware, Dropbox, Yahoo, and Samsung.
Buoyed by $45 million in venture capital money from Cisco, the company has announced RStor, a “hyper-distributed multicloud platform” that enables organizations to aggregate and automate compute resources from private data centers, public cloud providers, and trusted supercomputing centers across its networking fabric.
Dell EMC this week unveiled storage, server and hyperconvergence upgrades aimed at enterprises that are grappling with new application types, ongoing digital transformation efforts, and the pressure to deliver higher performance and greater automation in the data center. On the storage front, Dell EMC rearchitected its flagship VMAX enterprise product line, which is now called PowerMax, to include NVMe support and a built-in machine learning engine. Its XtremIO all-flash array offers native replication for the first time and a lower entry-level price.
Rackspace’s latest project, called Private Cloud Everywhere, is a collaboration with VMware to offer what it calls Private Cloud as a Service (PCaaS), making on-demand provisioning of virtualized servers available at most colocation facilities and data centers. PCaaS essentially means provisioning data-center hardware the same way you would on Amazon Web Services, Microsoft Azure, or Google Cloud, but instead of using the cloud providers, you run it on your own hardware, in Rackspace data centers, or in a third-party colocation facility. Because customers have the option of deploying a private cloud wherever they want physically, the service can help with data-sovereignty requirements, such as European rules that require certain data to stay inside national borders.
If the numbers weren’t so clear, I’m not sure I’d have believed it was possible. After all, an enormous business that earned $17.46 billion in 2017 and grew approximately 45 percent is bound to slow down, right? Simply maintaining that insane level of growth would be virtually impossible, right? But somehow, Amazon Web Services (AWS) is actually accelerating its growth. After expanding by 42 percent in the third quarter of 2017, AWS increased that figure to 45 percent in the fourth quarter … and then did it again, growing 49 percent in the first quarter of calendar 2018.
As Tom Brady once said, “if you don’t play to win, don’t play at all.” This mantra is reflected in Brady’s TB12 Method, a holistic lifestyle built around 12 principles that make up the optimal approach to exercise, training and living a life of vitality. It’s built on the premise of stopping an injury before it happens through prehabilitation and taking intelligent, strategic preventative measures. And while implementing all 12 principles is not required, their effect is cumulative: the more you can incorporate, the better your results will be. While that methodology applies to training superior athletes, data center managers have their own set of principles for ensuring optimal data center performance. In what we’ve dubbed the DCM10 Method, there are 10 fundamental steps every data center manager should use to evaluate their current data center and help transform its functionality. As with the TB12 Method, not all of these steps are required to create a successful strategy, but the more of these best practices you apply, the higher the performance across the board.
Cisco this week fortified its storage family with two new 32G Fibre Channel switches and software designed to help customers manage and troubleshoot their SANs. The new switches, the 48-port MDS 9148T and 96-port MDS 9396T, feature a technology called Auto Zone that detects any new storage servers or devices that log into a SAN and automatically zones them without manual configuration.
The idea is to eliminate the cycles spent provisioning new devices and avert the errors that typically occur when manually configuring complex zones. Even when host or storage hardware is upgraded or a faulty unit is replaced, the switch automatically detects the change and zones the new hardware into the SAN, Cisco said.
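Conceptually, Auto Zone turns zoning into an event-driven step that fires on fabric login. The Python sketch below illustrates only that idea; the class and method names are invented for this example and do not reflect Cisco’s actual implementation or API.

```python
# Hypothetical event-driven auto-zoning sketch; all names are invented
# for illustration and are not Cisco's Auto Zone implementation.
class FabricZoner:
    def __init__(self):
        self.zones = {}  # zone name -> set of device WWPNs

    def on_device_login(self, wwpn, device_type):
        """Fires when a host or storage port logs into the SAN fabric."""
        zone = f"auto_zone_{device_type}"
        self.zones.setdefault(zone, set()).add(wwpn)
        print(f"{wwpn} zoned into '{zone}' with no manual configuration")

fabric = FabricZoner()
fabric.on_device_login("10:00:00:05:9b:21:3c:01", "host")
fabric.on_device_login("50:06:01:60:3e:a0:12:34", "storage")
```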
If you are a large-scale enterprise, Google has a service called Dedicated Interconnect that offers 10Gbps connections between your data center and one of theirs. But what if you are a smaller firm that doesn’t need that kind of bandwidth, or the expense that goes with it? Google now has you covered. The cloud giant recently announced Google Cloud Partner Interconnect, a means of establishing a direct connection between an SMB data center (with emphasis on the medium-sized business) and Google’s hybrid cloud platform. The company did this in concert with 23 ISP partners around the globe.
Cyxtera Technologies has launched the Cyxtera Extensible Data Center (CXD) platform, data-center software that offers customers rapid, on-demand provisioning of a host of colocation and connectivity services. Through a combination of a network and services provisioning engine and an intra-data-center software-defined network fabric, the CXD platform allows colocation customers to provision services on demand, including through a web console, bringing a cloud-like experience to colocation.
CXD comes with two key features: the Unified Services Port and the Network Exchange. The Unified Services Port enables access to multiple data center services over a single physical port, while the Network Exchange provides automated provisioning to select network service providers. The caveat is that those providers must also be running CXD.
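As a purely hypothetical illustration of what on-demand provisioning looks like from the customer side, the sketch below issues a REST-style provisioning request. The endpoint, payload fields, port identifier, and token are all invented for this example and are not Cyxtera’s actual API.

```python
# Purely hypothetical on-demand provisioning sketch; the endpoint and
# payload are invented for illustration, not Cyxtera's real API.
import json
import urllib.request

def provision_service(service_type: str, port_id: str) -> dict:
    payload = json.dumps({"service": service_type, "port": port_id}).encode()
    req = urllib.request.Request(
        "https://api.example.com/v1/services",  # placeholder endpoint
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer <token>"},  # placeholder credential
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # provisioning confirmation from the platform

# e.g. attach a network-exchange circuit to a unified services port:
# provision_service("network-exchange", "usp-0042")
```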
Facing the reality that many enterprise data-center managers now work in a hybrid cloud environment, Juniper Networks is set to release Contrail Enterprise Multicloud, a software package designed to monitor and manage workloads and servers deployed across networking and cloud infrastructure from multiple vendors. Enterprises are moving to the cloud for operational efficiency and cost optimization, but at the moment most big companies are operating hybrid environments, which has added to the complexity of managing computing infrastructure.
Juniper is competing with a variety of networking and multicloud orchestration tools from major data center players, including VMware's NSX, Cisco's ACI, and HPE's OneSphere. What's more, Juniper does not have as big a presence in the data center as some of its rivals, particularly Cisco.
A job posting on Facebook has led to speculation that the company is building a team to design its own semiconductors, thus ending its reliance on Intel. If so, it would be another step in the trend of major firms building their own silicon. Bloomberg was the first to note a job opening, titled “Manager, ASIC Development,” seeking a manager to help build an “end-to-end SoC/ASIC, firmware and driver development organization.” There is also an opening for an “ASIC & FPGA Design Engineer,” which seems an unusual position for a social network to need.
Cray owes its survival to AMD. The company was bought by SGI in 1996, hollowed out, and spun off in 2000 with very little left. SGI had taken most of the talent and IP. Desperate for a win, Cray began working with Sandia National Labs in 2002 to build a supercomputer based on x86 technology. Intel at the time was dismissive of 64-bit x86 and was promoting Itanium. AMD had other plans and was in the process of developing Athlon for desktops and Opteron for servers.
The project came to be known as Red Storm, starting with single-core Opterons and upgrading to dual- and quad-core CPUs as they hit the market. Red Storm ranked as high as number two on the Top 500 list of supercomputers. More important, it served as the basis for the XT3 line of supercomputers that revived Cray as a player in that field, and lit a fire under Intel as well.
With an upcoming data tsunami expected to absorb up to 20 percent of global electricity by 2025, according to some experts, data center energy sources are a hot talking point, and the photovoltaic solar panel is one of the hottest and most viable fossil-fuel alternatives. However, there’s an obvious problem with the solar panel as an electricity source: When sunlight drops off on cloudy or rainy days, so does power output. Chinese scientists, though, think they have a solution: a generalized hybrid panel that also harnesses the power of rain, compensating for the lack of sun on iffy days and at night.
Cisco has rolled out software tools to help customers control access to, and more easily manage, the burgeoning number of enterprise IoT devices on their networks. The company has also begun filling out its Catalyst 9000 line of intent-based networking (IBN) switches with new boxes aimed at customers wanting 100Gbps and 25Gbps network migration options.
IoT access control, security, management
The need for much better enterprise IoT access control is obvious, Cisco says: According to its Midyear Cybersecurity Report for 2017, most companies are not aware of what IoT devices are connected to their networks.
A French company called Accelize has launched AccelStore, an app store built specifically around custom-programmed applications for FPGA accelerators. FPGAs are dedicated processors known for two things: very fast processing and reprogrammability. CPUs have to be general-purpose processors that run an OS, but an FPGA has the luxury of doing one dedicated task, so the architecture is different. The problem is that while FPGAs are reprogrammable to do new, specific tasks, they aren’t easy to program; in fact, it’s often pretty hard. That’s Accelize’s sales pitch: rather than you writing the code to reprogram the FPGAs in your servers, it has the templates for you.
We live in an age of instrumentation, where everything that can be measured is being measured so that it can be analyzed and acted upon, preferably in real time or near real time. This instrumentation and measurement process is happening in the physical world as well as in the virtual world of IT. For example, in the physical world, a solar energy company has instrumented all its solar panels to provide remote monitoring and battery management. Usage information is collected from a customer’s panels and sent via mobile networks to a database in the cloud. The data is analyzed, and the resulting information is used to configure and adapt each customer’s system to extend the life of the battery and control the product. If an abnormality or problem is detected, an alert can be sent to a service agent to mitigate the problem before it worsens. Thus, proactive customer service is enabled based on real-time data coming from the solar energy system at a customer’s installation.
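As a rough sketch of that analyze-and-alert pipeline, consider the Python below. The field names, thresholds, and alerting function are all hypothetical, not taken from any vendor’s system; a production pipeline would sit behind a message queue and a cloud database rather than a single function call.

```python
# Hypothetical telemetry pipeline sketch: ingest a battery reading,
# check it against simple thresholds, and raise a proactive alert.
from dataclasses import dataclass

@dataclass
class BatteryReading:
    customer_id: str
    voltage: float        # volts
    temperature_c: float  # degrees Celsius

LOW_VOLTAGE = 11.5   # illustrative thresholds, not real product values
MAX_TEMP_C = 45.0

def dispatch_alert(reading: BatteryReading, issue: str) -> None:
    # In a real system this would notify a service agent (ticket, email, SMS).
    print(f"ALERT for {reading.customer_id}: {issue}")

def process_reading(reading: BatteryReading) -> None:
    # Flag abnormalities so a service agent can act before they worsen.
    if reading.voltage < LOW_VOLTAGE:
        dispatch_alert(reading, f"low battery voltage ({reading.voltage} V)")
    if reading.temperature_c > MAX_TEMP_C:
        dispatch_alert(reading, f"battery overheating ({reading.temperature_c} C)")

process_reading(BatteryReading("customer-42", voltage=11.1, temperature_c=48.2))
```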
Ask a group of IT leaders to define what a hybrid cloud is, and their answers are likely to be as diverse as the companies they work for. Once defined in black-and-white terms as a mix of apps residing both in the public cloud and in enterprise data centers, hybrid cloud has become much more complicated as the number of apps used in the enterprise grows and their integration requirements mount. The average company uses more than 1,400 cloud services, according to Skyhigh Networks, often because lines of business are pressured to innovate, leaving little time to develop a strategy for running those services in the most efficient and cost-effective way.
Serverless computing is an emerging trend that is likely to explode in popularity this year. It takes the idea of a smaller server footprint to the next level. First there were virtual machines, which ran a whole instance of an operating system. Then came containers, which load only the bare minimum of the OS required to run the app, shrinking the footprint further. Now we have “serverless” apps, which is a bit of a misnomer: they still run on a server; they just don’t have a dedicated server, virtual machine, or container running 24/7. They run in a server instance until they complete their task, then shut down. It’s the ultimate in shrinking the server footprint and reducing server load.
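To make the model concrete, here is a minimal function written in the style of an AWS Lambda Python handler (the event field below is hypothetical). The platform spins up an instance only when an event arrives and tears it down after the handler returns, which is exactly the run-then-shut-down behavior described above.

```python
# Minimal serverless-style handler sketch, following the AWS Lambda
# Python handler convention. No server runs 24/7: the platform invokes
# this function per event, then shuts the instance down.
def lambda_handler(event, context):
    # 'event' carries the trigger payload; 'name' is a hypothetical field.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```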