
Category Archives for "Network World Data Center"

Fiber transmission range leaps to 2,500 miles, and capacity increases

Fiber transmission could be more efficient, reach farther, carry more traffic and be cheaper to implement if the work of scientists in Sweden and Estonia is successful. In a recent demonstration, researchers at Chalmers University of Technology in Sweden and Tallinn University of Technology in Estonia used new, ultra-low-noise amplifiers to extend the range of a normal fiber-optic transmission link six-fold. And in a separate experiment, researchers at DTU Fotonik, the Technical University of Denmark, used a unique frequency comb to push more data than the total of all internet traffic down one solitary fiber link.

Fiber transmission limits

Signal noise and distortion have always been behind the limits of traditional (and fairly inefficient) fiber transmission. They are the main reason transmission distance and capacity are restricted with the technology. Experts believe, however, that if the noise introduced by the amplifiers used to gain distance could be cleaned up, and the signal distortion inherent in the fiber itself eliminated, fiber could become more efficient and less costly to implement.
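As a rough, textbook-level illustration of why amplifier noise sets these limits (our sketch, not the researchers' analysis), the Shannon limit ties a link's capacity to its signal-to-noise ratio:

\[
C = B \log_2\!\left(1 + \mathrm{SNR}\right)
\]

Because each amplified span adds noise, the received SNR falls roughly in proportion to the number of spans. Halving the noise each amplifier contributes therefore allows roughly twice as many spans, and hence roughly twice the reach, at the same received SNR, while a higher SNR also supports denser modulation and more capacity per fiber.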

Data center power efficiency increases, but so do power outages

A survey from the Uptime Institute found that while data centers are getting better at managing power than ever before, the rate of failures has also increased, and there is a causal relationship. The Global Data Center Survey report from Uptime Institute gathered responses from nearly 900 data center operators and IT practitioners, both from major data center providers and from private, company-owned data centers. It found that the power usage effectiveness (PUE) of data centers has hit an all-time low of 1.58. By way of contrast, the average PUE in 2007 was 2.5; it dropped to 1.98 in 2011 and to 1.65 in the 2013 survey.
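PUE is simply total facility power divided by the power delivered to IT equipment, so the survey's trend is easy to sanity-check. A minimal sketch in Python (the function and the kilowatt figures are ours for illustration; only the ratios come from the survey):

    def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
        """Power usage effectiveness: total facility power / IT power.

        A PUE of 1.0 would mean every watt goes to IT gear; real sites run
        higher because of cooling, power distribution losses and lighting.
        """
        return total_facility_kw / it_equipment_kw

    # A site drawing 1,580 kW overall to run a 1,000 kW IT load matches
    # the 2018 survey average.
    print(pue(1580, 1000))  # 1.58 (2018)
    print(pue(2500, 1000))  # 2.5  (2007)
    print(pue(1980, 1000))  # 1.98 (2011)
    print(pue(1650, 1000))  # 1.65 (2013)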

Intel continues to optimize its products around AI

Normally, this is the time of year when Intel would hold its Intel Developer Forum conference, replete with new product announcements. But with the demise of the show last year, the company instead held an all-day event that it live-streamed over the web. The company's Data Centric Innovation Summit was the backdrop for a series of processor and memory announcements aimed at the data center and artificial intelligence in particular. Even though Intel is without a permanent CEO, it still has considerable momentum. Navin Shenoy, executive vice president and general manager of the Data Center Group, did the heavy lifting.

News about Cascade Lake, the rebranded Xeon server chip

First is news around the Xeon Scalable processor, the rebranded Xeon server chip. The next-generation chip, codenamed "Cascade Lake," will feature a memory controller for Intel's new Optane DC persistent memory and an embedded AI accelerator that the company claims will speed up deep-learning inference workloads eleven-fold compared with current-generation Xeon Scalable processors.

Cisco, Arista settle lawsuit, refocus battle on network, data center, switching arenas

After nearly four years of slashing at each other in court with legal swords, Cisco and Arista have agreed to disagree, mostly. To settle the litigation mêlée, Arista has agreed to pay Cisco $400 million, which will result in the dismissal of all pending district court and International Trade Commission litigation between the two companies. For Arista, the agreement should finally end any customer fear, uncertainty and doubt caused by the lawsuit. In fact, Zacks Equity Research wrote that the settlement is likely to benefit Arista immensely.

Enterprises should be able to sell their excess internet capacity

Peer-to-peer exchanges of excess bandwidth could one day be commonplace, says a firm that is attempting to monetize redundant internet capacity. It wants to create a marketplace for reselling internet throughput that organizations have already bought but that often sits idle outside working hours. Dove Network wants to "do to the telecom industry what Airbnb did to the hotel industry," co-founder Douglas Schwartz told me via email. The idea is that those with excess capacity, such as a well-provisioned office or data center that isn't using all of its throughput all of the time (during the weekend, say), allocate that spare bandwidth to Dove's network. Passing data users, such as Internet of Things sensors or individuals going about their business, then grab the data they need, and payment is handled seamlessly through blockchain smart contracts.

Cisco upgrade enables SD-WAN in 1M+ ISR/ASR routers

Cisco is moving rapidly toward its ultimate goal of making SD-WAN features ubiquitous across its communication products, promising to boost the network performance and reliability of distributed branches and cloud services. The company this week took a giant step in that direction by adding Viptela SD-WAN technology to the IOS XE software that runs its core ISR/ASR routers. More than a million ISR/ASR edge routers, such as the ISR 1000 and 4000 and the ASR 1000 models, are in use by organizations worldwide.

Intel ends the Xeon Phi product line

You can scratch the Xeon Phi off your shopping list, and if you deployed it, don't plan on upgrades. Intel has quietly killed off its high-performance computing co-processor because forthcoming Xeon chips will have all the features of the Phi, with no separate chip or add-in card needed. The end came on July 23 with a "Product Change Notification" containing product discontinuance/end-of-life information for the entire Knights Landing line of Xeon Phis. The last order date for the Xeon Phi is Aug. 31, 2018, and orders are non-cancelable and non-returnable after that date. The final shipment date is set for July 19, 2019.

Network World: Edge, intent-based networking are all the rage; IT networking budgets rise

As distributed resources from wired, wireless, cloud and Internet of Things networks grow, the need for a more intelligent network edge grows with them. Network World's 8th annual State of the Network survey shows the rising importance of edge networking, finding that 56% of respondents have plans for edge computing in their organizations. Typically, edge networking entails sending data to a local device that includes compute, storage and network connectivity in a small form factor. Data is processed at the edge, and all or a portion of it is sent to the central processing or storage repository in a corporate data center or an infrastructure-as-a-service (IaaS) cloud, as in the sketch below.
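The pattern the survey describes, process locally and forward only a summary, is simple to sketch. A minimal, purely illustrative example in Python (the endpoint URL, names and sample values are ours, not from any vendor):

    import json
    import statistics
    from urllib import request

    CENTRAL_INGEST_URL = "https://example.com/ingest"  # hypothetical endpoint

    def summarize_at_edge(readings: list) -> dict:
        """Reduce raw sensor samples to a compact summary before uplink."""
        return {
            "count": len(readings),
            "mean": statistics.fmean(readings),
            "max": max(readings),
        }

    def forward(summary: dict) -> None:
        """Send only the summary upstream instead of every raw sample."""
        body = json.dumps(summary).encode()
        req = request.Request(CENTRAL_INGEST_URL, data=body,
                              headers={"Content-Type": "application/json"})
        request.urlopen(req)  # fire-and-forget for the sketch

    readings = [21.0, 21.4, 35.9, 21.1]   # raw samples collected locally
    forward(summarize_at_edge(readings))  # one small payload goes upstream

The point of the pattern is bandwidth and latency: four floats become one small JSON object, and only that object crosses the WAN to the data center or IaaS cloud.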

Computing should be based on light, not electricity, scientists say

Light-carrying miniature wires are potentially more efficient interconnects for computing than alternatives, including copper and larger optical systems, say experts. However, there has been a problem in getting such a nanowire system to work, the University of North Carolina at Chapel Hill explains in an article published on Science Daily. "There hasn't been a controlled method for selectively sending light down along nanoscale wires," says James Cahoon, a professor in the College of Arts and Sciences. "Optical technology has either used much larger structures or wasted a lot of light in the process." Wasted light matters because generating it consumes power, which defeats the purpose, for one thing.

Google, Cisco amp-up enterprise cloud integration

The joint Google and Cisco Kubernetes platform for enterprise customers should appear before the end of the year, and relations between the two companies are warming ahead of that highly anticipated release. Cisco and Google teamed up last October to develop a Kubernetes hybrid-cloud offering. Kubernetes, originally designed by Google, is an open-source system for deploying and orchestrating containerized applications. The Cisco/Google combination, currently being tested by early-access enterprise customers, according to Google, will let IT managers and application developers use Cisco tools to manage their on-premises environments and link them to Google's public IaaS cloud, which offers orchestration, security and ties to a vast developer community.
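For readers unfamiliar with what Kubernetes orchestration looks like in practice, here is a minimal sketch using the official Python client (pip install kubernetes). The names and image are placeholders; this shows the general API style, not the Cisco/Google product itself:

    from kubernetes import client, config

    config.load_kube_config()  # reads ~/.kube/config, on-prem or cloud alike

    # Declare three replicas of a containerized app; Kubernetes keeps them running.
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="demo-app"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "demo"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "demo"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(name="demo", image="nginx:1.25"),
                ]),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default",
                                                    body=deployment)

The declarative model is what makes the hybrid pitch plausible: the same deployment description can, in principle, target an on-premises cluster or a public-cloud one.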

IDG Contributor Network: Are you winning the data intelligence game?

When big data was hyped as the next technology set to transform the business world, many organizations began to collect as much of it as they could lay their hands on. Data centers proliferated as companies sucked in data points from customer interactions, supply-chain partners, mobile devices and many, many other sources. It looked as though enterprises had jumped on board with the idea of big data, but what they were actually doing was hoarding information. Very few had any idea how to unlock the insights contained within. Businesses that saw the value and pioneered analytics are beginning to see a major return on their investment. In a global, cross-industry McKinsey survey of 530 C-level execs and senior managers, almost half said that data and analytics have significantly or fundamentally changed business practices in their sales and marketing functions, and more than a third said the same about R&D. Big data can have a major beneficial impact, but realizing those potential benefits requires a winning strategy.

Cloud computing just had another kick-ass quarter

If you've been around the tech industry long enough, recent market events carried an eerie familiarity. When Facebook badly missed its numbers for the quarter ended June 30, 2018, the company's stock took an unprecedented pummeling, losing 20 percent of its value and dragging many other tech stocks down with it. Watching the carnage, it was hard not to think back to the spring of 2000, when Microsoft lost its antitrust case, shedding 15 percent of its value in a single day and signaling the end of the dot-com boom and the beginning of a historic bust. But something is different this time.

Google finally throws some weight behind on-premises services

One of the early knocks on Google's cloud services was that they assumed a pure cloud play for every customer, with virtually nothing to support on-premises systems. While that might work for smaller businesses looking to shut down their data centers and move to the cloud, those customers were in the minority. At this week's Google Cloud Next '18 show, Google reversed course and acknowledged the on-premises market with the announcement of the Cloud Services Platform, an integrated suite of cloud services designed for organizations with workloads that are staying on premises.

Lenovo gets into the on-premises cloud game with ThinkAgile CP

Lenovo has launched a new product line called ThinkAgile CP that combines Lenovo ThinkSystem hardware with Cloudistics software into what it calls a "composable cloud," or cloud-in-a-box, in which the multi-tenancy attributes of the cloud are available to organizations behind their own firewalls. Basically, it's a hyperconverged system preconfigured to work right out of the box and operate inside a data center much as a cloud service provider would. Compute, storage and networking are designed to connect to the ThinkAgile CP Cloud Controller, which in turn lets an IT administrator spin up multi-tenant provisioning. Software-defined compute, storage and networking can be set up in just a few clicks.

How edge networking and IoT will reshape data centers

The Internet as we have all known it mirrors the design of old mainframes with dumb terminals: The data path is almost entirely geared toward data coming down the network from a central location. It doesn't matter if it's your iPhone or a green-text terminal; the fast pipe has always been down, with relatively little data sent up.

Multi-cloud monitoring keeps Q2 integrated operations center humming

Five years ago, Q2 had 240 servers. Today it has 8,500. The company spent $150 million over the last five years building out its infrastructure, where it now hosts more than 4 petabytes of user data. "We've grown from 1.2 million users to 11.5 million users and reduced downtime to one-fifth of what it was during that same period," says Lou Senko, CIO of Q2, which provides a digital banking platform for banks and credit unions. Headquartered in Austin, Texas, Q2 aims its cloud-based platform at helping smaller, community-based financial institutions compete with giants such as Bank of America, Wells Fargo and Citigroup. "Local financial institutions have to compete against some big, big players," Senko says. "It's our technology that levels the playing field in the digital world."

IDG Contributor Network: Communications hubs emerge as a bridge to hybrid IT

Adoption of hybrid IT for delivery of applications across legacy enterprise data centers, and increasingly across cloud SaaS and IaaS platforms, is rendering traditional network architectures obsolete. Numerous analysts and articles have predicted the coming obsolescence of hub-and-spoke MPLS networks anchored on legacy enterprise data centers. While few have detailed what to do about it, a growing number of enterprises are taking matters into their own hands. Those in the know are leveraging communication hubs, sometimes also referred to as cloud hubs, to bridge the gap between their legacy data center environments and the cloud.

The growing challenge of SaaS application performance

As enterprises accelerate their move to the cloud, including the growing trend toward cloud office suites such as Office 365 and G Suite, where users expect LAN-like performance, challenges are mounting. According to Microsoft, Office 365 is growing at 43 percent and as of the end of 2017 boasted 120 million active users. A 2017 survey by TechValidate noted that despite increasing both firewall and network bandwidth capacity, nearly 70 percent of companies experienced weekly network-related performance issues after deploying Office 365. Gartner's 2018 Strategic Roadmap for Networking, released earlier this year, noted that nearly all enterprises …

Why NVMe? Users weigh benefits of NVMe-accelerated flash storage

IBM has an answer for some of the biggest trends in enterprise data storage, including Non-Volatile Memory Express (NVMe), artificial intelligence, multi-cloud environments and containers, and it comes in a 2U package. The new FlashSystem 9100 is an all-flash, NVMe-accelerated storage platform. It delivers up to 2 petabytes of effective storage in 2U and can provide up to 32 petabytes of all-flash storage in a 42U rack. NVMe is a protocol for accessing high-speed storage media that is designed to reduce latency and increase system and application performance. It is optimized for all-flash storage systems and aimed at enterprise workloads that require low latency and top performance, such as real-time data analytics and high-performance relational databases.
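The rack-level figure is consistent with simply stacking the 2U building block (our arithmetic, on the assumption that the remaining rack units hold switching and other gear):

\[
16 \times 2\,\mathrm{U} = 32\,\mathrm{U} \le 42\,\mathrm{U}, \qquad 16 \times 2\,\mathrm{PB} = 32\,\mathrm{PB}
\]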
