What is coherent optics? At its most basic, coherent optical transmission is a technique that modulates the amplitude and phase of light, and transmits across two polarizations, to carry considerably more information through a fiber optic cable. Using digital signal processing at both the transmitter and receiver, coherent optics also offers higher bit rates, greater flexibility, simpler photonic line systems, and better optical performance.

It’s a web-scale world. On-demand content, bandwidth-hungry mobile apps, high-definition video streaming, and new cloud-based IT applications are driving massive scale and unpredictable traffic patterns. Network capacities are increasing by 25% to 50% every year, and systems running at 10 Gb/s simply cannot keep up with that rate of growth.
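To put that growth claim in rough numbers, here is a minimal Python sketch of compound traffic growth; the starting demand, link sizes and growth rates are illustrative assumptions, not figures from the article.

```python
# Illustrative compound-growth arithmetic: demand growing 25%-50% per year
# quickly outruns a fixed-rate link. All numbers below are assumptions.
def years_until_exceeded(start_gbps: float, link_gbps: float, growth: float) -> int:
    """Count full years until demand growing at `growth` per year exceeds the link rate."""
    demand, years = start_gbps, 0
    while demand <= link_gbps:
        demand *= 1 + growth
        years += 1
    return years

# Assume demand fills a 10 Gb/s link today and ask when even 100 Gb/s is exceeded.
for growth in (0.25, 0.50):
    print(f"{growth:.0%} annual growth: 100 Gb/s exceeded in "
          f"{years_until_exceeded(10, 100, growth)} years")
```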
We live in an exciting era for IT. Countless new technologies are changing how networks are built, how access is provided, how data is transmitted and stored, and much more. Cloud, IoT, edge computing and machine learning all offer unique opportunities for organizations to digitally transform the way they conduct business. Different as these technologies are, they are unified by their dependence on a properly functioning network, on what might be called “network continuity.” The key component for achieving network continuity is visibility.

It’s no secret that new and emerging technologies have always driven networking best practices. With such a wide range of business objectives and activities relying on IT, network performance really is a life-or-death issue for most companies. So it’s critical that we maintain a firm grasp on the latest industry trends in order to make informed, strategic network management decisions.
Looking to seriously amplify the use of fog computing, the IEEE has defined a standard that will lay the official groundwork to ensure that devices, sensors, monitors and services are interoperable and will work together to process the seemingly boundless data streams that will come from IoT, 5G and artificial intelligence systems.

The standard, known as IEEE 1934, was largely developed over the past two years by the OpenFog Consortium, which includes Arm, Cisco, Dell, Intel, Microsoft and Princeton University.
If you think you know the problems facing the Internet of Things (IoT), a new Deloitte report, Five vectors of progress in the Internet of Things, offers a great chance to check your assumptions against the IoT experts.

Despite the fancy-pants “vectors of progress” language, the report’s authors — David Schatsky, Jonathan Camhi, and Sourabh Bumb — basically lay out the IoT’s chief technical challenges and then look at what’s being done to address them. Some of the five are relatively well-known, but others may surprise you.
An increasing number of businesses are moving their data to the cloud to take advantage of the cost, scalability and efficiency benefits associated with not having to procure or maintain significant amounts of hardware. And indeed, cloud data storage can certainly help organizations achieve superior ROI; however, oftentimes when choosing a cloud-only file system such as Box or Dropbox, these organizations encounter significant problems – some of which can actually outweigh the benefits. These problems include:
Latency. Due to inherent limitations in cloud protocols, accessing files from the cloud is rife with latency, particularly when accessing large files or a large number of files simultaneously (see the timing sketch after this list).
Active Directory access permission control. The permission schemes for cloud-based file systems are often different from those in your on-premises environment, causing Active Directory permissions to become an issue at both the user and administrator levels.
User interface. Losing the familiar file server interface, especially the mapped drive letter for a network share, forces users to learn an entirely new user interface. In addition to the added stress, it can reduce user efficiency in the short term.
Shadow IT. Since the files are no longer located within the company’s infrastructure, IT managers lose visibility into and control over how that data is stored and shared.
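The latency point above is easy to check empirically. Below is a minimal, hedged Python sketch that times an end-to-end read of the same file from a local disk and from a cloud-backed mount; both paths are placeholders for illustration only.

```python
import time

# Placeholder paths: one file on local disk, one on a cloud-backed mount.
LOCAL_PATH = "/var/data/report.bin"
CLOUD_PATH = "/mnt/cloud-share/report.bin"

def timed_read(path: str) -> float:
    """Return the seconds taken to read a file end to end in 1 MiB chunks."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass
    return time.perf_counter() - start

for label, path in (("local", LOCAL_PATH), ("cloud", CLOUD_PATH)):
    try:
        print(f"{label}: {timed_read(path):.3f} s")
    except OSError as err:
        print(f"{label}: skipped ({err})")
```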
Quick to install, safe to run, easy to update, and dramatically easier to maintain and support, snaps represent a big step forward in Linux software development and distribution. Starting with Ubuntu and now available for Arch Linux, Debian, Fedora, Gentoo Linux and openSUSE, snaps offer a number of significant advantages over traditional application packaging. Compared to traditional packages, snaps are:
easier for developers to build
faster to install
automatically updated
autonomous
isolated from other apps
more secure
non-disruptive (they don't interfere with other applications)
So what are snaps?
Snaps were originally designed and built by Canonical for use on Ubuntu. The service might be referred to as “snappy”, the technology “snapcraft”, the daemon “snapd” and the packages “snaps”, but they all refer to a new way that Linux apps are prepared and installed. Does the name “snap” imply some simplification of the development and installation process? You bet it does!
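For a quick feel of the tooling named above, here is a minimal sketch that shells out to the snap CLI (installed with snapd) to list what is on a system and inspect one package; it assumes a snap-enabled host and uses only the basic snap list and snap info commands.

```python
import shutil
import subprocess

# A rough sketch, assuming a host with snapd installed and the 'snap' CLI on PATH.
if shutil.which("snap") is None:
    raise SystemExit("the 'snap' command (snapd) is not available on this system")

# List installed snaps: name, version, revision, tracking channel, publisher.
listing = subprocess.run(["snap", "list"], capture_output=True, text=True)
print(listing.stdout)

# Show metadata for a single snap; 'core' is present on most snap-enabled systems.
info = subprocess.run(["snap", "info", "core"], capture_output=True, text=True)
print(info.stdout)
```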
After years of bouncing from one company to another, the Lustre file system that is so popular in high-performance computing (HPC) has been sold yet again. This time it went to an enterprise storage vendor. Finally, it’s in the hands of a company that makes sense.

DataDirect Networks (DDN) announced it purchased the Lustre File System from Intel this week as Intel looks to pare down non-essential products. DDN got all assets and the Lustre development team, who are undoubtedly relieved. The announcement was made at the International Supercomputing Conference (ISC) in Frankfurt, Germany.
Lustre's history
Lustre (which is a portmanteau of Linux and cluster) is a parallel distributed file system that supports multiple computer clusters with thousands of nodes. It started out as an academic research project and was later acquired by Sun Microsystems, which was in turn acquired by Oracle.
Google is adding to its cloud storage portfolio with the debut of a network attached storage (NAS) service.

Google Cloud Filestore is managed file storage for applications that require a file system interface and a shared file system for data. It lets users stand up managed NAS with their Google Compute Engine and Kubernetes Engine instances, promising high throughput, low latency and high IOPS.
The managed NAS option brings file storage capabilities to Google Cloud Platform for the first time. Google’s cloud storage portfolio already includes Persistent Disk, a network-attached block storage service, and Google Cloud Storage, a distributed system for object storage. Cloud Filestore fills the need for file workloads, says Dominic Preuss, director of product management at Google Cloud.
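To make the file-versus-object distinction concrete, here is a minimal Python sketch contrasting a plain POSIX read from an NFS-mounted Filestore share with an object download through the Cloud Storage client library; the mount point, bucket and object names are placeholder assumptions.

```python
# A sketch under assumptions: the Filestore share is already mounted at
# /mnt/filestore/share1, and the bucket/object names below are made up.
from google.cloud import storage  # pip install google-cloud-storage

# File storage (Cloud Filestore): ordinary file-system I/O, so existing code
# that expects a POSIX path keeps working unchanged.
with open("/mnt/filestore/share1/data.csv", "rb") as f:
    file_bytes = f.read()

# Object storage (Google Cloud Storage): access goes through the object API.
client = storage.Client()
blob = client.bucket("example-bucket").blob("data.csv")
object_bytes = blob.download_as_bytes()

print(len(file_bytes), len(object_bytes))
```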
Frankly (no pun intended), I have to admit that I’m growing increasingly frustrated with certain trends in networking.

For example, it’s not that I don’t like the dream or idea of software-defined networking (SDN) — it’s not that I don’t think it’s superior to the older way of setting up or monitoring a network. It’s just that I’m becoming increasingly concerned that small- to medium-size enterprises (SMEs) won’t be able to keep up. And the media that follows this trend isn’t really bringing to light the extreme cost of some of these systems.

Pricewise, many of the product lines are intended for large networks. There’s no way that a smaller company could even begin to afford them. For example, one trainer told me that a certain SDN product was scaled to start at 500-site deployments!
Data is essential to the smooth operation of any organization. Whether it’s data on your products, customers, or competition, you need it to do business. Your software and systems are dependent on the data that’s fed into them.

Big data may be gathered by IoT sensors in vehicles and buildings, by smartphones, and from countless other data points to inform big decisions. But at a granular level you also need small pieces of data to function. Without credentials you can’t gain access to the big data, contact suppliers, or even tweak the air conditioning system.

Our dependence on data is profound; you might say it’s your business DNA, because it’s crucial for survival and growth.
Any organization that has even a modest level of IT infrastructure does IT service monitoring (ITSM) to ensure that everything is operating within performance mandates codified in service level agreements (SLAs). If the IT organization is meeting its SLAs, it’s assumed that the experience the employee has interacting with this infrastructure is good. But that isn’t always the case, and the IT group might not even be aware.

For instance, an enterprise application might be working just fine for most users, but for one or a few users in particular, it could be especially slow. Unless those people call the help desk to complain, who would ever know that they are suffering? Sometimes people just accept that some aspect of IT functions poorly, and they carry on the best they can, even if it affects their productivity.
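Here is a tiny worked example of how an aggregate SLA can mask exactly that situation; the response times and the 3-second threshold below are made-up illustrative figures.

```python
from statistics import mean

# Made-up response times (seconds) per user for the same application.
samples = {
    "alice": [0.4, 0.5, 0.4],
    "bob":   [0.5, 0.4, 0.6],
    "carol": [6.2, 5.8, 7.1],   # one user quietly suffering
}

SLA_SECONDS = 3.0  # assumed aggregate SLA target

overall = mean(t for times in samples.values() for t in times)
print(f"fleet average: {overall:.2f} s "
      f"({'meets' if overall <= SLA_SECONDS else 'misses'} the {SLA_SECONDS:.0f} s SLA)")

# The per-user view exposes the outlier the aggregate hides.
for user, times in samples.items():
    print(f"{user}: {mean(times):.2f} s")
```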
Coinciding with 3GPP’s sign-off this month on global standards for the as-yet-unlaunched 5G radio technology, we get news of initial development plans for faster 6G wireless. The Center for Converged TeraHertz Communications and Sensing (ComSenTer) says it’s investigating new radio technologies that will make up 6G.

One hundred gigabits-per-second speeds will be streamed to 6G users with very low latency, the group says on its website.
Ciena
Helen Xenos, Senior Director, Portfolio Marketing
The era of 400G was in full force at OFC (The Optical Fiber Communication Conference and Exhibition), and we took some time to celebrate this milestone. Here’s what our customers have to say about their 400G success – and what it means for the industry.

If you needed another leading indicator that 400G is the next big thing in optical networks, look no further than the OFC ’18 conference recently held in San Diego. The show was abuzz with vendor plans for new technologies that squeeze more bandwidth than ever down an optical channel. These 400G-capable coherent solutions offer better system performance and tunable capacity from 100G to 400G per wavelength.
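For a rough sense of what tunable 100G-400G wavelengths mean for total fiber capacity, here is a small Python sketch; the 64- and 96-channel counts are illustrative assumptions, not figures from the article or any vendor.

```python
# Back-of-the-envelope fiber capacity: channels x per-wavelength rate.
# Channel counts below are assumptions for illustration only.
def fiber_capacity_tbps(channels: int, gbps_per_wavelength: int) -> float:
    return channels * gbps_per_wavelength / 1000.0

for rate in (100, 200, 400):            # tunable per-wavelength line rates (Gb/s)
    for channels in (64, 96):
        print(f"{channels} channels x {rate}G = "
              f"{fiber_capacity_tbps(channels, rate):.1f} Tb/s per fiber")
```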
Like any industry, networking has a proprietary slew of acronyms and jargon that only insiders understand. Look no further than Network World’s searchable glossary of wireless terms.

Turns out, multiplexing has nothing to do with going to the movies at a place with more than one theater.

I also like to think that each networker has their own favorite list of terms, ready to share at a moment’s notice during family dinners, holidays and networking events … or maybe that’s just me?
The data-center network is a critical component of enterprise IT’s strategy to create private and hybrid-cloud architectures. It is software that must deliver improved automation, agility, security and analytics to the data-center network. It should allow for the seamless integration of enterprise-owned applications with public cloud services. Over time, leading-edge software will enable the migration to intent-based data-center networks with full automation and rapid remediation of application-performance issues.
The topic of SD-WAN has been a hot one over the past several years. This makes sense because in most companies, the WAN hasn’t been updated for decades, and SD-WANs have the potential to modernize the network and bring it into alignment with the rest of IT.

However, like most new technologies, I find there are a number of common misconceptions when it comes to SD-WANs. Part of the problem is that the vendor ecosystem has exploded, and the many vendors that approach the market from different angles muddy the waters — making it hard to discern what’s real, what’s misleading, and what’s downright wrong.
The top SD-WAN myths
To help buyers make sense of what's happening in the SD-WAN world, here are seven myths to watch out for — and why they aren't correct.
Data sprawl is a problem. Most companies, regardless of industry or size, are trying to balance their need to store increasing volumes of data with the associated costs to their infrastructure.

IDC expects continued growth in data, with an estimate that the world will generate 163 zettabytes by 2025. The massive build-up is being driven by technologies including machine learning, AI, IoT, video streaming, and augmented and virtual reality. Add digital transformation efforts into the mix and the requirements for data storage become even greater.

The straightforward answer to the data sprawl problem is to add capacity. But that option is often made untenable by variables such as costs, next-generation workloads, increased amounts of unstructured data, and growth of the business and its locations.
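To see why “just add capacity” becomes untenable, here is a rough Python cost sketch; the starting footprint, 40% annual growth rate and per-terabyte price are all illustrative assumptions, not figures from the article.

```python
# Rough cost model for "just add capacity"; every number here is an assumption.
def annual_cost(tb: float, price_per_tb_month: float) -> float:
    return tb * price_per_tb_month * 12

data_tb = 500.0          # assumed current footprint in terabytes
growth_per_year = 0.40   # assumed annual data growth
price = 20.0             # assumed price in dollars per TB per month

for year in range(1, 6):
    data_tb *= 1 + growth_per_year
    print(f"year {year}: {data_tb:,.0f} TB -> ${annual_cost(data_tb, price):,.0f} per year")
```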
Last week, AT&T said it would launch a Narrow Band-IoT (NB-IoT) network in the United States and Mexico. And this isn’t the first network dedicated to the Internet of Things that AT&T is working on. The carrier had previously announced an IoT network using the LTE-M standard to cover some 400 million people in the U.S. and Mexico by the end of last year.

Just as important, many other U.S. carriers also have various flavors of low-power IoT networks in the works, including Verizon, T-Mobile, Sprint, and even Dish Network.
The SD-WAN market is hot, with all of the usual networking suspects (Cisco, VMware, AT&T, Citrix, etc.) staking a claim. But make no mistake, this is a market sector that was built, defined, and refined by startups.

A few early movers have already been taken off the table, snatched up by incumbents seeking to modernize their networking portfolios: Cisco acquired Viptela for $610 million; VMware bought VeloCloud for an estimated $449 million; NTT purchased Virtela for $525 million; and Riverbed, which was a leader in the precursor WAN optimization space, acquired Ocedo (price undisclosed) to help it manage its transition to a software-defined future.