Archive

Category Archives for "Networking"

Cisco bets $660M on silicon-photonics firm Luxtera

Cisco says it is buying optical-semiconductor firm Luxtera for $660 million and will build its silicon photonics into future enterprise data-center, webscale, and service-provider networking gear. This photonic technology is essential to keep up with projected massive increases in IP traffic volume over the next four years, according to Cisco's networking chief. "Optics is a fundamental technology to enable this future. Coupled with our silicon and optics innovation, Luxtera will allow our customers to build the biggest, fastest and most efficient networks in the world," said David Goeckeler, executive vice president and general manager, Networking and Security Business at Cisco.

Open-source containers move toward high-performance computing

Open-source containers are moving in a direction that many of us never anticipated. Long recognized as providing an effective way to package applications with all of their required components, some are also tackling one of the most challenging areas in the compute world today: high-performance computing (HPC). And while containers can bring a new level of efficiency to the world of HPC, they're also presenting new ways of working for enterprise IT organizations that are running HPC-like jobs.

How containers work

Containers offer many advantages to organizations seeking to distribute applications. By incorporating an application's many dependencies (libraries, etc.) into self-sustainable images, they avoid a lot of installation problems. The differences in OS distributions have no impact, so separate versions of applications don't have to be prepared and maintained, making developers' work considerably easier.
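As a rough illustration of that packaging idea (not from the article), here is a minimal sketch using the Docker SDK for Python. The ./myapp directory, the image tag, and the Dockerfile are hypothetical placeholders; the assumption is that the Dockerfile installs the application and all of its libraries.

```python
# Minimal sketch: build a self-contained image and run it.
# Assumes a hypothetical ./myapp directory with a Dockerfile that
# installs the application and its dependencies.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Build an image that bundles the app together with its libraries.
image, _build_logs = client.images.build(path="./myapp", tag="myapp:latest")

# The same image can then run on any host with a container runtime,
# regardless of that host's OS distribution.
container = client.containers.run("myapp:latest", detach=True)
container.wait()
print(container.logs().decode())
```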

Space data backbone gets U.S. approval

Soon we may have a space-based optical backbone capable of transferring data 1.5 times faster than Earth-based terrestrial fiber, now that the Federal Communications Commission (FCC) has given LeoSat the go-ahead to start its build-out. Moving “large quantities of data quickly and securely around the world is fast outpacing the infrastructure in place to carry it,” says LeoSat in a press release announcing its FCC market-access grant last month. The upcoming LeoSat network will be “a backbone in space for global business,” the company says.

Listing TOP 5 Processes – Top command

Continuing my exploration of Pandas, I realized that in networking we often have to deal with top talkers. I don't have any networking-related top-talker IP data as such, but I wanted to see whether the same idea could be applied to my laptop's current processes: which ones are consuming the most CPU, and which processes appear most often.

Without dragging out the topic, here is what I did (a rough code sketch follows the steps):

-> Took the text file, which was space-delimited (top command output generally is).

-> Read it with Pandas read_fwf and then converted the file to CSV.

-> Read the CSV back, picked out the %CPU column, and sorted it in descending order.

-> Finally, took Counter from the collections module and applied it to the list of process names.
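Here is a minimal sketch of those steps, assuming the raw output of `top -b -n 1` was saved to a file named top.txt; the file name, the number of header lines to skip, and the column names are assumptions based on typical top output, not details from the original post.

```python
# Rough sketch of the steps above (file name and column names assumed).
from collections import Counter

import pandas as pd

# Steps 1-2: read the space-delimited top output as fixed-width fields,
# then write it back out as CSV. The first ~6 lines of `top -b -n 1`
# are summary lines, so skip them (the exact count can vary).
df = pd.read_fwf("top.txt", skiprows=6)
df.to_csv("top.csv", index=False)

# Step 3: read the CSV and sort by the %CPU column, descending.
procs = pd.read_csv("top.csv")
top5 = procs.sort_values("%CPU", ascending=False).head(5)
print(top5[["PID", "COMMAND", "%CPU"]])

# Step 4: count how often each process name appears.
counts = Counter(procs["COMMAND"].tolist())
print(counts.most_common(5))
```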

The output looks something like this:

Pandas is effective and easy to use. I will continue to explore more of its functions so I can build scripts for day-to-day activities.

 

-Rakesh

 

Automation, Big Data and AI

The final topic David Gee and Christoph Jaggi mentioned in their interview was big data and AI (see also: automated workflows, hygiene of network automation and network automation security):

Two other concurrent buzzwords are big data and artificial intelligence. Can they be helpful for automation?

Big Data can provide a rich pool of event-sourcing information and, as infrastructures get more complex, it’s essential that automation triggers are as accurate as possible.


Microsoft Storage Spaces Is Hot Garbage For Parity Storage

I love parity storage. Whether it's traditional RAID 5/6, erasure coding, raidz/raidz2, whatever. It gives you redundancy on your data without requiring double the drives that mirroring or mirroring+striping would require.

The drawback is that write performance is not as good as mirroring+striping, but for my purposes (lots of video files, cold storage, etc.) parity is perfect.

In my primary storage array, I use double redundancy on my parity, so effectively N+2. I can lose any 2 drives without losing any data.
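As a rough, made-up illustration of the capacity trade-off (the drive count and sizes below are hypothetical, not the author's actual array):

```python
# Toy comparison of usable capacity: two-way mirroring vs. N+2 parity.
# Hypothetical numbers, not the array described in this post.
drives, size_tb = 8, 5

raw = drives * size_tb
mirrored = raw / 2                   # two-way mirror: half the raw space
parity_n2 = (drives - 2) * size_tb   # N+2 parity: two drives' worth of parity

print(f"Raw capacity:   {raw} TB")
print(f"Two-way mirror: {mirrored} TB usable")
print(f"N+2 parity:     {parity_n2} TB usable, survives any 2 drive failures")
```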

I had a simple Storage Spaces mirror on my Windows 10 Pro desktop which consisted of (2) 5 TB drives using ReFS. This had four problems:

  • It was getting close to full
  • The drives were getting old
  • ReFS isn’t supported anymore on Windows 10 Pro (you need Windows 10 Pro for Workstations)
  • Dropbox (which I use extensively) is dropping support for ReFS-based file systems.

ReFS had some nice features such as checksumming (though for data checksumming, you had to turn it on), but given the type of data I store on it, the checksumming isn’t that important (longer-lived data is stored on Dropbox and/or my ZFS array). I do require Dropbox, so back to NTFS it is.


New chip techniques are needed for the new computing workloads

Over the next two to three years, we will see an explosion of new complex processors that not only do the general-purpose computing we commonly see today (scalar and vector/graphics processing), but also do a significant amount of matrix and spatial data analysis (e.g., augmented reality/virtual reality, visual response systems, artificial intelligence/machine learning, specialized signal processing, communications, autonomous sensors, etc.). In the past, we expected all newer-generation chips to add features/functions as they were being designed. But that approach is becoming problematic. As we scale Moore’s Law closer to the edge of physical possibility (from 10nm to 7nm, then 5nm), it becomes increasingly lengthy and costly to perfect the new processes. What was generally about 12 months between process-improvement steps is now closer to two years, and newer process factories can cost upwards of $10 billion.

Real World Serverless: Serverless Use Cases and Best Practices

Cloudflare Workers has had a very busy 2018. Throughout the year, Workers moved from beta to general availability, continued to expand its footprint as Cloudflare grew to 155 locations, and added new features and services to help developers create increasingly advanced applications.

To cap off 2018, we decided to hit the road (and then head to the airport) with our Real World Serverless event series in San Francisco, Austin, London, Singapore, Sydney, and Melbourne. It was a great time sharing the serverless application development insights we’ve discovered over the past year, as well as demonstrating how to build applications with new services like our key-value store, Cloudflare Workers KV.

Below is a recording from our Singapore Real World Serverless event. It included three talks about Serverless technology featuring Tim Obezuk, Stanley Tan, and Remy Guercio from Cloudflare. They spoke about the fundamentals of serverless technology, twelve factors of serverless application development, and achieving no ops at scale with network-based serverless.

If you’d like to join us in person to talk about serverless, we’ll be announcing 2019 event locations starting in the new year.

About the talks

Fundamentals of Serverless Technology - Tim Obezuk (0:00-13:56)

Tim explores the anatomy of ...