Juniper Nxtwork 2018 – Recap
The other week I had the privilege of attending Juniper Nxtwork 2018 in Las Vegas, NV. I have attended the …
The post Juniper Nxtwork 2018 – Recap appeared first on Fryguy's Blog.


It can be a big deal for Internet users when Cloudflare rolls into town. After our recent Mongolia launch, we received lots of feedback from happy customers that all of a sudden, Internet performance noticeably improved.
As a result, it's not surprising that we regularly receive requests from all over the world to either peer with our network or to host a node. However, potential partners are always keen to know just how much traffic will be served over that link. What performance benefits can end-users expect? How much upstream traffic will the ISP save? How much new bandwidth will they have available for traffic management?
Starting today, ISPs and hosting providers can request a login to the Cloudflare Peering Portal to find the answers to these questions. After validating ownership of your ASN, the Cloudflare network team will provide a login to the newly launched Peering Portal - Beta. You can find more information at: cloudflare.com/partners/peering-portal/
If you're new to the core infrastructure of the Internet, the best way to understand peering is to frame the problems it solves:
Would you recommend your SIP trunking provider to a friend? See which providers came out on top in the Eastern Management Group's customer satisfaction study that surveyed more than 3,000 IT managers.
This blog post was initially sent to subscribers of my mailing list. Subscribe here.
Following on his previous work with Cisco ACI, Dirk Feldhaus decided to create an Ansible playbook that creates and configures a new tenant and provisions a vSRX firewall for that tenant, while working on the Create Network Services hands-on exercise in the Building Network Automation Solutions online course.
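As a rough sketch of what the tenant-creation part of such a playbook might look like (the tenant name and credential variables here are invented for illustration; `aci_tenant` is Ansible's Cisco ACI tenant module, and the vSRX provisioning step would use the Junos modules):

```yaml
---
- name: Create and configure a new ACI tenant
  hosts: apic
  gather_facts: false
  tasks:
    - name: Ensure the tenant exists on the APIC
      aci_tenant:
        host: "{{ inventory_hostname }}"
        username: "{{ apic_user }}"       # hypothetical inventory variables
        password: "{{ apic_password }}"
        tenant: customer42                # hypothetical tenant name
        description: "Tenant provisioned by automation"
        state: present
```

A second play against the Junos hosts would then push the per-tenant vSRX configuration.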
Read more ...

LegoOS: a disseminated, distributed OS for hardware resource disaggregation Shan et al., OSDI’18
One of the interesting trends in hardware is the proliferation and growing importance of dedicated accelerators, as general-purpose CPUs have stopped benefitting from Moore’s law. At the same time, networking has been getting faster and faster, causing us to rethink some of the trade-offs between local I/O and network access. The monolithic server as the unit of packaging for collections of such devices is starting to look less attractive:

To fully support the Continue reading
When a process writes to a socket that has received an RST, the SIGPIPE signal is sent to the process. The default action of this signal is to terminate the process, so the process must catch the signal to avoid being involuntarily terminated.
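To make this concrete, here is a minimal sketch (using a Unix-domain socket pair rather than a TCP connection that has received an RST) showing how ignoring SIGPIPE turns the fatal signal into an EPIPE error the process can handle:

```python
import signal
import socket

# Ignore SIGPIPE so that writing to a dead peer returns an error
# (EPIPE, surfaced as BrokenPipeError) instead of killing the process.
# CPython already arranges this at startup; a C program must do it itself.
signal.signal(signal.SIGPIPE, signal.SIG_IGN)

a, b = socket.socketpair()
b.close()  # the peer goes away; further writes have no reader

try:
    a.send(b"hello")  # would deliver SIGPIPE if it were not ignored
except BrokenPipeError:
    print("caught EPIPE instead of being terminated")
```

The same effect can be achieved by catching SIGPIPE with a handler; ignoring it and checking the write's return value is the more common idiom.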
On a BGP-routed network with multiple redundant paths, we seek to achieve two goals concerning reliability:
A failure on a path should quickly bring down the related BGP sessions. A common expectation is to recover in less than a second by diverting the traffic to the remaining paths.
As long as a path is operational, the related BGP sessions should stay up, even under duress.
To quickly detect a failure, BGP can be associated with BFD, a protocol to detect faults in bidirectional paths,1 defined in RFC 5880 and RFC 5882. BFD can use very low timers, like 100 ms.
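To make the arithmetic concrete: a BFD session is declared down after `detect_mult` consecutive control packets are missed, so the detection time is the product of the packet interval and the multiplier. A small sketch, assuming the common multiplier of 3:

```python
# BFD detection time: a session is declared down after detect_mult
# consecutive control packets are missed (RFC 5880).
def bfd_detection_time_ms(interval_ms: int, detect_mult: int) -> int:
    return interval_ms * detect_mult

# With 100 ms timers and the usual multiplier of 3, a failed path is
# detected in roughly 300 ms, well under a one-second recovery target.
print(bfd_detection_time_ms(100, 3))
```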
However, when BFD runs in a process on top of a generic kernel,2 notably when running BGP on the host, it is not unexpected to lose a few BFD packets under adverse conditions: the daemon handling the BFD sessions may not get enough CPU to answer in a timely manner. In this scenario, it is not unlikely for all the BGP sessions to go down at the same time, creating an outage, as depicted in the last case in the diagram below.
Back in July of this year I introduced Polyglot, a project whose only purpose is to provide a means for me to learn more about software development and programming (areas where I am sorely lacking real knowledge). In the limited spare time I’ve had to work on Polyglot in the ensuing months, I’ve been building out an API specification using RAML, and in this post I’ll share how I use Docker and a Docker image to validate my RAML files.
Since I was (am) using Visual Studio Code as my primary text editor/development environment these days, I started out by looking for a RAML extension that would provide some sort of linting/validation functionality. I found an extension to do RAML syntax highlighting, which seemed like a reasonable first step.
After a bit more research, I found that there was a raml-cli NPM package that one could use to validate RAML files from the command line. I was a bit leery of installing an NPM package on my system, so I thought, “Why not use a Docker container for this?” It will keep my system clean of excess/unnecessary packages and dependencies, and it will provide some practice with Continue reading
consolidated posts from the VMware on VMware blog
Are you someone who prefers a blank sheet of paper or an empty text pad screen? Do you get the time to have that thought process to create the words, images, or code to fill that empty space? Yes to both — I’m impressed! Creating something from scratch is an absolutely magical feeling, especially once it gets to a point of sharing or usefulness. However, many of us spend a bit more of our time editing, building upon, or debugging. Fortunately, that can be pretty interesting as well.
In the case of setting up micro-segmentation with VMware NSX Data Center, you have a couple of options for quickly getting started:
Those resources and more are great jumping-off points, especially since you likely have more than just Informatica, Oracle, and SAP apps in your environments.
Now, should you have those Informatica, Oracle and SAP apps, then here’s the next level of details. I’m Continue reading
SDxCentral Weekly Wrap for October 19, 2018. Ericsson's third quarter results get overshadowed by an ongoing corruption scandal.
The platform allows enterprise customers to deploy and manage applications and services that reside in the carrier’s private cloud. It comes with a 99.9 percent SLA for uptime.
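For a sense of what a 99.9 percent uptime SLA actually permits, a quick back-of-the-envelope calculation (assuming a 30-day month) shows the downtime budget:

```python
# Downtime budget implied by an availability SLA, for a 30-day month.
def downtime_budget_minutes(availability: float, days: int = 30) -> float:
    return days * 24 * 60 * (1 - availability)

# 99.9% availability allows about 43.2 minutes of downtime per month.
print(round(downtime_budget_minutes(0.999), 1))
```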