Omer asked a pretty common question about BFD on one of my blog posts (slightly reworded):
Would you still use BFD even if you have a direct router-to-router physical link without L2 transport in the middle to detect if there is some kind of software failure on the other side?
Sander Steffann quickly replied:
Read more ...
At the outset, a 1,200-word article about saving configuration sounds strange. It would perhaps be perfectly normal if the topic were Vi rather than Ansible, but there's a reason for it, and it's simply speed and idempotency. Saving the configuration the "wrong" way can take quite a lot of time, and one reason for network automation is to accomplish tasks faster and to constantly look for ways to improve your processes. This article assumes that you are running Ansible 2.4, but it should work in a similar way regardless.
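As a minimal sketch of the idea (assuming Ansible 2.4's ios_config module; the NTP server line is just a placeholder), saving only when the configuration was actually modified avoids a slow, unconditional write on every run:

```yaml
# Hypothetical task: save the running config only when it was modified,
# instead of unconditionally on every play (slow and non-idempotent).
- name: Push changes and save conditionally
  ios_config:
    lines:
      - ntp server 192.0.2.1
    save_when: modified
```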
Continue reading
The most important part of writing quality software is testing. Writing unit tests provides assurance that the changes you're making aren't going to break anything in your software application. Sounds pretty great, right? Why is it that in networking operations we're still mainly using ping, traceroute, and human verification for network validation and testing?
I’ve written in the past that deploying configurations faster, or more generally, configuration management, is just one small piece of what network automation is. A major component much less talked about is automated testing. Automated testing starts with data collection and quickly evolves to include verification. It’s quite a simple idea, and one that we recommend as the best place to start with automation, as it’s much less risky than deploying configurations faster.
In our example, the network is the application, and unit tests need to be written to verify that our application (as network operators) has valid configurations before each change is implemented, while integration tests are needed to ensure our application is operating as expected after each change.
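As a hedged sketch of what a pre-change "unit test" might look like (the file name, hostname, and required statements are hypothetical), a script can assert that a candidate configuration contains required safety lines before anything is pushed:

```shell
# Build a toy candidate config (contents are made up for illustration).
cat > candidate.cfg <<'EOF'
hostname edge-router-1
service password-encryption
no ip http server
EOF

# Pre-change "unit test": every required statement must be present,
# or the change is rejected before it ever reaches a device.
for line in 'service password-encryption' 'no ip http server'; do
  grep -qF "$line" candidate.cfg || { echo "FAIL: missing '$line'"; exit 1; }
done
echo "PASS: candidate config contains all required statements"
```

A post-change "integration test" would follow the same pattern, asserting on collected operational state (neighbors up, routes present) rather than on the config text.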
If you choose to go down the DIY path for network automation, which could involve using an open source Continue reading
The Internet Society (Aftab Siddiqui) and APNIC (Tashi Phuntsho) jointly conducted a Network Security Workshop in Port Moresby, Papua New Guinea (PNG) on 3-5 October 2017. This was arranged for current and potential members of the first neutral Internet Exchange Point (IX) in the country called PNG-IX, at the request of NICTA – the National Information and Communications Technology Authority – a government agency responsible for the regulation and licensing of Information Communication Technology (ICT) in Papua New Guinea. NICTA is also a key partner in establishing the Internet Exchange in PNG.
The first half of Day 1 (3 October) was dedicated to PNG-IX awareness: the role of an IX, how it works, why an IX has been established in PNG, and why everyone should peer in order to achieve both short- and long-term benefits for the local Internet ecosystem. NICTA CEO Charles Punaha, NICTA Director Kila Gulo Vui, and APNIC Development Director Che-Hoo Cheng shared their views.
There were more than 40 participants in the Network Security workshop, with diverse backgrounds ranging from enterprise environments, state universities, and financial institutions to telcos and ISPs. The participants completed lab work and learned about important security topics such as Continue reading
There are several good nuggets in this TED Talk. I particularly like the emphasis on local communities and the idea of a business-plan contest for high school students.
Innovation: Five Steps to Get Your Local Economy Back to the Future
Share your crazy ideas for building local economies and enabling one another by commenting below.
Disclaimer: This article includes the independent thoughts, opinions, commentary or technical detail of Paul Stewart. This may or may not reflect the position of past, present, or future employers.
Readers of this article may also enjoy:
Because of the Infineon Disaster of 2017 lots of TPM and Yubikey keys have to be regenerated.
I have previously blogged about how to create these keys inside the yubikey, so here’s just the short version of how to redo it by generating the key in software and importing it into the yubikey.
When it appears to stall, that’s when it’s waiting for a touch.
openssl genrsa -out key.pem 2048
openssl rsa -in key.pem -outform PEM -pubout -out public.pem
yubico-piv-tool -s 9a -a import-key --touch-policy=always -i key.pem
yubico-piv-tool -a verify-pin -a selfsign-certificate -s 9a -S '/CN=my SSH key/' -i public.pem -o cert.pem
yubico-piv-tool -a import-certificate -s 9a -i cert.pem
rm key.pem public.pem cert.pem
ssh-keygen -D /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so -e
Delete all mentions of previous key. It’s good to have a disaster plan ahead of time if keys need to be replaced, but if you don’t have one:
~/.ssh/authorized_keys
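One hedged way to do that cleanup (the 'old-yubikey' comment string and demo file name are hypothetical; point the variable at ~/.ssh/authorized_keys for real use, and keep the backup until you have verified access with the new key):

```shell
# Demo on a scratch copy so nothing real is touched.
AK=authorized_keys.demo
printf '%s\n' \
  'ssh-rsa AAAAB3...old old-yubikey' \
  'ssh-rsa AAAAB3...new new-yubikey' > "$AK"

cp "$AK" "$AK.bak"                        # backup until access is verified
grep -v 'old-yubikey' "$AK.bak" > "$AK"   # drop entries for the old key
cat "$AK"
```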
Between 11:09 and 11:27 UTC, traffic for many large CDNs was rerouted through Brazil. Below is an example for the Internet's most famous prefix, 8.8.8.0/24 (Google DNS). Some Google services seem to have been hijacked for roughly 15 minutes. Seen anything? @atoonk @bgpmon @bgpstream
— Fusl Neko Shy Dash (@OhNoItsFusl) October 21, 2017
MTR: https://t.co/RyCoE7zMld pic.twitter.com/DCT2JpKgsc
I was out at Gartner Catalyst in London in September, speaking to IT professionals about their data center deployments. It was an enjoyable time engaging actively with other like-minded technical individuals that were interested in leveraging the boundaries of their technologies to drive greater business efficiencies and competitiveness.
The common theme across all the attendees I spoke to was the urge for containerization, flexibility of design, and rapid deployment. These IT professionals were being tasked with reacting faster and building more rapidly scalable environments. For their server and application needs, they had all turned to open solutions in Linux, leveraging operating systems such as Red Hat Enterprise Linux, CentOS, and Ubuntu, and orchestration tools such as Mesos and Docker Swarm to control Docker containers. The common point I saw was that all the compute infrastructure relied on open solutions that allowed for greater simplicity without sacrificing flexibility.
I would then ask these same IT professionals: “What do you use for network infrastructure in these data centers?”
Universally, the response would come back: “Cisco” or “Arista” or “Juniper.”
I would push them: “Why?”
“Because it’s what we’ve always done.”
“It’s all we know.”
“No one ever Continue reading
For the last year I have been working a lot with IWAN which is Cisco’s SD-WAN implementation (before Viptela acquisition).
One of the important aspects of SD-WAN is to be able to load balance the traffic. Load balancing traffic is not trivial in all situations though. Why not?
If you have a site with two MPLS circuits or two Internet circuits and they both have the same amount of bandwidth, then things are simple, or at least relatively simple. Let’s say that you have a site with two 100 Mbit/s Internet circuits. This means that we can do equal-cost multipathing (ECMP). Whether a flow ends up on link A or link B doesn’t matter: the flow will have an equal chance of using as much bandwidth as it needs on either link. Even in the ECMP case, though, there are still some things we need to consider.
The size of flows – Some flows are much larger than others, such as file transfers over CIFS or other protocols, or downloads from the Internet, versus something like Citrix traffic, which generally consists of smaller packets and doesn’t consume a lot of bandwidth.
The number Continue reading
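The per-flow hashing idea behind ECMP can be sketched like this (purely illustrative; the 5-tuple values are made up, and real routers hash in hardware):

```shell
# Hash a flow's 5-tuple onto one of two equal-cost links.
# Every packet of the same flow yields the same hash, so the
# flow is pinned to one link and never reordered across both.
flow='10.0.0.1 192.0.2.5 51515 443 tcp'
hash=$(printf '%s' "$flow" | cksum | cut -d' ' -f1)
link=$((hash % 2))
echo "flow pinned to link $link"
```

This is also why a single large flow cannot use more than one link's worth of bandwidth: the hash never spreads one flow across both circuits.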
Containers are expected to see an adoption surge next year.
It’s only four days since we were blessed with news of the KRACK vulnerability in WPA2, so what have we learned now that we’ve had some time to dig into the problem?
In terms of patching wireless access points the good news is that most of the enterprise vendors at least are on the ball and have either released patches, have them in testing, or have at least promised them in the near future. While one of the primary victims of KRACK in these devices is 802.11r (Fast Roaming) which is not likely to be used in most home environments, it’s more common to see repeater or mesh functionality in the home, and because the AP acts as a wireless client in these cases, it is susceptible to the vulnerability. So if you just have a single AP in the home, chances are that updating the firmware because of KRACK is not that urgent. That’s probably a good thing given the number of wireless access points embedded in routers managed by internet providers, running on old and unsupported hardware, or created by vendors who are no longer in business.
The clients are where Continue reading
Cloud security threats are "moving up the stack."
GE partners with Apple on IoT; Intel invests $60 million in 15 technology startups; Alibaba works with Red Hat.
Company is accelerating its $10 billion cost restructuring plan.