U.S. Congressman Bashes ZTE and Huawei in New Bill
The bill would prohibit the U.S. government from buying from the companies.
Fox-IT recommends that IPv6 be disabled when it is not being used, as well as disabling Proxy Auto Detection. This of course means that Windows-based hosts will be unable to switch their preference to IPv6 when it becomes available (which all versions since Windows Vista do), and that IPv6 would need to be explicitly re-enabled on those hosts.
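That preference switch is easy to observe from a scripting language. Here is a minimal Python sketch (the hostname is only an illustration); on a dual-stack host the resolver typically returns IPv6 results first, per the RFC 6724 preference rules:

import socket

# getaddrinfo() returns (family, type, proto, canonname, sockaddr) tuples,
# usually sorted so that AF_INET6 comes first when IPv6 is usable.
for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 443,
                                                    proto=socket.IPPROTO_TCP):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])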
The article makes some important points, but IPv4 and IPv6 are fundamentally incompatible on the wire, and it needs to be understood that they cannot communicate with each other except through translation devices. There are a number of known issues (including this one) with the security of the automatic configuration mechanisms running on Local Area Networks, under both IPv6 and IPv4, but these require physical access to Continue reading
Here at Cloudflare, we have a lot of experience operating servers on the wild Internet. But we are always improving our mastery of this black art. On this very blog we have touched on multiple dark corners of the Internet protocols, like understanding FIN-WAIT-2 or receive buffer tuning.
CC BY 2.0 image by Isaí Moreno
One subject hasn't had enough attention though - SYN floods. We use Linux and it turns out that SYN packet handling in Linux is truly complex. In this post we'll shine some light on this subject.
First we must understand that each bound socket, in the "LISTENING" TCP state, has two separate queues:

1) The SYN Queue
2) The Accept Queue
In the literature these queues are often given other names such as "reqsk_queue", "ACK backlog", "listen backlog" or even "TCP backlog", but I'll stick to the names above to avoid confusion.
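The Accept Queue length is whatever the application passes to listen(). As a minimal sketch of how a listening socket and its Accept Queue come to exist in Python (the address and port below are arbitrary):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("0.0.0.0", 8080))

# The backlog argument sizes the Accept Queue for this socket;
# the kernel silently caps it at the net.core.somaxconn sysctl.
sock.listen(128)

# Fully established connections wait in the Accept Queue until the
# application dequeues them with accept().
conn, peer = sock.accept()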
The SYN Queue stores inbound SYN packets[1] (specifically: struct inet_request_sock). It's responsible for sending out SYN+ACK packets and retrying them on timeout. On Linux the number of retries is configured with:
$ sysctl net.ipv4.tcp_synack_retries
net.ipv4.tcp_synack_retries = 5
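To get a feel for what five retries mean in wall-clock time, here is a back-of-the-envelope sketch in Python. It assumes the usual initial SYN+ACK retransmission timeout of one second, doubling on each retry; the exact timing depends on the kernel:

retries = 5  # net.ipv4.tcp_synack_retries
# Assumed schedule: 1s initial timeout, doubling per retry.
timeouts = [2 ** n for n in range(retries + 1)]  # 1, 2, 4, 8, 16, 32
print("backoff intervals between SYN+ACKs:", timeouts[:-1])
print("total time before the entry is dropped: ~%ds" % sum(timeouts))  # ~63s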
One of my readers sent me this question:
Do you have any thoughts on this meltdown HPTI thing? How does a hardware issue/feature become a software vulnerability? Hasn't there always been an appropriate level of separation between kernel and user space?
There’s always been privilege-level separation between kernel and user space, but not address-space separation - the kernel has been permanently mapped into the high end of the user address space (though not visible to user-space code on systems with decent virtual-memory management hardware) since the days of OS/360, CP/M and VAX/VMS (RSX-11M was an exception, since it ran on a 16-bit CPU architecture and its designers wanted to support programs up to 64 KB in size).
Read more ...

Nowadays everything is about automation. Organizations are moving away from traditional static infrastructure toward full automation, and this is where NSX becomes significant. There are many use cases for NSX, but what they all have in common is that they need to be automated.
VMware is investing heavily in different tools to ease the automation of NSX, but in order to take full advantage of them one needs to understand what happens under the hood. This also matters if someone wants to build their own custom automation tool or CMP (Cloud Management Platform). Many existing solutions such as OpenStack, Kubernetes and vRO automate NSX-T using different plugins. In fact, those plugins send REST API calls to NSX Manager in order to automate CRUD (Create, Read, Update, Delete) operations on the logical topology.
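As an illustration only, here is a minimal sketch of such a call using Python's requests library. The manager address, credentials and transport zone ID are placeholders, and the payload follows the documented pattern for POST /api/v1/logical-switches but should be verified against the API guide for your NSX-T version:

import requests

NSX_MANAGER = "https://nsx-manager.example.com"  # placeholder address

# Create a logical switch -- the "C" in CRUD -- via the NSX Manager REST API.
payload = {
    "display_name": "demo-ls",
    "transport_zone_id": "<transport-zone-uuid>",  # placeholder
    "admin_state": "UP",
    "replication_mode": "MTEP",
}
resp = requests.post(
    NSX_MANAGER + "/api/v1/logical-switches",
    json=payload,
    auth=("admin", "password"),  # placeholder credentials
    verify=False,                # lab only; use real certificates in production
)
resp.raise_for_status()
print("created logical switch:", resp.json()["id"])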
Based on our experience we decided that the NSX-T APIs would be JSON-based and follow the OpenAPI standard. The point of using OpenAPI is to enable third-party developers to build applications and services around NSX-T by standardizing how the REST APIs are described. This means one can use standard tools like Swagger to read and exercise those APIs. Below is a quick example from my Mac on Continue reading
Elisa Jasinska covered several IPAMs in her overview of open-source network automation tools, and we had Jeremy Stretch talking about NetBox in the Building Network Automation Solutions online course, but if you’re looking for a really robust, easy-to-implement solution, check out this document from 1998 (complete with deployment experience, including a large-scale one).
The 3 main elements that drive Identity Awareness under the hood are Active Directory Query (ADQ), PDP and PEP. They all intertwine in some way to allow the different blades of the Checkpoint to track and restrict access based on AD user and machine name. I tested these features as part of a POC, and personally I would not consider them fit for purpose in a production environment. See the caveats at the end of the post for more details.
This post lists the directories that need to be emptied to delete all the logs on the Checkpoint managers.
The platform works with AWS, Azure, and Google, and plans to support Alibaba Cloud later this year.
Today we are excited to announce the expansion of our partnership with the availability of Docker Enterprise Edition (EE), our container management platform, on the Cisco Global Price List (GPL) and the release of the latest Cisco Validated Design (CVD):
Now customers can purchase Docker EE directly from Cisco and our joint resellers to jumpstart their new year’s resolution for a more modern application architecture, reduce IT costs, and redirect savings to innovation projects. And with our latest CVD for Cisco UCS compute infrastructure with Contiv, a secure container networking fabric, we’ve provided a roadmap on how to get started, so customers and partners can achieve a faster, more reliable and more predictable implementation of Docker EE.
For enterprises looking to use Docker’s container management platform but not sure where to start, we can help you take the first step. The Migrating Traditional Applications (MTA) Program, designed for IT operations teams, helps enterprises modernize existing legacy .NET Windows or Java Linux applications in just five days, without modifying source code or re-architecting the application, using Docker and Cisco Advanced Services. The results have been incredible, with customers saving over 50% on infrastructure costs and Continue reading
In the first part of this blog series, NSX-T: Routing where you need it (Part 1), I discussed how East-West (E-W) routing is completely distributed on NSX-T and how routing is done by the Distributed Router (DR) running as a kernel module in each hypervisor.
In this post, I will explain how North-South (N-S) routing is done in NSX-T, and we will also look at the ECMP topologies. This N-S routing is provided by the centralized component of the logical router, also known as the Service Router. Before we get into N-S routing or the packet walk, let’s define the Service Router.
Service Router (SR)
Whenever a service that cannot be distributed is enabled on a Logical Router, a Service Router (SR) is instantiated. Several services in NSX-T today are not distributed, such as the following (an example API call follows the list):
1) Connectivity to physical infrastructure
2) NAT
3) DHCP server
4) MetaData Proxy
5) Edge Firewall
6) Load Balancer
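As a hedged sketch only (placeholders throughout, with the same caveats as the API example earlier in this digest): creating a Tier-0 logical router and binding it to an edge cluster is the kind of call that causes an SR to be instantiated on the edge nodes:

import requests

NSX_MANAGER = "https://nsx-manager.example.com"  # placeholder address

# Binding a Tier-0 logical router to an edge cluster is what instantiates
# the centralized Service Router (SR) component.
payload = {
    "display_name": "demo-tier0",
    "router_type": "TIER0",
    "edge_cluster_id": "<edge-cluster-uuid>",   # placeholder
    "high_availability_mode": "ACTIVE_ACTIVE",  # ECMP-capable mode
}
resp = requests.post(
    NSX_MANAGER + "/api/v1/logical-routers",
    json=payload,
    auth=("admin", "password"),  # placeholder credentials
    verify=False,                # lab only
)
resp.raise_for_status()
print("created logical router:", resp.json()["id"])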
Let’s take a look at one of these services (connectivity to physical devices) and see why a centralized routing component makes sense for running it. Connectivity to the physical topology is intended to exchange routing information from the NSX domain to external networks (DC, Campus or Continue reading