AI and the Future of Network Security
IT teams are inundated by security alerts, but artificial intelligence can help manage threats.
On November 8, oVirt 4.2 introduced an important behind-the-scenes enhancement.
The change concerns the exchange of information between the engine and VDSM. It addresses the problem of multiple abstraction layers, each of which had to convert its input into a format readable by the next layer.
This change improves data communication between the engine and Libvirt - the tool that manages platform virtualization.
Previously, the configuration for a newly created virtual machine (VM) originated in the engine as a map or dictionary. Then, in VDSM, it was converted into an XML file readable by Libvirt. This conversion required extra coding effort, which in turn slowed development.
Now, this map or dictionary has been replaced by engine XML, an XML configuration file that complies with the Libvirt API. VDSM simply routes this Libvirt-readable file to and from Libvirt in VM lifecycle (virt) flows.
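The old flow can be illustrated with a small, hypothetical sketch: VDSM receiving an engine-style dict and converting it into minimal Libvirt-style domain XML. The field names and function below are illustrative only, not actual oVirt/VDSM code; under the new design, the engine emits this XML directly and the conversion step disappears.

```python
# Hypothetical sketch of the OLD flow: the engine sent a dict/map and
# VDSM converted it into Libvirt-readable domain XML. All names and
# fields here are illustrative, not real oVirt/VDSM code.
import xml.etree.ElementTree as ET

def dict_to_domain_xml(vm_conf):
    """Convert an engine-style VM dict into a minimal Libvirt domain XML string."""
    domain = ET.Element("domain", type="kvm")
    ET.SubElement(domain, "name").text = vm_conf["name"]
    ET.SubElement(domain, "memory", unit="MiB").text = str(vm_conf["memory_mb"])
    ET.SubElement(domain, "vcpu").text = str(vm_conf["vcpus"])
    return ET.tostring(domain, encoding="unicode")

print(dict_to_domain_xml({"name": "test-vm", "memory_mb": 2048, "vcpus": 2}))
```

With engine XML, a document of this shape now travels from ovirt-engine through VDSM untouched, so each layer no longer needs its own translation logic.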
For oVirt users, it’s business as usual.
However, if you are a developer debugging issues that involve running a VM, be aware that the Domain XML is now generated by ovirt-engine. Continue reading
It’s always great to see students enrolled in the Building Network Automation Solutions online course using ideas from my sample playbooks to implement a wonderful solution that solves a real-life problem.
James McCutcheon did exactly that: he took my LLDP-to-Graph playbook and used it to graph VLANs stretching across multiple switches (and provided a good description of his solution).
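The core idea behind an LLDP-to-graph playbook can be sketched in a few lines: collect neighbor tables, deduplicate the links, and emit Graphviz DOT text. The neighbor-table format below is an assumption for illustration; the actual playbooks gather this data with Ansible and render it with Graphviz.

```python
# Minimal sketch: turn LLDP neighbor data into a Graphviz DOT graph.
# The input format is illustrative, not the playbook's real data model.
def lldp_to_dot(neighbors):
    """neighbors: {switch: [(local_port, remote_switch, remote_port), ...]}"""
    seen = set()
    lines = ["graph lldp {"]
    for switch, links in neighbors.items():
        for local_port, peer, peer_port in links:
            edge = tuple(sorted((switch, peer)))
            if edge in seen:  # each physical link is reported by both ends; draw it once
                continue
            seen.add(edge)
            lines.append(f'  "{switch}" -- "{peer}" [label="{local_port}--{peer_port}"];')
    lines.append("}")
    return "\n".join(lines)

print(lldp_to_dot({"sw1": [("Gi0/1", "sw2", "Gi0/2")],
                   "sw2": [("Gi0/2", "sw1", "Gi0/1")]}))
```

Extending the same pattern to VLANs, as in James's solution, is mostly a matter of adding VLAN membership to the edge labels or grouping nodes into subgraphs.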
The Internet of Things (IoT) is a major buzzword around the Internet industry and the broader technology and innovation business arenas. We are often asked what the IETF is doing in relation to IoT and in this Rough Guide to IETF 100 post I’d like to highlight some of the relevant sessions scheduled during the upcoming IETF 100 meeting in Singapore. Check out the IETF Journal IoT Category, the Internet Society’s IoT page, or the Online Trust Alliance IoT page for more details about many of these topics.
The Thing-to-Thing Research Group (T2TRG) investigates open research issues in turning the IoT into reality. The research group will be holding a half-day joint meeting with the Open Connectivity Foundation (OCF) on the Friday before IETF, and they will also be meeting on Tuesday afternoon in Singapore to report out on their recent activities. Included on the agenda is the upcoming Workshop on Decentralized IoT Security and Standards (DISS). This workshop will be held in conjunction with the Network and Distributed System Security (NDSS) Symposium on 18 February 2018 in San Diego, CA, USA. The DISS workshop will gather researchers and the open standards community together to help address Continue reading
Larger operators are letting AT&T lead, for now.
It is going to be a busy week for chip maker Qualcomm as it formally jumps from smartphones to servers with its new “Amberwing” Centriq 2400 Arm server processor, during the same week in which it received an unsolicited $130 billion takeover offer from sometimes-rival chipmaker Broadcom.
The Centriq 2400 is the culmination of over four years of work and investment, which, according to the semiconductor industry experts we have talked to, easily took on the order of $100 million to $125 million to make happen – remember there was a prototype as well as the …
Qualcomm’s Amberwing Arm Server Chip Finally Takes Flight was written by Timothy Prickett Morgan at The Next Platform.
One of the arguments Intel officials and others have made against Arm’s push to get its silicon designs into the datacenter has been the burden it would mean for enterprises and organizations in the HPC field that would have to modify application codes to get their software to run on the Arm architecture.
For HPC organizations, that would mean moving the applications from the Intel-based and IBM systems that have dominated the space for years, a time-consuming and possibly costly process.
Arm officials over the years have acknowledged the challenge, but have noted their infrastructure’s embrace of open-source software and …
Arm Smooths the Path for Porting HPC Apps was written by Nicole Hemsoth at The Next Platform.
One of the nicer perks I have here at Cloudflare is access to the latest hardware, long before it even reaches the market.
Until recently I mostly played with Intel hardware. For example, Intel supplied us with an engineering sample of its Skylake-based Purley platform back in August 2016, to give us time to evaluate it and optimize our software. As a former Intel architect who did a lot of work on Skylake (as well as Sandy Bridge, Ivy Bridge, and Ice Lake), I really enjoy that.
Our previous generation of servers was based on the Intel Broadwell micro-architecture. Our configuration included dual-socket Xeon E5-2630 v4 CPUs with 10 cores each, running at 2.2GHz with a 3.1GHz turbo boost and hyper-threading enabled, for a total of 40 threads per server.
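The quoted thread count follows directly from that configuration (a trivial sanity check, assuming two hyper-threads per core):

```python
# Thread count for the Broadwell servers described above:
# 2 sockets x 10 cores x 2 hyper-threads per core.
sockets, cores_per_socket, threads_per_core = 2, 10, 2
total_threads = sockets * cores_per_socket * threads_per_core
print(total_threads)  # 40 threads per server
```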
Since Intel was, and still is, the undisputed leader of the server CPU market with greater than 98% market share, our upgrade process until now was pretty straightforward: every year Intel releases a new generation of CPUs, and every year we buy them. In the process we usually get two extra cores per socket, plus all the extra architectural features such an upgrade brings: hardware AES and CLMUL in Westmere, Continue reading
This moves Cumulus into the data center interconnect market.
Not all BT customers want a future with Cisco, but many do.
Past versions were buckling under growing demand.
We’re thrilled to announce that Facebook has partnered with Cumulus Networks to bring you the industry’s first open optical routing platform loaded with Cumulus Linux. That’s right, Cumulus Networks is branching into some exciting new territory (a new voyage… if you will). We couldn’t be more honored and excited to work closely with Facebook to bring scalability and cost-effective hardware and software to the optical space — an industry that is growing rapidly.
Bandwidth for Internet services is becoming a more tangible challenge every single day, but the current proprietary solutions are too expensive and do not scale. As Facebook explained, “the highest-performing ‘bandwidth and reach’ are still fiber-based technologies — in particular, switching, routing, and transport DWDM technologies.” With the popularity of bandwidth-hungry services like VR and video, there is now a critical need for backhaul infrastructure that is cost-effective and scalable and supports high-performing wireless connectivity. The issue becomes even more critical when considering varied geographic conditions. For instance, rural regions need long backhaul pipes, which are cost-prohibitive.
That’s where Voyager comes in. Voyager was designed to bring the Internet to everyone — from dense urban locations to remote Continue reading