Category Archives for "Networking"

Day Two Cloud 013: To Do Cloud Right, Leave Data Center Thinking Behind

Organizations don't have to be convinced to adopt the cloud these days. The conversation now is about how to do it right. Guest Dwayne Monroe joins the Day Two Cloud podcast to talk about how to change your thinking about cloud in terms of resource sizing, cost, staff training, service availability, app refactoring and much more.

The post Day Two Cloud 013: To Do Cloud Right, Leave Data Center Thinking Behind appeared first on Packet Pushers.

Cloudflare’s Karma, Managing MSPs, & Agile Security

Cloudflare came out strong, pointing the finger at Verizon for shoddy practices that put the Internet at risk. It didn’t take long for karma to come around: soon afterward, Cloudflare suffered an Internet-impacting outage caused by a mistake of its own. In this episode we talk about that outage, the risk of centralization on the Internet, managing MSPs when trouble strikes, and whether agile processes are forgoing security in favor of faster releases.

Darrel Clute (Guest)
Jed Casey (Guest)
Jordan Martin (Host)

Outro Music:
Danger Storm Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0 License
http://creativecommons.org/licenses/by/3.0/

The post Cloudflare’s Karma, Managing MSPs, & Agile Security appeared first on Network Collective.

The Titan supercomputer is being decommissioned: a costly, time-consuming project

A supercomputer deployed in 2012 is going into retirement after seven years of hard work, but the task of decommissioning it is not trivial. The Cray XK7 “Titan” supercomputer at the Department of Energy’s (DOE) Oak Ridge National Laboratory (ORNL) is scheduled to be decommissioned on August 1 and disassembled for recycling. At 27 petaflops, or 27 quadrillion calculations per second, Titan was the fastest supercomputer in the world at its debut in 2012 and remained in the top 10 worldwide until June 2019. But time marches on. This beast is positively ancient by computing standards: it uses 16-core AMD Opteron CPUs and Nvidia Kepler-generation GPUs, and you can buy a gaming PC with better hardware today.

A gentle introduction to Linux Kernel fuzzing

For some time I’ve wanted to play with coverage-guided fuzzing. Fuzzing is a powerful testing technique in which an automated program feeds semi-random inputs to the program under test, with the intention of finding inputs that trigger bugs. Fuzzing is especially useful for finding memory-corruption bugs in C or C++ programs.
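
The post goes on to use real coverage-guided tooling; purely to illustrate the core loop, here is a minimal sketch of a dumb mutational fuzzer in Python. The parse_record target (with a deliberately planted bug) and the mutate helper are invented for this example and have nothing to do with the kernel.

    import random

    def parse_record(data: bytes) -> None:
        """Hypothetical parser with a planted bug, standing in for a real target."""
        if len(data) >= 4 and data[0:2] == b"HD":
            length = data[2]
            # Bug: trusts the length byte instead of checking the buffer size.
            payload = data[4:4 + length]
            if len(payload) != length:
                raise IndexError("read past the end of the buffer")

    def mutate(seed: bytes) -> bytes:
        """Randomly flip bits in, or insert bytes into, a copy of the seed."""
        data = bytearray(seed)
        for _ in range(random.randint(1, 4)):
            if data and random.random() < 0.5:
                data[random.randrange(len(data))] ^= 1 << random.randrange(8)
            else:
                data.insert(random.randrange(len(data) + 1), random.randrange(256))
        return bytes(data)

    if __name__ == "__main__":
        seed = b"HD\x02\x00ok"          # a known-good input to mutate from
        for i in range(100_000):
            sample = mutate(seed)
            try:
                parse_record(sample)
            except IndexError as exc:
                print(f"bug triggered after {i} iterations on {sample!r}: {exc}")
                break

A coverage-guided fuzzer differs in one key way: it keeps the mutated inputs that reach new code paths and mutates those further, rather than always starting from the same seed.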

Image by Patrick Shannon CC BY 2.0

Normally it's recommended to pick a well known, but little explored, library that is heavy on parsing. Historically things like libjpeg, libpng and libyaml were perfect targets. Nowadays it's harder to find a good target - everything seems to have been fuzzed to death already. That's a good thing! I guess the software is getting better! Instead of choosing a userspace target I decided to have a go at the Linux Kernel netlink machinery.

Netlink is an internal Linux facility used by tools like "ss", "ip" and "netstat". It's used for low-level networking tasks: configuring network interfaces, IP addresses, routing tables and such. It's a good target: it's an obscure part of the kernel, and it's relatively easy to automatically craft valid messages. Most importantly, we can learn a lot about Linux internals in the process. Bugs in netlink aren't going… Continue reading
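
This isn't from the post, but for orientation, here is a rough sketch of the sort of well-formed message those tools exchange: a Python version (Linux-only, standard library) of the RTM_GETLINK dump request that `ip link` issues over a NETLINK_ROUTE socket. A fuzzer's job is essentially to generate endless slightly broken variations of messages like this one.

    import socket
    import struct

    # Constants from <linux/netlink.h> and <linux/rtnetlink.h>.
    RTM_GETLINK = 18
    NLM_F_REQUEST = 0x01
    NLM_F_DUMP = 0x300          # NLM_F_ROOT | NLM_F_MATCH

    # struct nlmsghdr (u32 len, u16 type, u16 flags, u32 seq, u32 pid)
    # followed by struct rtgenmsg (one address-family byte, padded to 4 bytes).
    payload = struct.pack("Bxxx", socket.AF_UNSPEC)
    header = struct.pack("=LHHLL", 16 + len(payload), RTM_GETLINK,
                         NLM_F_REQUEST | NLM_F_DUMP, 1, 0)

    with socket.socket(socket.AF_NETLINK, socket.SOCK_RAW,
                       socket.NETLINK_ROUTE) as sock:
        sock.bind((0, 0))                    # let the kernel assign a port id
        sock.send(header + payload)
        reply = sock.recv(65536)             # first chunk of the interface dump
        msg_len, msg_type = struct.unpack_from("=LH", reply)
        print(f"received {len(reply)} bytes; first message is {msg_len} bytes, type {msg_type}")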

Will rolling into IBM be the end of Red Hat?

IBM's acquisition of Red Hat for $34 billion is now a done deal, and statements from the leadership of both companies sound extremely promising. But some in the Linux community have expressed concern. Questions being asked by Linux professionals and devotees include: Will Red Hat lose customer confidence now that it’s part of IBM and not an independent company? Will IBM continue putting funds into open source after paying such a huge price for Red Hat? Will it curtail what Red Hat is able to invest? Both companies’ leaders are saying all the right things now, but can they predict how their business partners and customers will react as they move forward? Will their good intentions be derailed? Part of the worry simply comes from the size of this deal. Thirty-four billion dollars is a lot of money, and this is probably the largest cloud computing acquisition to date. What kind of strain will that price tag put on how the new IBM functions going forward? Other worries come from the character of the acquisition – whether Red Hat will be able to continue operating independently and what will change if it cannot. In addition, a few Linux devotees hark… Continue reading

Wireless alliance: You might want to move some access points for Wi-Fi 6

Businesses could find themselves repositioning wireless access points and even facing increased bandwidth demands as Wi-Fi 6 hits the market in the coming months, according to a white paper released today by the Wireless Broadband Alliance. Nevertheless, the news is mostly good for prospective business users. Thanks to Wi-Fi 6’s array of coexistence, power-saving and smart management features, a new network based on the technology shouldn’t pose many deployment problems. Key to the enterprise WLAN use case, the white paper says, is deployment planning: Wi-Fi 6 can offer different optimal placement options than previous-generation Wi-Fi, so it could behoove upgraders to consider changing AP locations instead of just swapping out existing devices in the same locations.

Heavy Networking 458: SDN Federation – One Controller To Rule Them All?

You might have any number of software controllers in your infrastructure: one for wireless, one for SD-WAN, one in the data center, one for security, and so on. Would it be useful to federate these controllers? Can we expect the industry to produce a controller of controllers? Is this even a good idea? Today's Heavy Networking podcast ponders these questions with guest Rob Sherwood.
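
The episode stays at the concept level, but one way to picture a "controller of controllers" is a thin federation layer that accepts a single high-level intent and fans it out to adapters for each domain controller. The sketch below is hypothetical Python; the class and method names are invented for illustration and don't correspond to any shipping product or API.

    from dataclasses import dataclass, field
    from typing import Protocol

    class DomainController(Protocol):
        """Interface each per-domain adapter (wireless, SD-WAN, data center, security) exposes."""
        name: str

        def healthy(self) -> bool: ...
        def apply_policy(self, policy: dict) -> None: ...

    @dataclass
    class FederationController:
        """Hypothetical 'controller of controllers' that fans one intent out to many domains."""
        domains: list = field(default_factory=list)

        def push_intent(self, intent: dict) -> dict:
            results = {}
            for ctrl in self.domains:
                if not ctrl.healthy():
                    results[ctrl.name] = "skipped: controller unreachable"
                    continue
                # Each adapter translates the generic intent into its own controller's API calls.
                ctrl.apply_policy(intent)
                results[ctrl.name] = "applied"
            return results

The hard part, of course, is the adapters: each one has to map a generic intent onto a very different vendor API, which is where federation efforts tend to get complicated.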

The post Heavy Networking 458: SDN Federation – One Controller To Rule Them All? appeared first on Packet Pushers.

IBM closes $34B Red Hat deal, vaults into multi-cloud

IBM has finalized its $34 billion purchase of Red Hat and says it will use the Linux powerhouse's open-source know-how to enable larger-scale customer projects and to create a web of partnerships to simplify carrying them out. "A lot of our mutual clients are interested in doing a lot more," says Arvind Krishna, Senior Vice President, IBM Cloud & Cognitive Software, in a blog post. "Many see this as an opportunity for us to create large industry ecosystems with other providers who are optimized on this common infrastructure. ... If Red Hat were to do this on their own, there would be a limit to how much they can scale. Together, we can put a lot more resources into optimizing other partners."