Day Two Cloud 013: To Do Cloud Right, Leave Data Center Thinking Behind

Organizations don't have to be convinced to adopt the cloud these days. The conversation now is about how to do it right. Guest Dwayne Monroe joins the Day Two Cloud podcast to talk about how to change your thinking about cloud in terms of resource sizing, cost, staff training, service availability, app refactoring and much more.

The post Day Two Cloud 013: To Do Cloud Right, Leave Data Center Thinking Behind appeared first on Packet Pushers.

Cloudflare’s Karma, Managing MSPs, & Agile Security

Cloudflare came out strong, pointing the finger at Verizon for shoddy practices that put the Internet at risk. It didn’t take long for karma to come around: Cloudflare soon had its own Internet-impacting outage caused by a mistake of its own. In this episode we talk about that outage, the risks of centralization on the Internet, managing MSPs when trouble strikes, and whether agile processes are forgoing security in favor of faster releases.


Darrel Clute (Guest)
Jed Casey (Guest)
Jordan Martin (Host)

Outro Music:
Danger Storm Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0 License
http://creativecommons.org/licenses/by/3.0/

The post Cloudflare’s Karma, Managing MSPs, & Agile Security appeared first on Network Collective.

The Titan supercomputer is being decommissioned: a costly, time-consuming project

A supercomputer deployed in 2012 is going into retirement after seven years of hard work, but the task of decommissioning it is not trivial. The Cray XK7 “Titan” supercomputer at the Department of Energy’s (DOE) Oak Ridge National Laboratory (ORNL) is scheduled to be decommissioned on August 1 and disassembled for recycling.

At 27 petaflops, or 27 quadrillion calculations per second, Titan was the fastest supercomputer in the world at its debut in 2012 and remained in the top 10 worldwide until June 2019. But time marches on, and this beast is positively ancient by computing standards. It uses 16-core AMD Opteron CPUs and Nvidia Kepler-generation processors; you can buy a gaming PC with better hardware today.

A gentle introduction to Linux Kernel fuzzing

For some time I’ve wanted to play with coverage-guided fuzzing. Fuzzing is a powerful testing technique in which an automated program feeds semi-random inputs to the program under test. The intention is to find inputs that trigger bugs. Fuzzing is especially useful for finding memory corruption bugs in C or C++ programs.
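To make the basic idea concrete before the netlink discussion below, here is a minimal Python sketch of a mutation-based fuzzing loop. Everything in it is hypothetical: the toy parser, its bug, and the seed input are mine, and the loop is deliberately dumb rather than coverage-guided like the tooling the post is about.

```python
import random

def parse_header(data: bytes) -> None:
    """Hypothetical target: a toy parser with a checksum bug to be found."""
    if len(data) >= 4 and data[:2] == b"\x7fE":
        length = data[2]
        checksum = sum(data[4:4 + length]) % 256
        # Bug: blows up whenever the checksum byte disagrees with the payload.
        assert checksum == data[3], "checksum mismatch"

def mutate(seed: bytes) -> bytes:
    """Semi-random input generation: flip a few bytes of a known-good seed."""
    buf = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(seed: bytes, iterations: int = 100_000) -> None:
    """Feed mutated inputs to the target and report the first crash."""
    for i in range(iterations):
        data = mutate(seed)
        try:
            parse_header(data)
        except Exception as exc:  # a real fuzzer also catches signals, sanitizer reports, etc.
            print(f"iteration {i}: {data!r} triggered {exc!r}")
            return
    print("no crash found")

if __name__ == "__main__":
    fuzz(b"\x7fE\x07\xeapayload")  # valid seed: magic, length, checksum, payload
```

A coverage-guided fuzzer replaces the blind mutate-and-pray loop with feedback: inputs that reach new code paths are kept as new seeds, which is what makes the technique effective against large targets like the kernel.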

Image by Patrick Shannon CC BY 2.0

Normally it's recommended to pick a well known, but little explored, library that is heavy on parsing. Historically things like libjpeg, libpng and libyaml were perfect targets. Nowadays it's harder to find a good target - everything seems to have been fuzzed to death already. That's a good thing! I guess the software is getting better! Instead of choosing a userspace target I decided to have a go at the Linux Kernel netlink machinery.

Netlink is an internal Linux facility used by tools like "ss", "ip" and "netstat". It's used for low-level networking tasks: configuring network interfaces, IP addresses, routing tables and such. It's a good target: it's an obscure part of the kernel, and it's relatively easy to automatically craft valid messages. Most importantly, we can learn a lot about Linux internals in the process. Bugs in netlink aren't going…
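For readers who have never spoken netlink directly, here is a minimal sketch (mine, not the post's code) of the request/response flow in Python on Linux. It hand-packs an RTM_GETLINK dump request over NETLINK_ROUTE, which is essentially what "ip link" does, and reads back part of the kernel's reply; the constants are the standard uapi values.

```python
import socket
import struct

NETLINK_ROUTE = 0          # routing/link/address family of netlink
RTM_GETLINK = 18           # "give me network interface info"
NLM_F_REQUEST = 0x01
NLM_F_DUMP = 0x300         # NLM_F_ROOT | NLM_F_MATCH

sock = socket.socket(socket.AF_NETLINK, socket.SOCK_RAW, NETLINK_ROUTE)
sock.bind((0, 0))          # port id 0: let the kernel assign one

# struct nlmsghdr (16 bytes): len, type, flags, seq, port id
header = struct.pack("=IHHII", 32, RTM_GETLINK,
                     NLM_F_REQUEST | NLM_F_DUMP, 1, 0)
# struct ifinfomsg (16 bytes): family, pad, type, index, flags, change
payload = struct.pack("=BBHiII", socket.AF_UNSPEC, 0, 0, 0, 0, 0)

sock.sendto(header + payload, (0, 0))   # destination port 0 = the kernel

# A real consumer would loop until it sees NLMSG_DONE; one recv is enough
# to show that the kernel answers with RTM_NEWLINK messages.
reply = sock.recv(65535)
print(f"received {len(reply)} bytes of netlink response")
sock.close()
```

Because messages are just packed structs with well-known headers like this, a fuzzer can mutate the fields while keeping the framing valid, which is exactly what makes netlink an attractive target.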

Will rolling into IBM be the end of Red Hat?

IBM's acquisition of Red Hat for $34 billion is now a done deal, and statements from the leadership of both companies sound extremely promising. But some in the Linux community have expressed concern.

Questions being asked by Linux professionals and devotees include: Will Red Hat lose customer confidence now that it’s part of IBM and not an independent company? Will IBM continue putting funds into open source after paying such a huge price for Red Hat? Will it curtail what Red Hat is able to invest? Both companies’ leaders are saying all the right things now, but can they predict how their business partners and customers will react as they move forward? Will their good intentions be derailed?

Part of the worry simply comes from the size of this deal. Thirty-four billion dollars is a lot of money, and this is probably the largest cloud computing acquisition to date. What kind of strain will that price tag put on how the new IBM functions going forward? Other worries come from the character of the acquisition – whether Red Hat will be able to continue operating independently and what will change if it cannot. In addition, a few Linux devotees hark…

Wireless alliance: You might want to move some access points for Wi-Fi 6

Businesses could find themselves repositioning wireless access points and even facing increased bandwidth demands as Wi-Fi 6 hits the market in the coming months, according to a white paper released today by the Wireless Broadband Alliance.

Nevertheless, the news is mostly good for prospective business users. Thanks to Wi-Fi 6’s array of coexistence, power-saving and smart management features, a new network based on the technology shouldn’t pose many deployment problems.

Key to the enterprise WLAN use case, the white paper says, is deployment planning: Wi-Fi 6 can offer different optimal placement options than previous-generation Wi-Fi, so it could behoove upgraders to consider changing AP locations instead of just swapping out existing devices in the same spots.

Three key checklists and remedies for trustworthy analysis of online controlled experiments at scale

Three key checklists and remedies for trustworthy analysis of online controlled experiments at scale, Fabijan et al., ICSE 2019

Last time out we looked at machine learning at Microsoft, where we learned among other things that using an online controlled experiment (OCE) approach to rolling out changes to ML-centric software is important. Prior to that we learned in ‘Automating chaos experiments in production’ of the similarities between running a chaos experiment and many other online controlled experiments. And going further back on The Morning Paper we looked at a model for evolving online controlled experiment capabilities within an organisation. Today’s paper choice builds on that by distilling wisdom collected from Microsoft, Airbnb, Snap, Skyscanner, Outreach.io, Intuit, Netflix, and Booking.com into a series of checklists that you can use as a basis for your own processes.

Online Controlled Experiments (OCEs) are becoming a standard operating procedure in data-driven software companies. When executed and analyzed correctly, OCEs deliver many benefits…

The challenge with OCEs though, as we’ve seen before, is that they’re really tricky to get right. When the output of those experiments is guiding product direction, that can be a problem.
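As one concrete flavor of the kind of item such checklists contain (this snippet is my illustration, not code from the paper), a widely used trustworthiness check is the sample ratio mismatch test: before reading anything into metric movements, verify that the observed control/treatment split matches the intended one. A minimal Python sketch with a hand-rolled chi-squared test:

```python
def srm_check(control_users: int, treatment_users: int,
              expected_ratio: float = 0.5) -> bool:
    """Return True if the observed split is suspicious (likely a sample ratio mismatch)."""
    total = control_users + treatment_users
    expected_control = total * expected_ratio
    expected_treatment = total * (1 - expected_ratio)

    # Pearson chi-squared statistic with 1 degree of freedom.
    chi2 = ((control_users - expected_control) ** 2 / expected_control
            + (treatment_users - expected_treatment) ** 2 / expected_treatment)

    # 10.83 is the chi-squared critical value for p = 0.001, df = 1; a strict
    # threshold is typical because an SRM alert almost always means the
    # experiment itself is broken, not that the traffic split got unlucky.
    return chi2 > 10.83

if __name__ == "__main__":
    # Illustrative counts: the split looks close to 50/50 at a glance,
    # but the check flags it, so the experiment's results should not be trusted as-is.
    print(srm_check(821_588, 815_482))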

Despite their great power…

Heavy Networking 458: SDN Federation – One Controller To Rule Them All?

You might have any number of software controllers in your infrastructure: one for wireless, one for SD-WAN, one in the data center, one for security, and so on. Would it be useful to federate these controllers? Can we expect the industry to produce a controller of controllers? Is this even a good idea? Today's Heavy Networking podcast ponders these questions with guest Rob Sherwood.

The post Heavy Networking 458: SDN Federation – One Controller To Rule Them All? appeared first on Packet Pushers.

IBM closes $34B Red Hat deal, vaults into multi-cloud

IBM has finalized its $34 billion purchase of Red Hat and says it will use the Linux powerhouse's open-source know-how to enable larger-scale customer projects and to create a web of partnerships to simplify carrying them out.

"A lot of our mutual clients are interested in doing a lot more," says Arvind Krishna, Senior Vice President of IBM Cloud & Cognitive Software, in a blog post. "Many see this as an opportunity for us to create large industry ecosystems with other providers who are optimized on this common infrastructure. ...If Red Hat were to do this on their own, there would be a limit to how much they can scale. Together, we can put a lot more resources into optimizing other partners."

Linux a key player in the edge computing revolution

In the past few years, edge computing has been revolutionizing how some very familiar services are provided to individuals like you and me, as well as how services are managed within major industries. Try to get your arms around what edge computing is today, and you might just discover that your arms aren’t nearly as long or as flexible as you’d imagined. And Linux is playing a major role in this ever-expanding edge.

One reason edge computing defies easy definition is that it takes many different forms. As Jaromir Coufal, principal product manager at Red Hat, recently pointed out to me, there is no single edge. Instead, there are lots of edges, depending on what compute features are needed. He suggests that we can think of the edge as a continuum of capabilities, with the problem being solved determining where along that continuum any particular edge solution will rest.

Cisco goes deeper into photonic, optical technology with $2.6B Acacia buy

Looking to bulk up its optical systems portfolio, Cisco says it intends to buy Acacia Communications for approximately $2.6 billion. The deal is Cisco’s largest since it laid out $3.7 billion for AppDynamics in 2017.

Acacia develops, manufactures and sells high-speed coherent optical interconnect products designed to transform the networks linking data centers, cloud and service providers. Cisco is familiar with Acacia, as it has been a “significant” customer of the optical firm for about five years, Cisco said.