Juniper, Cox Pump $216M Into StackPath Coffers
In addition to leading the Series B, Juniper and Cox are also StackPath customers, although they use...
"Networking is the most neglected integrated thing that exists in IT right now," says SoftIron CEO...
Today's Day Two Cloud delves into how and why to build a private cloud that functions as well as a public cloud. We examine the design and operational challenges of assembling and running cloud infrastructure on premises. Our guest is Bryan Sullins, Senior Systems Engineer for a large retailer.
The post Day Two Cloud 040: Building And Operating A Private Cloud appeared first on Packet Pushers.
In the end, what enterprises really want is a way to run any application on any cloud at any time. …
Packet Puts An Edge on Equinix was written by Jeffrey Burt at The Next Platform.
Cisco leads the industry when it comes to respected and valued IT infrastructure certification paths, and last month the company made some significant changes to the way it does certifications. In today’s episode we discuss some of these changes and what the implications are for those of us pursuing new Cisco certifications or maintaining the certifications we already hold.
Outro Music:
Danger Storm Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0 License
http://creativecommons.org/licenses/by/3.0/
The post A New Path For Certifications appeared first on Network Collective.
Ceph, the open source object storage born from a doctoral dissertation in 2005, has been aimed principally at highly scalable workloads found in HPC environments and, later, at hyperscalers who did not want to create their own storage anymore. …
Ceph Gets Fit And Finish For Enterprise Storage was written by Jeffrey Burt at The Next Platform.
Back when Cloudflare was created, over 10 years ago now, the dominant HTTP server used to power websites was Apache httpd. However, we decided to build our infrastructure using the then relatively new NGINX server.
There are many differences between the two, but crucially for us, the event loop architecture of NGINX was the key differentiator. In a nutshell, event loops work around the need to have one thread or process per connection by coalescing many of them in a single process. This reduces the need for expensive context switching by the operating system and also keeps memory usage predictable. Each connection is processed until it wants to do some I/O; at that point it is queued until the I/O task is complete. During that time the event loop is available to process other in-flight connections, accept new clients, and the like. The loop uses a multiplexing system call like epoll (or kqueue) to be notified whenever an I/O task is complete among all the running connections.
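To make the pattern concrete, here is a minimal, self-contained sketch of an event loop in Python, using the standard library's selectors module (which wraps epoll or kqueue). It is not NGINX's code, just the same idea in miniature: one process, many connections, each handled only when the kernel reports that its socket is ready.

import selectors
import socket

# Minimal sketch of the event-loop pattern (not NGINX's implementation):
# one process juggles many connections instead of one thread per client.

sel = selectors.DefaultSelector()  # uses epoll on Linux, kqueue on BSD/macOS

def accept(server_sock):
    conn, _addr = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    data = conn.recv(4096)      # socket reported readable, so this won't block
    if data:
        conn.sendall(data)      # echo back; a real server would queue writes
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 8080))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:                          # the event loop itself
    for key, _mask in sel.select():  # one multiplexing call covers all sockets
        key.data(key.fileobj)        # dispatch to the registered callback

While a connection waits on I/O, the loop is free to service every other registered socket, which is the property the paragraph above describes.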
In this article we will see that, despite their advantages, event loop models also have their limits, and falling back to good old threaded architecture is sometimes …
An empirical guide to the behavior and use of scalable persistent memory, Yang et al., FAST’20
We’ve looked at multiple papers exploring non-volatile main memory and its implications (e.g. most recently ‘Efficient lock-free durable sets‘). One thing they all had in common is an evaluation using some kind of simulation of the expected behaviour of NVDIMMs, because the real thing wasn’t yet available. But now it is! This paper examines the real-world behaviour of Intel’s Optane DIMM, and finds that not all of the assumptions baked into prior works hold. Based on these findings, the authors present four guidelines to get the best performance out of this memory today. Absolutely fascinating if you like this kind of thing!
The data we have collected demonstrate that many of the assumptions that researchers have made about how NVDIMMs would behave and perform are incorrect. The widely expressed expectation was that NVDIMMs would have behavior that was broadly similar to DRAM-based DIMMs but with lower performance (i.e., higher latency and lower bandwidth)… We have found the actual behavior of Optane DIMMs to be more complicated and nuanced than the "slower, persistent DRAM" label would suggest.
At some point, Moore’s Law increases in performance are going to hit a wall when it comes to datacenter networks. …
Crunching Photons And Electrons Down Into Datacenter Switch ASICs was written by Timothy Prickett Morgan at The Next Platform.
A few days ago I wrote an article on configuring kustomize transformers for use with Cluster API (CAPI), in which I explored how users could configure the kustomize transformers—the parts of kustomize that actually modify objects—to be a bit more CAPI-aware. By doing so, using kustomize with CAPI manifests becomes much easier. Since that post, the CAPI team released v1alpha3. In working with v1alpha3, I realized my kustomize transformer configurations were incorrect. In this post, I will share CAPI v1alpha3 configurations for kustomize transformers.
In the previous post, I referenced changes to both namereference.yaml (to configure the nameReference transformer) and commonlabels.yaml (to configure the commonLabels transformer). CAPI v1alpha3 has changed the default way labels are used with MachineDeployments, so for v1alpha3 you may be able to get away with only changes to namereference.yaml. (If you know you are going to want/need additional labels on your MachineDeployment, then plan on changes to commonlabels.yaml as well.)

Here are the CAPI v1alpha3 changes needed to namereference.yaml:
- kind: Cluster
  group: cluster.x-k8s.io
  version: v1alpha3
  fieldSpecs:
  - path: spec/clusterName
    kind: MachineDeployment
  - path: spec/template/spec/clusterName
    kind: MachineDeployment
- kind: AWSCluster
  group: infrastructure.cluster.x-k8s.io
…
The company's headquarters in Mountain View, California, are under a "shelter in place" ordinance...
2020 is predicted to be an exciting year with more organizations adopting Kubernetes than ever before. As critical workloads with sensitive data migrate to the cloud, we can expect to encounter various Advanced Persistent Threats (APT) targeting that environment.
DGA (Domain Generation Algorithm) is a technique that fuels malware attacks. DGA by itself can’t harm you. But it’s a proven technique that enables modern malware to evade security products and countermeasures. Attackers use DGA so they can quickly switch the command-and-control (also called C2 or C&C) servers that they’re using for malware attacks. Security software vendors act quickly to block and take down malicious domains hard-coded in malware, so attackers use DGA specifically to counter these actions. DGA has now become one of the top phone-home mechanisms for malware authors to reach C2 servers. This poses a significant threat to cloud security.
MITRE defines DGA as “The use of algorithms in malware to periodically generate a large number of domain names which function as rendezvous points for malware command and control servers”. Let’s examine this definition more closely. At its core, DGA generates domains by concatenating pseudo-random strings and a TLD (e.g. .com, …
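As a toy illustration of that idea (not any real malware family's algorithm), the sketch below derives a handful of pseudo-random domain names from the current date and appends a TLD. Anyone who knows the algorithm, the attacker included, can compute the same list in advance and register just one of the candidates as a live C2 rendezvous point.

import hashlib
from datetime import date

# Toy DGA sketch -- NOT a real malware family's algorithm. A date-based seed
# is hashed into a pseudo-random label and a TLD is appended, so infected
# hosts and the attacker independently arrive at the same candidate domains.

def generate_domains(day: date, count: int = 5, tld: str = ".com"):
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        label = hashlib.sha256(seed).hexdigest()[:12]  # 12-char pseudo-random label
        domains.append(label + tld)
    return domains

# Defenders would have to block every candidate for every day; the attacker
# only needs one of them resolving to a live C2 server.
for d in generate_domains(date.today()):
    print(d)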
Remote worker influx stifled services; Cisco, Hitachi sliced jobs; and Red Hat, Intel bridged the...
Inside Red Hat Ansible Automation Platform, the Ansible Tower REST API is the key mechanism for integrating automation into the processes and tools that already exist in an environment. With Ansible Tower 3.6 we have brought direct integration with webhooks from GitHub and GitLab, including their enterprise on-premises versions. This means that changes in source control can trigger automation to apply changes to infrastructure configuration, deploy new services, reconfigure existing applications, and more. In this blog, I’ll run through a simple scenario and apply the new integrated webhook feature.
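For context, driving Tower from an external tool over the REST API looks roughly like the sketch below; the URL, token, and job template ID are placeholders, not values from this post. The integrated webhook support is what removes the need to write this kind of glue yourself.

import requests

# Hypothetical sketch: an external system (e.g. a hand-rolled webhook receiver)
# launching a Tower job template via the standard /launch/ endpoint.
# TOWER_URL, TOKEN, and JOB_TEMPLATE_ID are placeholders.

TOWER_URL = "https://tower.example.com"
TOKEN = "xxxxxxxx"        # an OAuth2 token created in Ansible Tower
JOB_TEMPLATE_ID = 42      # the job template that runs the playbook

resp = requests.post(
    f"{TOWER_URL}/api/v2/job_templates/{JOB_TEMPLATE_ID}/launch/",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
print("Launched job:", resp.json().get("job"))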
My environment consists of Ansible Tower (one component of Red Hat Ansible Automation Platform), GitLab CE with a project already created, and a code server running an IDE with the same git repository cloned. A single inventory exists on Ansible Tower with just one host, an instance of Windows Server 2019 running on a certified cloud. For this example, I’m going to deploy IIS on top of this Windows server and make some modifications to the HTML file that I’d like to serve from this site.
My playbook to deploy IIS is very simple:
---
- name: Configure IIS
  hosts: windows
  tasks:
    - name: Install …
When we launched Cumulus in the Cloud (CitC) over two years ago, we saw it as a way for our customer base to test out Cumulus Linux in a safe sandboxed environment. Looking back, September 2017 feels like an eternity ago.
Since then, CitC has become a place where we’ve been able to roll out new functionality and solutions to customers and the Cumulus-curious alike — and we’ve done some really interesting things (some of our favs include integrating it with an OpenStack demo and a Mesos demo). It’s pretty much become a Cumulus technology playground.
As our CitC offering has evolved, we’ve also taken stock of the requirements from our customers and realized the direction we want to take CitC. So where is it heading? We’re excited to share that with the launch of our production-ready automation solution last week, CitC will have a new user experience and user interface.
Out with the old:
In with the new:
This redesigned UI comes with some really great enhancements:
The trial marks the successful use of a transceiver whose headline data rate can be achieved over...
The job cuts include nearly 400 Cisco employees and 151 Hitachi Vantara employees.