“This is one of the first transactions driven by climate change,” said industry analyst Roger...
Very few organizations use IT equipment supplied by a single vendor. Where heterogeneous IT environments exist, interoperability is key to achieving maximum value from existing investments. Open networking is the most cost-effective way to ensure interoperability between devices on a network.
Unless your organization was formed very recently, chances are that your organization’s IT has evolved over time. Even small hardware upgrades are disruptive to an organization’s operations, making network-wide “lift and shift” upgrades nearly unheard of.
While loyalty to a single vendor can persist through organic growth and upgrade cycles, organizations regularly undergo mergers and acquisitions (M&As). M&As almost always introduce some level of heterogeneity into a network, meaning that any organization of modest size is almost guaranteed to have to integrate IT from multiple vendors.
While every new type of device from every different vendor imposes operational management overhead, the impact of heterogeneous IT isn’t universal across device types. The level of automation within an organization for different device classes and the ubiquity and ease of use of management abstraction layers both play a role in determining the impact of heterogeneity.
Consider, for a moment, the average x86 server. Each Continue reading
We’re continuing our celebration of Women in Tech Week with another profile of one of the many amazing women who make a tremendous impact at Docker – this week, and every week – helping developers build modern apps.
What is your job?
Senior Director of Product Marketing.
How long have you worked at Docker?
2 ½ years.
Is your current role one that you always intended on your career path?
Nope! I studied engineering and started in a technical role at a semiconductor company. I realized there that I really enjoyed helping others understand how technology works, and that led me to Product Marketing! What I love about the role is that it’s extremely cross-functional. You work closely with engineering, product management, sales and marketing, and it requires both left brain and right brain skills. My technical background helps me to understand our products, while my creative side helps me communicate our products’ core value propositions.
What is your advice for someone entering the field?
It’s always good to be self-aware. Know your strengths and weaknesses, and look Continue reading
In this blog, explore how the Secure Access Service Edge (SASE) converges enterprise security and...
For more than two decades, DataDirect Networks has focused on HPC storage, supplying large systems to enterprises and research institutions wrestling with complex, data-laden workloads and taking on such challenges as acquiring the Lustre File System business from Intel in 2018. …
DDN Uses Acquisitions to Grow In The Enterprise was written by Jeffrey Burt at The Next Platform.
This is a guest post by Dimitris Koutsourelis and Alexis Dimitriadis, working for the Security Team at Workable, a company that makes software to help companies find and hire great people.
This post is about our introductory journey into the infrastructure-as-code practice: managing Cloudflare configuration in a declarative and version-controlled way. We’d like to share the experience we’ve gained during this process: our pain points, the limitations we faced, and the different approaches we took, and to provide parts of our solution and experiments.
Terraform is a great tool that fulfills our requirements, and fortunately, Cloudflare maintains its own provider that allows us to manage its service configuration hassle-free.
On top of that, Terragrunt is a thin wrapper that provides extra commands and functionality for keeping Terraform configurations DRY and managing remote state.
The combination of the two leads to a more modular and reusable structure for Cloudflare resources (configuration) by utilizing Terraform and Terragrunt modules.
We’ve chosen to use the latest versions of both tools (Terraform v0.12 and Terragrunt v0.19, respectively) and to upgrade regularly to take advantage of valuable new features and functionality which, at this point in time, remove important limitations.
Our setup includes Continue reading
docker run --name influxdb -p 9999:9999 quay.io/influxdb/influxdb:2.0.0-alpha
Prometheus exporter describes an application that runs on the sFlow-RT analytics platform and converts real-time streaming telemetry from industry-standard sFlow agents. Host, Docker, Swarm and Kubernetes monitoring describes how to deploy agents on popular container orchestration platforms.
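As a rough, hedged illustration of how Prometheus-format metrics like these can be consumed, the sketch below polls an exporter endpoint and prints the samples it returns. The URL is a placeholder (not taken from the post) and assumes the exporter is already running, so adjust it for your own deployment.

# Minimal sketch: poll a Prometheus-format metrics endpoint and print its samples.
# METRICS_URL is a placeholder; point it at your sFlow-RT Prometheus exporter.
import requests

METRICS_URL = "http://localhost:8008/prometheus/metrics"  # placeholder, adjust for your setup

def fetch_metrics(url=METRICS_URL):
    """Return (metric, value) pairs from a Prometheus text exposition."""
    resp = requests.get(url, timeout=5)
    resp.raise_for_status()
    samples = []
    for line in resp.text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):   # skip blank lines and HELP/TYPE comments
            continue
        parts = line.split()                   # assumes no spaces inside label values
        samples.append((parts[0], float(parts[1])))
    return samples

if __name__ == "__main__":
    for metric, value in fetch_metrics():
        print(metric, value)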
The 5th Pakistan School on Internet Governance (pkSIG 2019) was successfully held last month in Quetta, Pakistan. This represents a significant achievement for the Internet Society Pakistan Islamabad Chapter as it played an instrumental role in bringing the first-ever Internet Governance event to the provincial capital of Balochistan.
For those who may not know, Balochistan has the largest land area among the four provinces of Pakistan, yet it is the least populated and least developed. Only 27% of its population lives in urban areas and Internet penetration is low. Finding adequate sponsors and, more importantly, ensuring diversity among the participating students were critical concerns. But pkSIG 2019 in Quetta proved to be one of the best editions of this school.
Over 60 people (one-third of them female) registered for the event, including students, professionals, startup founders, speakers, and some guests who showed keen interest in the program. Following a four-week-long process of registration and shortlisting, 35 students were selected for pkSIG 2019 and five were awarded fellowships. Since all the sessions were livestreamed, a sizeable audience participated online as well. (The sessions and presentations are available online.)
“It’s our fifth consecutive year conducting pkSIG – Continue reading
A while ago I had an interesting discussion with someone running VMware NSX on top of a VXLAN+EVPN fabric - a pretty common scenario considering:
His fabric was running well… apart from the weird times when someone started tons of new VMs.
Read more ...
Applying deep learning to Airbnb search, Haldar et al., KDD’19
Last time out we looked at Booking.com’s lessons learned from introducing machine learning to their product stack. Today’s paper takes a look at what happened in Airbnb when they moved from standard machine learning approaches to deep learning. It’s written in a very approachable style and packed with great insights. I hope you enjoy it as much as I did!
Ours is a story of the elements we found useful in applying neural networks to a real life product. Deep learning was steep learning for us. To other teams embarking on similar journeys, we hope an account of our struggles and triumphs will provide some useful pointers.
The core application of machine learning discussed in this paper is the model which orders available listings according to a guest’s likelihood of booking. This is one of a whole ecosystem of models which contribute towards search rankings when a user searches on Airbnb. New models are tested online through an A/B testing framework to compare their performance to previous generations.
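To make the ranking task concrete, here is a toy sketch (with invented feature names and random weights standing in for a trained model, and in no way the paper’s actual architecture): a small feedforward network assigns each candidate listing a pseudo booking probability, and the listings are sorted by that score.

# Toy illustration only: rank candidate listings by the output of a tiny
# feedforward network. Features, dimensions and weights are invented here.
import numpy as np

rng = np.random.default_rng(0)

# Each row is one candidate listing: [price, review_count, distance_km, instant_book]
listings = np.array([
    [120.0, 310.0, 2.5, 1.0],
    [ 85.0,  48.0, 7.1, 0.0],
    [200.0, 890.0, 1.2, 1.0],
])

W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # random weights stand in for a trained model
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def booking_score(x):
    """One ReLU hidden layer, sigmoid output used as a pseudo booking probability."""
    h = np.maximum(0.0, x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

scores = booking_score(listings).ravel()
order = np.argsort(-scores)                      # highest predicted probability first
print("ranking:", order.tolist(), "scores:", np.round(scores[order], 3).tolist())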
The very first version of search ranking at Airbnb was a hand-written Continue reading
Red Hat Ansible Tower offers value by allowing automation to scale in a controlled manner: users can run playbooks only for the processes and targets they need access to, and no further.
Not only does Ansible Tower provide automation at scale, but it also integrates with several external platforms. In many cases, this means that users can use the interface they are accustomed to while launching Ansible Tower templates in the background.
One of the most ubiquitous self-service platforms in use today is ServiceNow, and many enterprise conversations with Ansible Tower customers focus on ServiceNow integration. With this in mind, this blog entry walks through the steps to set up your ServiceNow instance to make outbound RESTful API calls into Ansible Tower using OAuth2 authentication.
This is part 3 in a multi-part series; feel free to refer to part 1 and part 2 for more context.
The following software versions are used:
If you sign up for a ServiceNow Developer account, ServiceNow offers a free instance that can be used for replicating and testing this functionality. Your ServiceNow instance needs to be able Continue reading
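As a rough sketch of the Tower side of such a call, launching a job template over the REST API with an OAuth2 bearer token looks roughly like the snippet below; in this scenario, ServiceNow would issue the equivalent HTTP request. The hostname, job template ID, and token are placeholders, and an OAuth2 token is assumed to have been created in Tower beforehand.

# Minimal sketch: launch an Ansible Tower job template via its REST API using an
# OAuth2 bearer token. TOWER_HOST, JOB_TEMPLATE_ID and OAUTH2_TOKEN are placeholders.
import requests

TOWER_HOST = "https://tower.example.com"   # placeholder
JOB_TEMPLATE_ID = 42                       # placeholder
OAUTH2_TOKEN = "REPLACE_WITH_TOKEN"        # assumed to be created in Tower beforehand

resp = requests.post(
    f"{TOWER_HOST}/api/v2/job_templates/{JOB_TEMPLATE_ID}/launch/",
    headers={"Authorization": f"Bearer {OAUTH2_TOKEN}"},
    json={"extra_vars": {"requested_by": "servicenow"}},  # only honored if the template prompts for extra vars
    verify=True,   # point this at an internal CA bundle if Tower uses one
)
resp.raise_for_status()
print("Launched Tower job:", resp.json().get("id"))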
The offering features a hybrid and multi-cloud file backup tool that enables long-term retention...
Docker Enterprise was built to be secure by default. When you build a secure-by-default platform, you need to consider security validation and governmental use. Docker Enterprise has become the first container platform to complete the Security Technical Implementation Guide (STIG) certification process. Thanks to the Defense Information Systems Agency (DISA) for its support and sponsorship. Being the first container platform to complete the STIG process through DISA means a great deal to the entire Docker team.
The STIG took months of work writing and validating the controls. What does it really mean? Having a STIG allows government agencies to ensure they are running Docker Enterprise in the most secure manner. The STIG also provides validation for the private sector. One of the great concepts in any compliance framework, like the STIGs, is the idea of inherited controls. Adopting a STIG recommendation helps improve an organization’s security posture. Here is a great blurb from DISA’s site:
The Security Technical Implementation Guides (STIGs) are the configuration standards for DOD IA and IA-enabled devices/systems. Since 1998, DISA has played a critical role enhancing the security posture of DoD’s security systems by providing the Security Technical Implementation Guides (STIGs). The STIGs Continue reading
“I wouldn’t put another dime into the network,” said industry analyst Earl Lum. “They’ve...
On today's Heavy Networking, our guest walks us through a project that brought both ACI and NSX into the same data center at a very large company. We discuss the drivers for ACI in the underlay and NSX in the overlay, the learning curves for each product, challenges and successes, and more. Our guest is Derek Wilson, a Principal Network Consultant.
The post Heavy Networking 476: Running ACI And NSX In The Same Data Center appeared first on Packet Pushers.
Arm CEO Simon Segars said that the company is adding a new feature to its processors that will...