Are microservices about to revolutionize the Internet of Things?

Along with the rise of cloud computing, Agile, and DevOps, the increasing use of microservices has profoundly affected how enterprises develop software. Now, at least one Silicon Valley startup hopes the combination of microservices and edge computing is going to drive a similar re-think of the Internet of Things (IoT) and create a whole new software ecosystem.

Frankly, that seems like a stretch to me, but you can’t argue with the importance of microservices to modern software development. To learn more, I traded emails with Said Ouissal, founder and CEO of ZEDEDA, which is all about “deploying and running real-time edge apps at hyperscale” using IoT devices.

What is data deduplication, and how is it implemented?

Deduplication is arguably the biggest advancement in backup technology in the last two decades. It is single-handedly responsible for enabling the shift from tape to disk for the bulk of backup data, and its popularity only increases with each passing day. Understanding the different kinds of deduplication, also known as dedupe, is important for any person looking at backup technology.

What is data deduplication? Dedupe is the identification and elimination of duplicate blocks within a dataset. It is similar to compression, which only identifies redundant blocks in a single file. Deduplication can find redundant blocks of data between files from different directories, different data types, even different servers in different locations.
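To make the idea concrete, here is a minimal, hypothetical sketch of block-level dedupe in Python. It is my own illustration, not any vendor's implementation: fixed-size blocks and an in-memory dict stand in for the real block store, while production engines add variable-size chunking, persistent fingerprint indexes and compression.

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size chunking; real products often use variable-size chunks

def dedupe_store(paths, store=None):
    """Store files as unique blocks keyed by SHA-256 fingerprint."""
    store = store if store is not None else {}  # fingerprint -> block bytes
    manifests = {}                              # path -> ordered fingerprint list
    for path in paths:
        fingerprints = []
        with open(path, "rb") as f:
            while block := f.read(BLOCK_SIZE):
                digest = hashlib.sha256(block).hexdigest()
                store.setdefault(digest, block)  # duplicate blocks are stored only once
                fingerprints.append(digest)
        manifests[path] = fingerprints
    return store, manifests

def restore(manifest, store):
    """Rebuild a file's contents from its fingerprint list."""
    return b"".join(store[fp] for fp in manifest)
```

The dedupe ratio then falls out naturally: total logical bytes read versus the unique bytes actually kept in the store.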

Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples

Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples Athalye et al., ICML’18

There has been a lot of back and forth in the research community on adversarial attacks and defences in machine learning. Today’s paper examines a number of recently proposed defences and shows that most of them rely on forms of gradient masking. The authors develop attack techniques to overcome such defences, and analyse 9 defences from ICLR 2018 claiming to protect against white-box attacks. 7 of these turn out to rely on obfuscated gradients, and 6 of those fall to the new attacks (while the seventh partially succumbs). Athalye et al. won a best paper award at ICML’18 for this work.
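To give a flavour of the attack techniques, here is a minimal sketch (my own illustration, not the paper's code) of Backward Pass Differentiable Approximation (BPDA) in PyTorch, using input quantisation as a stand-in for a gradient-shattering defence and a single FGSM step for brevity; the paper's actual attacks iterate.

```python
import torch
import torch.nn.functional as F

def quantize(x, levels=16):
    # A stand-in "shattered gradients" defence: non-differentiable
    # input quantisation (torch.round has zero gradient almost everywhere).
    return torch.round(x * (levels - 1)) / (levels - 1)

class BPDAQuantize(torch.autograd.Function):
    # Forward pass: apply the real defence. Backward pass: approximate
    # it with the identity, so useful gradients reach the attacker.
    @staticmethod
    def forward(ctx, x):
        return quantize(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

def fgsm_bpda(model, x, y, eps=8 / 255):
    """One signed-gradient (FGSM) step taken through the BPDA surrogate."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(BPDAQuantize.apply(x_adv)), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```

The point is that the defence still runs exactly as designed on the forward pass; only the attacker's gradient estimate routes around it.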

One of the great things about work on adversarial attacks and defences, as we’ve looked at before, is that it illuminates the strengths and weaknesses of current technology. For my own part, depending on the threat model you choose, I’m currently of the opinion that we’re unlikely to find a robust adversarial defence without a more radical re-think of how we’re doing image classification. If we’re talking about the task of ‘find an image that doesn’t fool a human, but …

50 Shades of Open Source: How to Determine What’s Suitable for Enterprise White Box Networking

To date, the open source community has been quite successful at coming up with scalable and reliable implementations for enterprise servers, databases and more. Yet many enterprises remain skittish about implementing open source software, nowhere more so than in the networking space.

Part of the reason is that there are so many different implementations of open source software, many of them backed by different entities with different agendas. Having many minds contribute to an open source project can be a good thing – until it comes time to make a decision about something and stick with it, so you can get a working product out the door. Enterprises need practical implementations that they can count on day in and day out to get a job done.

Defining the shades of open source
Open source essentially comes in different shades, and not all of them are created equal. Understanding them will help you determine whether the open source implementation you have in mind has the kind of reliability and stability you need in an enterprise IT tool or application.

At a base level is the “pure” open source community, where like-minded people contribute their time and knowledge to a project. …

Why DHCP’s days might be numbered

Dynamic Host Configuration Protocol (DHCP) is the standard way network administrators assign IP addresses in IPv4 networks, but eventually organizations will have to pick between two protocols created specifically for IPv6 as the use of this newer IP protocol grows.

DHCP, which dates back to 1993, is an automated way to assign IPv4 addresses, but when IPv6 was designed, it was given an auto-configuration feature dubbed SLAAC (stateless address autoconfiguration) that could eventually make DHCP irrelevant. To complicate matters, a new version of DHCP – DHCPv6 – that performs the same function as SLAAC was independently created for IPv6.

Deciding between SLAAC and DHCPv6 isn’t something admins will have to do anytime soon, since the uptake of IPv6 has been slow, but it is on the horizon.
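To illustrate how SLAAC can work with no server at all, here is a small sketch of the classic modified EUI-64 scheme (RFC 4291), in which a host builds its own interface identifier from its MAC address and a router-advertised /64 prefix. (Modern hosts often prefer randomised privacy addresses per RFC 4941 instead, so treat this purely as an illustration.)

```python
def slaac_eui64(prefix: str, mac: str) -> str:
    """Derive an IPv6 address from a /64 prefix and a MAC address
    using the modified EUI-64 scheme (RFC 4291).
    Assumes the prefix string ends in '::'."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                             # flip the universal/local bit
    iid = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert FF:FE in the middle
    groups = [f"{(iid[i] << 8) | iid[i + 1]:x}" for i in range(0, 8, 2)]
    return prefix + ":".join(groups)

# A host on the 2001:db8::/64 network with this MAC numbers itself:
print(slaac_eui64("2001:db8::", "00:1a:2b:3c:4d:5e"))
# -> 2001:db8::21a:2bff:fe3c:4d5e
```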

The Total Economic Impact of Red Hat Ansible Tower

The Total Economic Impact of Red Hat Ansible Tower is a Red Hat-commissioned Forrester Consulting study published in June 2018. The study demonstrates the cost savings and business benefits enabled by Ansible. Let’s dive into what Ansible Tower enables, the efficiencies gained, the acceleration of revenue recognition, and other tangible benefits.

Faster Revenue Recognition

Revenue recognition is a critical aspect of business operations, and quickening its pace is something every organization has its eye on. Forrester’s TEI of Ansible Tower observed a company cutting delivery lead times by 66%. Imagine the pace of feature deployment an organization experiences when cutting lead times from days to hours!

System reconfiguration times fell as well. Automating the changes required by new bugs or policy updates across systems helps mitigate the costly impact of reconfiguration. The same company found that reconfiguring a fleet of systems through Ansible automation reduced staff hours for this type of work by 94%.

The TEI also measured the security and compliance gains of Ansible Tower, which reduced staff hours spent patching systems by 80%. This also meant that patching could occur more often. This helped reduce the …

Proactive Ops for Container Orchestration Environments: Monitoring and Logging Strategies with Docker Enterprise

Over the last decade, the popularity of microservices and highly scalable systems has grown, and with it the complexity of applications, which are now distributed heavily across the network with many moving pieces and potential failure modes.

This architectural evolution has changed monitoring requirements, creating a need for scalable, insightful tooling and practices that help us identify, debug and resolve issues in our systems before they impact the business and our end users (internal and/or external).

I recently gave a talk at DockerCon SF 18 discussing functionality in Docker Enterprise that enables operators to more easily monitor their container platform environment, along with some key metrics and best practices to triage and remediate issues before they cause downtime.

You can watch the full talk here:

Monitoring Methodologies

One of the most well-known early monitoring techniques is the USE method from Brendan Gregg. USE specifies that for every resource we should monitor utilization (time spent servicing work), saturation (the degree to which a resource has work it can’t service) and errors (the number of error events). This model works well for more hardware- and node-centric metrics, but network-based …
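As a toy, Linux-only illustration of the U and S in USE for the CPU resource (my own sketch, not from the talk; a real deployment would use an agent such as cAdvisor or Prometheus node_exporter rather than reading /proc directly):

```python
import os
import time

def cpu_jiffies():
    # The first line of /proc/stat holds aggregate CPU time counters.
    with open("/proc/stat") as f:
        fields = [int(v) for v in f.readline().split()[1:]]
    idle = fields[3] + fields[4]  # idle + iowait
    return idle, sum(fields)

def use_cpu_snapshot(interval=1.0):
    idle0, total0 = cpu_jiffies()
    time.sleep(interval)
    idle1, total1 = cpu_jiffies()
    utilization = 1 - (idle1 - idle0) / (total1 - total0)
    # Load average above the core count is a rough saturation signal;
    # errors (the E in USE) would come from counters such as the error
    # columns in /proc/net/dev.
    saturation = os.getloadavg()[0] / os.cpu_count()
    return utilization, saturation

util, sat = use_cpu_snapshot()
print(f"CPU utilization: {util:.0%}  saturation (load per core): {sat:.2f}")
```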

Intel continues to optimize its products around AI

Normally, this is the time of year when Intel would hold its Intel Developer Forum conference, which would be replete with new product announcements. But with the demise of the show last year, the company instead held an all-day event that it live-streamed over the web.

The company’s Data Centric Innovation Summit was the backdrop for a series of processor and memory announcements aimed at the data center and artificial intelligence in particular. Even though Intel is without a leader, it still has considerable momentum. Navin Shenoy, executive vice president and general manager of the Data Center Group, did the heavy lifting.

News about Cascade Lake, the rebranded Xeon server chip

First is news around the Xeon Scalable processor, the rebranded Xeon server chip. The next-generation chip, codenamed “Cascade Lake,” will feature a memory controller for Intel’s new Optane DC persistent memory and an embedded AI accelerator that the company claims will speed up deep-learning inference workloads eleven-fold compared with current-generation Xeon Scalable processors.