With over 400 self-proclaimed IoT platforms in the market, it is no surprise that industrial enterprises struggle to identify, test and select a high-quality IoT platform. Platform vendors’ marketing materials carry the same messages, their RFx responses invariably affirm “full compliance” with every requested capability, and they hold partnerships with the same cloud vendors. In the end, the only way to truly know each platform is to use it.

What makes a great IoT AEP?
An Application Enablement Platform (AEP) is a technology-centric offering optimized to deliver a best-of-breed, industry-agnostic, extensible middleware core for building a set of interconnected or independent IoT solutions for customers. An AEP links IoT devices and applications, delivering the data that allows industrial enterprises to implement predictive maintenance, machine learning, factory automation, asset logistics, surveillance and many other applications. With IoT platform revenue projected to grow to USD 63.4 billion by 2026, application enablement is one of the most in-demand categories of enterprise IoT platform.
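To make the middleware role concrete, here is a minimal, illustrative sketch of the publish/subscribe pattern at the heart of an AEP: devices publish telemetry into a core, and applications such as predictive maintenance subscribe to the streams they need. The class, topic and field names are hypothetical, not any vendor’s API.

```python
# Toy sketch of an AEP's middleware core: a publish/subscribe hub that
# links device telemetry to independent applications. All names here
# are illustrative, not any real platform's API.
from collections import defaultdict
from typing import Callable

class EnablementCore:
    """Minimal pub/sub core connecting devices to applications."""
    def __init__(self) -> None:
        self._subscribers: dict = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        # An application registers interest in one telemetry stream.
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, reading: dict) -> None:
        # A device gateway pushes a reading; every subscriber sees it.
        for handler in self._subscribers[topic]:
            handler(reading)

core = EnablementCore()
# A predictive-maintenance app watches vibration readings from one asset.
core.subscribe("factory/press-01/vibration",
               lambda r: print("maintenance check:", r))
# A device gateway publishes a telemetry reading.
core.publish("factory/press-01/vibration", {"rms_mm_s": 4.2, "ts": 1718000000})
```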
John Perry Barlow, who died in San Francisco last week at age 70, was an important pioneer for internet freedom. But he was much, much more than that. He was the kind of Renaissance Man that today’s internet moguls can’t even dream of emulating. And that is a huge loss for the world of technology — and the world at large.

Barlow’s wide-ranging influence
You may not have heard of Barlow, but you’ve probably been influenced by him in a wide variety of surprising ways. For one thing, he was a co-founder — and at his passing, vice chairman — of the Electronic Frontier Foundation, which is considered “the leading nonprofit organization defending civil liberties in the digital world.” Back in 1990, when the EFF was formed, Barlow helped popularize the term “cyberspace.” He was a director of the WELL (Whole Earth ’Lectronic Link), the seminal online community, and he was an influential early voice at Wired magazine.
IDC tells us that most companies are using more than one cloud and that cloud usage isn’t just about cost savings. Three out of every four companies are using cloud to chase additional revenue in the form of new customers, risk mitigation, IoT enablement or time-to-market gains. Most are using multiple external cloud services.

However, especially as microservices become the dominant approach to new application development because of the iteration-speed improvements they provide, it has become important to distinguish the different ways that more than one cloud can be used. Specifically, the differences lie in where you sit in an organization and what you are trying to optimize from that seat. Although we have historically used the terms interchangeably, hybrid cloud and multicloud are not the same.
Remember just a few years ago, when everyone was talking about cloud computing? While cloud was consuming all the air in the room, few people were paying attention to another technology trend—one with the potential to transform industrial enterprises. I’m talking about edge computing.

The idea of placing computing resources at the network’s edge—at or near where production processes occur—is not new. Industrial control has relied on distributed computers to run manufacturing machines and processes for decades. But as manufacturers come under increasing competitive pressure, the need to optimize their efficiency, productivity and quality has become a matter of survival. This imperative is driving companies across the industrial spectrum to look at how pushing intelligence out to the edge can help them gain a competitive advantage.
Judging by all the media attention that the Internet of Things (IoT) gets these days, you would think that the world was firmly in the grip of a physical and digital transformation. The truth, though, is that we are all still in the early days of the IoT.
The opening to “Star Trek: The Original Series” featured Captain Kirk proclaiming that space was the “Final Frontier” and that the Enterprise was going to “boldly go where no man has gone before.”

In networking, Wi-Fi is really the final frontier, as it lets us explore strange new apps and seek out new tweets regardless of where we are. Untethered from cables, we are as free to roam around as the Enterprise was in space. There should be no question that good Wi-Fi is as important to us today as dilithium crystals were to the Enterprise.

But what happens when Wi-Fi isn’t available? Or just as bad, when the connectivity is almost there but not quite strong enough to be useful? I recall being in a hotel where I couldn’t connect to Wi-Fi at the desk in the room, but I could connect if I sat in the hallway by the entry door, so I wound up sitting there all night trying to get work done. It’s easy to say that Wi-Fi should be everywhere, but sometimes it’s hard to achieve that because of interference or cabling problems.
Today is launch day for Sylabs — a new company focused on promoting Singularity within enterprise and high-performance computing (HPC) environments and on advancing the fields of artificial intelligence (AI), machine/deep learning, and advanced analytics.

And while it’s launch day for Sylabs, it’s not launch day for the technology it will be promoting. Singularity has already made great strides for HPC and has given Linux itself more prominence in HPC as it has moved more deeply into the areas of scientific and enterprise computing. With its roots at Lawrence Berkeley National Laboratory (Berkeley Lab), Singularity is already providing a platform for a lot of heavy-duty scientific research and is expected to move into many other areas, such as machine learning, and may even change the way some difficult analytical problems are approached.
It was inevitable. Once Google published its findings for the Meltdown and Spectre vulnerabilities in CPUs, the bad guys used that as a roadmap to create their malware. And so far, researchers have found more than 130 malware samples designed to exploit Spectre and Meltdown.

If there is any good news, it’s that the majority of the samples appear to be in the testing phase, according to antivirus testing firm AV-TEST, or are based on proof-of-concept software created by security researchers. Still, the number is rising fast.
Terahertz data links promise significant advantages over existing microwave-based wireless data transmission, and the technology will ultimately beat out the upcoming 5G millimeter-wave frequencies if progress continues, researchers say. The reason for the optimism is that the terahertz band is more capacious than existing radio bands. It’s also less power-hungry, and new technical advances keep being made.

Also read: New wireless science promises 100-times faster Wi-Fi
The latest terahertz-level advance, announced this week by scientists at Brown University, is the ability to bounce the mega-bandwidth-carrying waves of energy around corners. Quashing that line-of-sight requirement introduces a level of robustness not seen before.
On the topic of measuring WAN metrics, most engineers think to look at the standard statistics of loss, latency, jitter, and reachability to determine path quality. This is good information for a routing protocol making decisions about packet flow at layer 3 of the OSI model, but it is incomplete when viewed from the perspective of the overall user experience. For an SD-WAN solution to provide materially better value than a typical packet router, it must look beyond the metrics the router considers.

SD-WAN devices shouldn’t be considered routers in the conventional sense. Routers use local tables and algorithms such as Dijkstra’s to determine the shortest path to a destination for a packet. The term packet is important here: it is all that the router cares about. By definition, a router is a device that functions at layer 3 to deliver packets to their destination network. When there is a problem, the router processes the topology change and computes new routing-table entries, a point-in-time decision among the available paths, and those topology changes take time to process.
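As a reference point for the shortest-path computation mentioned above, here is a brief sketch of Dijkstra’s algorithm over a toy topology with illustrative link costs. A real router recomputes a table like this after every topology change, which is exactly the point-in-time behavior being contrasted with SD-WAN.

```python
# Dijkstra's algorithm over a small illustrative topology: the kind of
# computation a conventional router runs to rebuild its routing table
# after a topology change. Link costs are made up for the example.
import heapq

def dijkstra(graph: dict, source: str) -> dict:
    """Return the lowest total link cost from source to every reachable node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry; a cheaper path was already found
        for neighbor, weight in graph[node].items():
            new_cost = cost + weight
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist

topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(dijkstra(topology, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```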
Cisco Enterprise Agreements (“EAs”) are becoming an increasingly popular vehicle for purchasing and consuming software products and services from Cisco.
Network functions virtualization (NFV) enables IT pros to modernize their networks with modular software running on standard server platforms.

Over time, NFV will deliver high-performance networks with greater scalability, elasticity, and adaptability at reduced costs compared to networks built from traditional networking equipment. NFV covers a wide range of network applications but is driven primarily by new network requirements, including video, SD-WAN, the Internet of Things and 5G.
AMD scored a significant win in its efforts to retake ground in the data center with Dell announcing three new PowerEdge servers aimed at the usual high-performance workloads, like virtualized storage-area networks (VSAN), hybrid-cloud applications, dense virtualization, and big data analytics. The servers will run AMD’s Epyc 7000 series processors.

What’s interesting is that two of the three new Dell servers, the PowerEdge R6415 and R7415, are single-socket systems. Usually a single-socket server is a small tower stuck in a closet or under a desk and running as a file and print server or departmental server, not something running enterprise workloads. The R7425 is the only dual-socket server being introduced.
As an IT professional, you were hired for a certain, specialized job. But why can’t you seem to get it done? Maybe you’ve been busy “fighting fires.” For anyone responsible for network infrastructure, that’s a leading culprit. But there are others.

On the theory that to solve a problem you first need to identify it, we’ve listed a number of obstacles that may be keeping you and your team from the mission-critical parts of your jobs. Taking note of these distractions can be a first step toward fashioning solutions that lead to better outcomes for you and your organization.

Infrastructure firefighting
When things don’t go according to plan and you have to trade your strategic IT roadmap for tactical, reactionary decisions, that’s infrastructure firefighting. The network may not be working as intended; capacity planning may be off the mark; production issues could be causing outages, requiring in-depth research and explanation to prevent repeat outages in the future. Outages may require special actions, as we discuss in this article. You may not have signed on to extinguish unwanted fires, but like it or not, that has become part of your job.
Mean Time to Repair (MTTR) is a common term in IT that represents the average time required to repair a failed component or device. In networking, MTTR is often longer than desired because there are many interdependencies, whereby an issue in one part of the network may cause a problem much farther downstream. Furthermore, a configuration change might appear to create a new issue, when in fact it just exposed something that was there all along but hidden. It takes quite a bit of forensics to get to the root cause of a network problem. In the meantime (pun intended), there is plenty of blame to go around. The Wi-Fi network seems to be at the top of the list when the accusations fly – more so than any other section of the network. Why is that?
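For concreteness, MTTR is simply the arithmetic mean of repair durations. A tiny worked example with made-up incident data:

```python
# MTTR as defined above: the average time to repair a failed component.
# Three illustrative outages taking 30, 90, and 45 minutes to repair
# give an MTTR of (30 + 90 + 45) / 3 = 55 minutes.
repair_minutes = [30, 90, 45]  # made-up incident durations
mttr = sum(repair_minutes) / len(repair_minutes)
print(f"MTTR: {mttr:.0f} minutes")  # MTTR: 55 minutes
```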
Comeback kid AMD announced on its quarterly earnings call that it intends to have a silicon fix for variant 2 of the Spectre exploit, the only one of the Meltdown and Spectre exploits it’s vulnerable to, by 2019 with its new Zen 2 core.

The company also said it will ramp up GPU card production to meet the insane demand these days thanks to cryptominers, although it said the biggest challenge will be finding enough memory to make the cards.

Also read: Meltdown and Spectre: How much are ARM and AMD exposed?
It's hard to believe that in 2018 we are seeing such shortages in computing hardware, but there you have it.
When enterprises started moving workloads and applications to the public cloud, it made sense to adapt existing networking technologies to the new domain. But while compute and storage have successfully become ‘cloud-like,’ networking hasn’t.

Cloud networking solutions offered by companies including Aviatrix, Cisco, and Juniper Networks are all vying to help organizations solve networking challenges when transforming their infrastructure to public cloud. But as cloud implementations become more complex, it’s becoming clear that cloud connectivity solutions based on virtualized data-center networking technologies lack the agility and elasticity required to build and scale in the public cloud.
Everyone’s heard of the IoT – smart thermostats, Internet-connected refrigerators, connected lightbulbs – but there’s a subset called industrial IoT that has a much more significant day-to-day impact on businesses, safety and even lives.

The term IIoT refers to the Industrial Internet of Things. In broad strokes, it’s the application of instrumentation and connected sensors and other devices to machinery and vehicles in the transport, energy and industrial sectors.

What that means in practice varies widely. One IIoT system could be as simple as a connected rat trap that texts home to say that it’s been activated, while another might be as complicated as a fully automated mass-production line that tracks maintenance, productivity and even ordering and shipping information across a huge, multi-layered network.
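The rat-trap example is simple enough to sketch end to end. Below is a minimal, hypothetical illustration of a trap reporting its activation home; the endpoint URL and payload fields are invented for this sketch and are not any real product’s API.

```python
# Sketch of the simple end of the IIoT spectrum described above: a
# connected trap reporting an activation event home. The endpoint URL
# and payload fields are hypothetical, for illustration only.
import json
import time
import urllib.request

def report_activation(device_id: str) -> None:
    event = {
        "device": device_id,
        "event": "trap_activated",
        "ts": int(time.time()),
    }
    req = urllib.request.Request(
        "https://example.com/iiot/events",  # hypothetical ingestion endpoint
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire-and-forget report home

# report_activation("trap-0042")
```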
I gave a keynote presentation at MEF and answered two questions that I’m commonly asked:
What’s next after SD-WAN?
What’s the relationship between SD-WAN and NFV?
If you’ve read my previous blogs, you can probably guess my answer to the first question. I believe the software-defined WAN must evolve into the self-driving WAN. By augmenting automation with machine learning and AI, we can build WANs that dynamically translate business intent into action, with central orchestration working in tandem with the WAN edge. For this blog, I will focus on answering the second question.
If portions of enterprise data-center networks have no need to communicate directly with the internet, then why do we configure routers so every system on the network winds up with internet access by default?