TM Forum Tool Targets Slow Digital Adoption by Operators
The Digital Maturity Model application takes five metrics into account.
We are still chewing through all of the announcements and talk at the GPU Technology Conference that Nvidia hosted in its San Jose stomping grounds last week, and as such we are thinking about the much bigger role that graphics processors are playing in datacenter compute – a realm that has seen five decades of dominance by central processors of one form or another.
That is how CPUs got their name, after all. And perhaps this is a good time to remind everyone that systems used to be a collection of different kinds of compute, and that is why the …
The Embiggening Bite That GPUs Take Out Of Datacenter Compute was written by Timothy Prickett Morgan at The Next Platform.
The post Worth Reading: A simple start to project management appeared first on rule 11 reader.
In the previous two posts we discussed gathering metrics for long-term trend analysis and then combining them with event-based alerts for actionable results. To combine these two elements, we need strong network monitoring tooling that lets us overlay these activities into an effective solution.
The legacy approach to monitoring is to deploy a monitoring server that periodically polls your network devices via Simple Network Management Protocol. SNMP is a very old protocol, originally developed in 1988. While some things do get better with age, computer protocols are rarely one of them. SNMP has been showing its age in many ways.
Inflexibility
SNMP uses data structures called MIBs (Management Information Bases) to exchange information. These MIBs are often proprietary and difficult to modify or extend to cover new and interesting metrics.
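For flavor, here is a minimal sketch of what a single legacy-style poll looks like in code. The post names no tooling, so the choice of Python's pysnmp library (v4-era synchronous API), the target address, and the community string are all our own assumptions for illustration.

# One SNMPv2c poll of one interface counter, using pysnmp.
# Address, community string, and interface index are placeholders.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

errorIndication, errorStatus, errorIndex, varBinds = next(
    getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),        # SNMPv2c
        UdpTransportTarget(('192.0.2.1', 161)),    # device under management
        ContextData(),
        # The standard IF-MIB maps this name to an OID; proprietary MIBs
        # are where extending coverage gets painful.
        ObjectType(ObjectIdentity('IF-MIB', 'ifInOctets', 1)),
    )
)

if errorIndication:
    print(errorIndication)
else:
    for varBind in varBinds:
        print(' = '.join(x.prettyPrint() for x in varBind))

Note that this retrieves exactly one value from one device; getting a second metric means another full request/response round trip.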
Polling vs. event-driven
Polling doesn’t offer enough granularity to catch every event. For instance, even if you check disk utilization once every five minutes, it may cross the threshold and drop back between polls, and you will never know.
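To make that blind spot concrete, here is a small self-contained simulation, our own illustration with invented numbers: a one-minute spike crosses the threshold between two five-minute polls and never appears in the polled samples.

# Hypothetical utilization curve: baseline 40%, spiking to 95%
# between t=150s and t=210s. All numbers are invented for illustration.
THRESHOLD = 90        # percent
POLL_INTERVAL = 300   # seconds, i.e. a five-minute poll cycle

def disk_utilization(t):
    return 95 if 150 <= t < 210 else 40

# Ground truth at one-second resolution vs. what the poller sees.
breached = any(disk_utilization(t) > THRESHOLD for t in range(900))
polled = [disk_utilization(t) for t in range(0, 900, POLL_INTERVAL)]

print("breach actually occurred:", breached)                        # True
print("polled samples:", polled)                                    # [40, 40, 40]
print("poller saw a breach:", any(v > THRESHOLD for v in polled))   # False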
An inefficient protocol
SNMP’s polling design is a “call and response” protocol, which means the monitoring server will Continue reading
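As a rough back-of-the-envelope illustration of that inefficiency (our numbers, not the post's): every metric on every device costs a request and a matching response, every polling cycle.

# Invented but plausible scale for a mid-sized network.
devices = 500
oids_per_device = 50      # interface counters, CPU, memory, disk, ...
poll_interval_s = 300     # five-minute cycle

requests_per_cycle = devices * oids_per_device
cycles_per_day = 86_400 // poll_interval_s
requests_per_day = requests_per_cycle * cycles_per_day

print(f"{requests_per_cycle:,} requests per cycle")   # 25,000
print(f"{requests_per_day:,} requests per day")       # 7,200,000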
Analytics is an essential element of the transformation to SDN.
The Docker Certification Program provides a way for technology partners to validate and certify their software or plugin as a container for use on the Docker Enterprise Edition platform. Since the program's initial launch in March, more Containers and Plugins have been certified and made available for download.
Certified Containers and Plugins are technologies that are built as Docker containers according to best practices, tested and validated against the Docker Enterprise Edition platform and APIs, checked against security requirements, reviewed by Docker partner engineering, and cooperatively supported by both Docker and the partner. Docker Enterprise Edition and Certified Technology give businesses assurance and support for their critical application infrastructure.
Check out the latest Docker Certified technologies on the Docker Store:
It’s meant to be funny, but it’s not.
The post RFC 1438 – Internet Engineering Task Force Statements Of Boredom (SOBs) appeared first on EtherealMind.
With machine learning, big data, cloud, and NFV initiatives invading the data center, there are implications for data center networking performance.
Reimagining the edge
While the importance of the cloud is obvious to anyone, the increasing importance of the edge is often overlooked. As digitization and the Internet of Things lead to exponential growth in the number of devices, the amount of data being generated by sensors in devices such as self-driving cars, mobile endpoints... Read more →
MPLS won’t be a factor in Comcast’s SD-WAN play.
ADVA is talking to other telcos about using its Ensemble NFV tech in their white box uCPEs.
For a mature company that kickstarted supercomputing as we know it, Cray has done a rather impressive job of reinventing itself over the years.
From its original vector machines, to HPC clusters with proprietary interconnects and custom software stacks, to graph analytics appliances engineered in-house, and now to machine learning, the company tends not to let trends in computing slip by without a new machine.
However, all of this engineering and tuning comes at a cost, one that, arguably, has held Cray back from reaching the new markets that sprang up in the “big data” days of …
Cray Supercomputing as a Service Becomes a Reality was written by Nicole Hemsoth at The Next Platform.