Whether systems are built for capacity or capability, the conventional wisdom about memory provisioning on the world’s fastest machines is changing quickly. The rise of 3D memory has thrown a curveball into the field as HPC centers weigh the specific tradeoffs between traditional, stacked, and hybrid combinations of both on next-generation supercomputers. In short, provisioning memory on these machines is always tricky; with a new entrant like stacked memory in the design process, it is useful to gauge where 3D devices might fit.
While stacked memory is getting a great deal of airplay, for some HPC application areas, it might fall just …
3D Memory Sparks New Thinking in HPC System Design was written by Nicole Hemsoth at The Next Platform.
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
VMware’s NSX virtual network technology can help organizations achieve a greater level of network security, but how you approach deployment will vary depending on whether you are working with new applications (greenfield) or are moving applications from existing infrastructure to NSX (brownfield).
NSX’s micro-segmentation capabilities essentially allow placement of virtual firewalls around every server to control East-West traffic, thereby limiting lateral exploration of networks by hackers, and making it significantly easier to protect applications and data. It can enable a level of security that previously would have been prohibitively expensive and complicated using traditional hardware.
The post Worth Reading: A digital Geneva treaty appeared first on 'net work.
Affirmed's IoT platform supports network slicing.
Cardwell, Neal, Yuchung Cheng, C. Stephen Gunn, Soheil Hassas Yeganeh, and Van Jacobson. “BBR: Congestion-Based Congestion Control.” Queue 14, no. 5 (October 2016): 50:20–50:53. doi:10.1145/3012426.3022184.
Article available here
Slides available here
In the “old days,” packet loss was a major problem; so much so that just about every routing protocol has a number of different mechanisms to ensure the reliable delivery of packets. For instance, in IS-IS, we have—
It’s not that early protocol designers were dumb; it’s that packet loss really was this much of a problem. Congestion in the more recent sense was not something you would even have thought of: memory was expensive, so buffers were necessarily small, and hence a packet would obviously be dropped before it was buffered for any length of time. TCP’s retransmission mechanism, the parameters around the window size, and the slow start mechanism were all designed to react to packet drops. Further, it might be obvious to think that any particular stream might provide …
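The loss-driven behavior described above can be illustrated with a toy model. The sketch below is a hypothetical simulation of a Reno/Tahoe-style congestion window, not a real TCP implementation: window sizes are counted in segments, one loop iteration stands in for one round-trip time, and the caller supplies the rounds on which a loss occurs. The function name `evolve_cwnd` and all parameters are invented for illustration.

```python
# Toy model of loss-driven TCP congestion control.
# Assumptions (not from the source text): cwnd in segments,
# one iteration = one RTT, losses injected by the caller.

def evolve_cwnd(loss_rounds, rounds=10, initial_cwnd=1, initial_ssthresh=8):
    """Return the congestion window observed after each round."""
    cwnd, ssthresh = initial_cwnd, initial_ssthresh
    history = []
    for r in range(rounds):
        if r in loss_rounds:
            # Packet loss detected: halve the threshold and restart
            # from slow start (Tahoe-style reaction).
            ssthresh = max(cwnd // 2, 2)
            cwnd = 1
        elif cwnd < ssthresh:
            # Slow start: exponential growth, capped at ssthresh.
            cwnd = min(cwnd * 2, ssthresh)
        else:
            # Congestion avoidance: additive increase, one segment per RTT.
            cwnd += 1
        history.append(cwnd)
    return history

print(evolve_cwnd(loss_rounds={5}))
# [2, 4, 8, 9, 10, 1, 2, 4, 5, 6]
```

The sawtooth in the output is the signature of loss-based control: the window grows until a drop, collapses, and climbs again. BBR's departure, per the paper cited above, is to pace sends from measured delivery rate and round-trip time rather than waiting for drops.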
Selecting one vendor to oversee VNFs can help de-risk network virtualization.
Demos of network slicing show operators how to separate critical traffic from cat videos.
In November, Radisys contributed its EPC to the CORD project to create a vEPC.