Moving large-scale enterprise operations into the cloud is not a decision to be made lightly. There are engineering and financial considerations, and the process of weighing the costs, pros, and cons of such a move is significantly more complex than simply comparing the expense of running a workload on-premises versus in a public cloud.
Still, the trend is toward businesses making the move to one degree or another, driven by the ability to scale up or down with the workload and to pay only for the infrastructure resources they use, rather than putting up the capital expense to …
FICO CIO on the Costs, Concerns of Cloud Transition was written by Nicole Hemsoth at The Next Platform.
Today, we’re pleased to announce that containerd (pronounced Con-Tay-Ner-D), an industry-standard runtime for building container solutions, has reached its 1.0 milestone. containerd has already been deployed in millions of systems in production today, making it the most widely adopted runtime and an essential upstream component of the Docker platform.
Built to address the needs of modern container platforms like Docker and orchestration systems like Kubernetes, containerd ensures users have a consistent dev-to-ops experience. From Docker’s initial announcement last year that it was spinning out its core runtime to its donation to the CNCF in March 2017, the containerd project has experienced significant growth and progress over the past 12 months.
Within both the Docker and Kubernetes communities, there has been a significant uptick in contributions from independents and CNCF member companies alike, including Docker, Google, NTT, IBM, Microsoft, AWS, ZTE, Huawei, and ZJU. Similarly, the maintainers have been working to add key functionality to containerd. The initial containerd donation provided everything users need to ensure a seamless container experience, including methods for:
The company's vetting process found a flaw in the previous Kubernetes release.

It’s ironic that the one thing most programmers would really rather not have to spend time dealing with is... a computer. When you write code it’s written in your head, transferred to a screen with your fingers and then it has to be run. On. A. Computer. Ugh.
Of course, code has to be run and typed on a computer so programmers spend hours configuring and optimizing shells, window managers, editors, build systems, IDEs, compilation times and more so they can minimize the friction all those things introduce. Optimizing your editor’s macros, fonts or colors is a battle to find the most efficient path to go from idea to running code.
CC BY 2.0 image by Yutaka Tsutano
Once the developer is master of their own universe they can write code at the speed of their mind. But when it comes to putting their code into production (which necessarily requires running their programs on machines that they don’t control) things inevitably go wrong. Production machines are never the same as developer machines.
If you’re not a developer, here’s an analogy. Imagine carefully writing an essay on a subject dear to your heart and then publishing it only to be Continue reading
In order to mix EX switches and QFX switches in the same VCF, you need to enable mixed mode. Unfortunately, this requires all members of the VCF to reboot:
{master:1}
imtech@sw0-24c> request virtual-chassis mode fabric mixed
fpc0:
--------------------------------------------------------------------------
Mode set to 'Fabric with mixed devices'. (Reboot required)
fpc2:
--------------------------------------------------------------------------
Mode set to 'Fabric with mixed devices'. (Reboot required)
fpc3:
--------------------------------------------------------------------------
Mode set to 'Fabric with mixed devices'. (Reboot required)
fpc1:
--------------------------------------------------------------------------
WARNING, Virtual Chassis Fabric mode enabled without a valid software license.
Please contact Juniper Networks to obtain a valid Virtual Chassis Fabric License.
Mode set to 'Fabric with mixed devices'. (Reboot required)
{master:1}
imtech@sw0-24c>
Once you’ve cabled up your QSFP ports between the EX4300 you are adding and the QFX spines, you need to do the following:
Enable the VCF port on the QFX spine:
request virtual-chassis vc-port set pic-slot 0 port 48
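A matching VCF port typically needs to be enabled on the EX4300 side as well. The pic-slot and port numbers below are illustrative (EX4300 QSFP+ uplinks usually sit on pic-slot 1), so adjust them to your own cabling:

request virtual-chassis vc-port set pic-slot 1 port 0

Once both ends are set, you can verify that the VCF links came up with:

show virtual-chassis vc-port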
This is VMware’s latest move in its ongoing push to be the “glue of the hybrid cloud.”
Imagine you wanted to have an Internet connection and someone decided that you had to wait an undefined amount of time (maybe a year, maybe 10 years) to get access. Would you find it fair? I wouldn’t!
Unfortunately, this is the situation that millions of people in remote communities around the world have to accept. They will have Internet access only when operators decide to give it to them, which could be a decade or more away. For a long time, these people won’t enjoy the benefits the rest of us enjoy today. And as more and more services go online – and exclusively online – they will lose services they used to have, because those services have moved to a new place they cannot reach.
This is happening because operators can’t expand everywhere at the same time; they have to choose where they go this year, then next year, etc. There are some places where they will never go because those places are not going to bring them money. Among those regions that operators will never choose to go, we find, sadly, are the poorest rural areas that desperately need the economic and development opportunities Continue reading
Networking pros with these job titles typically make more than $109,000 per year.
The post Tier 1 carrier performance report: November, 2017 appeared first on Noction.
Here’s a question I got on one of my ancient blog posts:
How many OSPF process IDs can be used in a single VRF instance?
Seriously? You have to ask that? OK, maybe the question isn’t as simple as it looks. It could be understood as:
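One plausible reading (can more than one OSPF process be attached to a single VRF?) has a concrete answer on Cisco IOS, where multiple OSPF processes can indeed reference the same VRF. A minimal sketch, with illustrative VRF name, process IDs, and prefixes:

ip vrf RED
 rd 65000:1
!
router ospf 10 vrf RED
 network 10.0.0.0 0.0.0.255 area 0
!
router ospf 20 vrf RED
 network 10.0.1.0 0.0.0.255 area 0

Whether you should run more than one process per VRF is, of course, a different question.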
Read more ...

Striking acceptable training times for GPU-accelerated machine learning on very large datasets has long been a challenge, in part because the options for GPUs with larger on-board memory are limited.
For those who are training against massive volumes (in the many millions to billions of examples) using cloud infrastructure, the impetus is greater than ever to pare down training time, given per-hour instance costs, and to opt for cloud-based GPU acceleration on hardware with more memory (the more expensive Nvidia P100 with 16 GB of memory over a more standard 8 GB GPU instance). Since hardware limitations are not …
Faster Machine Learning in a World with Limited Memory was written by Nicole Hemsoth at The Next Platform.
The company says the new capabilities don't compete with its service provider customers.
It has been a long time coming, but hyperconverged storage pioneer Nutanix is finally letting go of hardware, shifting from being a server-storage hybrid appliance maker to a company that sells software that provides hyperconverged functionality on whatever hardware large enterprises typically buy.
The move away from selling appliances was something that The Next Platform has been encouraging Nutanix to do to broaden its market appeal, but until the company reached a certain level of demand from customers, Nutanix had to restrict its hardware support matrix so it could affordably put a server-storage stack in the field and not …
Disaggregated Or Hyperconverged, What Storage Will Win The Enterprise? was written by Timothy Prickett Morgan at The Next Platform.