IBM Opens 4 Cloud Data Centers, Reports Declining Revenue
Will its cloud and Watson services transform Big Blue?
Work is now under way on a certification program.
Teams at Saudi Aramco using the Shaheen II supercomputer at King Abdullah University of Science and Technology (KAUST) have managed to scale ANSYS Fluent across 200,000 cores, marking top-end scaling for the commercial engineering code.
The news last year of a code scalability effort that topped out at 36,000 cores on the Blue Waters machine at the National Center for Supercomputing Applications (NCSA) was impressive. That was big news for ANSYS and NCSA, but also a major milestone for Cray. Just as Blue Waters is a Cray system, albeit one at the outer reaches of its lifespan (it was installed …
Engineering Code Scales Across 200,000 Cores on Cray Super was written by Nicole Hemsoth at The Next Platform.
Several years ago, the CEO of a Fortune 100 company remarked: “If you went to bed last night as an industrial company, you’re going to wake up this morning as a software and analytics company.”
Today, these words are more true than ever—but so is the reality that the digital transformation in business has also given rise to significant changes across the IT landscape and, in turn, significant new challenges for IT security.
As people, devices, and objects become more connected, protecting all of these connections and environments has become a top priority for many IT organizations. At the same time, it has also become one of their biggest challenges. Securing each and every interaction between users, applications, and data is no easy feat, especially when those interactions take place across environments that are constantly changing and increasingly dynamic.
So how do you mitigate risk in a world where IT complexity and “anytime, anywhere” digital interactions are growing exponentially? For organizations that are embracing cloud and virtualized environments, three common-sense steps—enabled by a ubiquitous software layer across the application infrastructure and endpoints that exists independently of the underlying physical infrastructure—are proving to be key for providing Continue reading
The post Worth Reading: Ethernet Getting Back on Moore’s Law appeared first on rule 11 reader.
Today I would like to share with you some of the Ansible 2.3 integration work that was done in the latest oVirt 4.1 release. The Ansible integration was quite extensive and included Ansible modules that can be used to automate a wide range of oVirt tasks, including tiered application deployment and virtualization infrastructure management.
While Ansible has multiple levels of integrations, I would like to focus this article on oVirt Ansible roles. As stated in the Ansible documentation: “Roles in Ansible build on the idea of include files and combine them to form clean, reusable abstractions – they allow you to focus more on the big picture and only dive into the details when needed.”
We used the above logic as a guideline for developing the oVirt Ansible roles. We will cover three of the many Ansible roles available for oVirt, and for each I will describe the role's purpose and how it is used.
The purpose of this role is to automatically configure and manage an oVirt datacenter. It will take a newly deployed, but not yet configured, oVirt engine (RHV-M for RHV users), hosts, and storage and Continue reading
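Since the excerpt cuts off before the article's own example, here is a minimal sketch of what driving one of these roles from a playbook could look like. The role name `ovirt-infra`, the variable names, and the vaulted credential are illustrative assumptions, not details from the article; the actual variable interface is defined in the oVirt Ansible roles documentation.

```yaml
---
# Minimal sketch (assumed names): point an oVirt infrastructure role
# at a freshly deployed, unconfigured engine and let it build out the
# datacenter. Variable names here are illustrative, not authoritative.
- name: Configure an oVirt datacenter via an Ansible role
  hosts: localhost
  connection: local
  gather_facts: false

  vars:
    engine_url: https://engine.example.com/ovirt-engine/api
    engine_user: admin@internal
    engine_password: "{{ vault_engine_password }}"  # kept in Ansible Vault

    # Assumed variables describing the desired infrastructure state.
    data_center_name: mydatacenter
    compatibility_version: 4.1

  roles:
    - ovirt-infra
```

Running this with `ansible-playbook` (supplying the vault password) would then converge the engine toward the declared datacenter layout; the same declarative pattern applies to the other oVirt roles the article goes on to cover.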
Modern data centers employ IT automation to cut costs and inject agility
IBM is a bit of an enigma these days. It has the art – some would say black magic – of financial engineering down pat, and its system engineering is still quite good. Big Blue talks about all of the right things for modern computing platforms, although it speaks a slightly different dialect because the company still thinks that it is the one setting the pace, and therefore coining the terms, rather than chasing trails that others are blazing. And it just can’t seem to grow revenues, even after tens of billions of dollars in acquisitions and internal investments over …
The Trials And Tribulations Of IBM Systems was written by Timothy Prickett Morgan at The Next Platform.
Today marks an important milestone for the Open Container Initiative (OCI) with the release of the OCI v1.0 runtime and image specifications – a journey that Docker has been central in driving and navigating over the last two years. It has been our goal to provide low-level standards as building blocks for the community, customers and the broader industry. To understand the significance of this milestone, let’s take a look at the history of Docker’s growth and progress in developing industry-standard container technologies.
The History of Docker Runtime and Image Donations to the OCI
Docker’s image format and container runtime quickly emerged as the de facto standard following its release as an open source project in 2013. We recognized the importance of turning it over to a neutral governance body to fuel innovation and prevent fragmentation in the industry. Working together with a broad group of container technologists and industry leaders, the Open Container Project was formed to create a set of container standards and was launched under the auspices of the Linux Foundation in June 2015 at DockerCon. It became the Open Container Initiative (OCI) as the project evolved that summer.
Docker contributed runc, a reference implementation for the Continue reading