Artificial intelligence (AI) and machine learning (ML) can be invaluable tools to spur innovation, but they have different management requirements than typical enterprise IT applications that run at moderate CPU and memory utilization rates. Because AI and ML tend to run intense calculations at very high utilization rates, power and cooling costs can consume a higher proportion of the budget than an IT group might expect. It's not a new problem, but the impact is intensifying. As more CPU-heavy applications such as data warehousing and business intelligence became prevalent, IT was often oblivious to the electric bill it was racking up – particularly since the bill usually goes to the ops department, not IT.
One convenient way to list details about user logins on a Linux system is to use the lslogins command. You'll get a very useful and nicely formatted display that includes quite a few important details. On my system and likely most others, user accounts will start with UID 1000. To list just these accounts rather than include all of the service accounts like daemon, mail and syslog, add the -u option as shown in the example below.

$ sudo lslogins -u
  UID USER      PROC PWD-LOCK PWD-DENY LAST-LOGIN GECOS
    0 root       151        0        0            root
 1000 shs         68        0        0 12:35      Sandra H-S
 1001 nemo         0        0        0 2021-Jan05 Nemo Demo,,,
 1002 dbell        0        0        1            Dory Bell
 1003 shark        2        0        0 7:15       Sharon Hark
 1004 tadpole      0        0        0 2020-Dec05 Ted Pole
 1005 eel          0        0        0 2021-Jan11 Ellen McDay
 1006 bugfarm      0        0        0 2021-Jan01 Bug Farm
 1008 dorothy      0        0        1            Dorothy Reuben
 1012 jadep        0        0        1 2021-Jan04 Jade Jones
 1013 myself       0        0        0 2021-Jan12 My Self
 1014 marym        0        0        0 2020-Mar20 Mary McShea
 1017 gijoe        0        0        0            GI Joe
65534 nobody       0        0        1            nobody
What the lslogins command does is pull together account details from several places on the system, including the /etc/passwd and /etc/shadow files and the wtmp and btmp login records, and present them in a single, easy-to-read report.
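If you want to tailor the output further, lslogins also lets you choose exactly which columns to display and report on a single account in detail. The commands below are a small sketch of both; the shs account name is simply taken from the listing above, and the set of available column names can vary slightly between util-linux versions.

$ sudo lslogins -u -o UID,USER,PROC,LAST-LOGIN    # show only the named columns
$ sudo lslogins shs                               # detailed, multi-line report for one account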
Cisco has added support for traditional network environments to the company’s recently available data-center-management console. Introduced in October, Cisco’s Nexus Dashboard melds a number of Cisco’s on-premises, cloud and hybrid fabric-management tools into a single interface to administer application lifecycles from provisioning to maintenance and optimization.
The idea is that the dashboard provides a central platform for data-center-operations applications, simplifying their operation and management while reducing the infrastructure overhead needed to run them, according to Cisco.
Pat Gelsinger’s return to Intel after a 12-year absence has been greeted positively, with the stock jumping 8% on the news Wednesday, analysts lauding it, and apparently even Intel staff approving, indicating he remained popular there despite leaving the firm in 2009. Replacing outgoing CEO Bob Swan with VMware CEO Gelsinger isn’t a sign of failure on the part of Swan, who took over in 2018. Intel is expected to meet or exceed Q1 revenue and income projections when it reports earnings Jan. 21. The fact that Swan is being given time to clean out his desk (he is staying on until mid-February) says that this is a civil parting, unlike that of his predecessor, Brian Krzanich, whom they couldn’t get out the door fast enough.
Intel said it will bring on current VMware leader Pat Gelsinger as its new chief executive officer, effective Feb. 15, 2021. The 40-year technology industry vet replaces Intel’s Bob Swan, who will remain CEO until that date. For VMware, the company said it was initiating a global executive search process to name a permanent chief, and that Zane Rowe, current VMware CFO, will become interim CEO.
“Pat led the company in expanding our core virtualization footprint and broadening our capabilities to cloud, networking, 5G/edge and security, while almost tripling revenue to nearly $12 billion,” Rowe said in a statement. “VMware remains focused on helping customers optimize their digital infrastructure—from app modernization and multi-cloud to networking, security and digital workspaces. We look forward to continued growth and innovation across our technology offerings.”
IT outsourcing giant Atos has put in a bid to acquire DXC Technology, a deal that would give the French company a big foot in the door to the U.S. market. The rumor first ran last week on Reuters, which put the purchase price at $10.1 billion. Atos issued a rather short statement confirming the talks but did not confirm the rumored price. It said there was no certainty of an outcome and that further announcements would be made “when appropriate.” For its part, DXC said it had indeed received an offer from Atos, again without mentioning the price, and said it would be “evaluating the proposal.”
(A hurricane devastated an island that held two data centers controlling mission-critical systems for an American biotech company. They flew a backup expert with four decades of experience to the island on a corporate jet to save the day. This is the story of the challenges he faced and how he overcame them. He spoke on the condition of anonymity, so we call him Ron, the island Atlantis, his employer Initech, and we don’t name the vendors and service providers involved.) Initech had two data centers on Atlantis with a combined 400TB of data running on approximately 200 virtual and physical machines. The backup system was based on a leading traditional backup software vendor, and it backed up to a target deduplication disk system. Each data center backed up to its own local deduplication system and then replicated its backups to the disk system in the other data center. This meant that each data center had an entire copy of all Initech’s backups on Atlantis, so even if one data center were destroyed the company would still have all its data.
Lenovo Data Center Group has released new storage and data-management tools designed to boost performance and improve monitoring and analytic capabilities across enterprise systems that span the edge, data center and cloud. The enhancements include a new all-flash storage array with end-to-end NVMe support, an updated cloud-based management platform, and a new Fibre Channel switch.
Lenovo ThinkSystem DM5100F
The new Lenovo ThinkSystem DM5100F is a high-performance, low-latency, all-NVMe storage array at an affordable price point, designed to enhance analytics and AI deployments while accelerating applications' access to data. It delivers up to 45% better performance than prior models, according to Lenovo.
When it bought Cray back in May 2019, HPE hinted at offering HPC systems as a service. Now it is delivering on that with the introduction of HPE GreenLake cloud services for HPC. HPE has made a lot of headway with its GreenLake program, the pay-per-use model created in response to the popularity of cloud service providers. It lets customers pay as if they are buying a cloud service, but it’s provisioned using infrastructure deployed at customer sites or in colocation facilities. Up to now it’s been used for standard IT applications, like app and web serving or databases.
Amazon Web Services has rolled out a new, more native way to connect SD-WAN infrastructures with AWS resources. Introduced at its re:Invent virtual event, AWS Transit Gateway Connect promises a simpler, faster, and more secure way for customers to tie cloud-based resources back to data centers, remote office workers or other distributed access points as needed. Thirteen networking vendors including Cisco, Aruba, Arista, Alkira, Fortinet, Palo Alto, and Versa announced support for the technology, which offers higher throughput and increased security for distributed cloud workloads.
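For readers curious what the plumbing looks like, the AWS CLI sketch below shows the general shape of setting it up: a GRE-based Connect attachment is layered on top of an existing VPC or Direct Connect transport attachment, and a Connect peer then establishes BGP with the SD-WAN appliance. The attachment IDs, addresses and ASN here are placeholders, and vendors' orchestration tools typically automate these steps.

# Layer a Connect (GRE) attachment on top of an existing transport attachment
$ aws ec2 create-transit-gateway-connect \
    --transport-transit-gateway-attachment-id tgw-attach-0123456789abcdef0 \
    --options Protocol=gre

# Create a Connect peer (GRE tunnel plus BGP session) to the SD-WAN appliance,
# using the attachment ID returned by the previous command
$ aws ec2 create-transit-gateway-connect-peer \
    --transit-gateway-attachment-id tgw-attach-0fedcba9876543210 \
    --peer-address 10.0.1.10 \
    --bgp-options PeerAsn=65001 \
    --inside-cidr-blocks 169.254.100.0/29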
Hyperconverged infrastructure (HCI) has made substantial inroads in enterprise environments, and vendors have responded with new use cases and purchasing scenarios, including an emerging deployment option: HCI as a service. Conventional HCI combines servers, storage and network resources into a single box, providing adopters with a gateway to simplified, centralized data-center management. HCI as a service (HCIaaS) ups the ante by enabling data-center operators to adopt HCI in a manner that promises to reduce both operational and financial overhead. Several HCI vendors, including VMware, Nutanix, Dell, and HPE, offer a managed service option, says Naveen Chhabra, a senior analyst at IT research firm Forrester. "It basically turns the capital expenditure and one-time investment into an [operating expense]," Chhabra says. "In most cases, the vendor will also manage the HCI's day-to-day operations."
Aruba has taken the wraps off new orchestration software and switches that target users looking to build and support distributed data centers. Aruba Fabric Composer software simplifies leaf-and-spine network provisioning across the company’s CX switches and automates operations across a wide variety of virtualized, hyperconverged, and HPE compute and storage environments. The Fabric Composer runs as a virtual machine and eliminates the need for networking teams to manually configure CX switches. It offers workflow automation and a view of workflows supported by networking fabrics, switches, hosts and other resources, said Steve Brar, senior director of product marketing for Aruba.
AWS has turned up the drumbeat to move workloads off the mainframe and into its cloud. At its weeks-long re:Invent virtual event, Amazon Web Services said it would soon expand its AWS Competency Program to include even more services for migrating mainframe workloads to the cloud. The services are an expansion of mainframe-migration services AWS has had on its menu for the past few years.
AWS says its Competency Program is designed to identify, validate, and promote AWS partners with demonstrated technical expertise in a given area. In this case, users looking to migrate will have access to products and services from core AWS partners, the company wrote in a blog about the new service.
SUSE’s acquisition of Rancher Labs puts the Germany-based open-source software company in a much stronger position to offer flexible, edge-based services to its customers, according to an analyst at IDC. The deal, originally announced this summer, essentially makes Rancher Labs into SUSE’s containerization “innovation center,” said IDC research director Gary Chen. Any customer working on digital transformation and rapid development is likely to appreciate the improved support for containerization, letting workloads run on whatever hardware is handy and communicate across different arrangements of edge, cloud and local computing. Terms of the deal were not publicly disclosed, but a CNBC report published after the initial announcement quoted sources familiar with the deal as saying that SUSE is paying between $600 million and $700 million.
Schneider Electric is better known in its native Europe than in the U.S., but it's looking to change that with a $40 million project to upgrade its U.S. manufacturing resources. The company, which specializes in energy management and automation technologies for data centers, shared its plans for U.S. expansion at its Innovation Summit North America 2020, held virtually this year. Schneider also unveiled a new set of ruggedized data-center enclosures targeting the Industrial Internet of Things (IIoT). Designed for indoor industrial environments, the EcoStruxure Micro Data Center R-Series offers a fast and simple way to deploy and manage edge computing infrastructure in a place like a factory floor.
A data center is a physical facility that enterprises use to house their business-critical applications and information. As they evolve from centralized on-premises facilities to edge deployments to public cloud services, it’s important to think long-term about how to maintain their reliability and security.

What is a data center?
Data centers are often referred to as a singular thing, but in actuality they are composed of a number of technical elements. These can be broken down into three categories:
Compute: The memory and processing power to run the applications, generally provided by high-end servers
Storage: Important enterprise data is generally housed in a data center, on media ranging from tape to solid-state drives, with multiple backups
Networking: Interconnections between data center components and to the outside world, including routers, switches, application-delivery controllers, and more
These are the components IT needs to store and manage the resources that are vital to the continuous operation of an organization. Because of this, the reliability, efficiency, security and constant evolution of data centers are typically a top priority. Both software and hardware security measures are a must.
A data center is the physical facility providing the compute power to run applications, the storage capabilities to process data, and the networking to connect employees with the resources needed to do their jobs. Experts have been predicting that the on-premises data center will be replaced by cloud-based alternatives, but many organizations have concluded that they will always have applications that need to live on-premises. Rather than dying, the data center is evolving. It is becoming more distributed, with edge data centers springing up to process IoT data. It is being modernized to operate more efficiently through technologies like virtualization and containers. It is adding cloud-like features such as self-service. And the on-prem data center is integrating with cloud resources in a hybrid model.
Least privilege, the idea that each person in your organization should have only the privileges they need to accomplish a given task, is an important security concept that needs to be implemented in your backup system. The challenge here is that network, system, and backup admins all wield an incredible amount of power, so limiting that power reduces the damage they can do if one of them makes a mistake or, worse, intentionally tries to harm the company. For example, you might give one network administrator the ability to monitor networks and another the ability to create and/or reconfigure networks. Security admins might be responsible for creating and maintaining network-administration users without getting any of those privileges themselves.
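On a Linux backup server, one common way to carve up that power is with sudo rules. The snippet below is only a rough sketch of the pattern, not a recipe for any particular product: backupctl is a hypothetical backup CLI and backupsvc a hypothetical service account, but the idea is that one group can only check on jobs, another can run and restore them, and neither can change backup policies.

# /etc/sudoers.d/backup-roles  (hypothetical example)
# Monitoring role: may only view job status and logs
%backup-monitor  ALL = (backupsvc) NOPASSWD: /usr/bin/backupctl status *, /usr/bin/backupctl logs *
# Operator role: may run and restore jobs, but cannot edit retention or policies
%backup-operator ALL = (backupsvc) NOPASSWD: /usr/bin/backupctl run *, /usr/bin/backupctl restore *

A user in the monitoring group would then run something like sudo -u backupsvc backupctl status nightly, and sudo would refuse anything beyond the commands listed for that group.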
The edge is being sold to enterprise customers by just about every part of the technology industry, and there’s not always a bright dividing line between “public” options – edge computing sold as a service, with a vendor handling operational data directly – and “private” ones, where a company implements an edge architecture by itself. There are advantages and challenges to either option, and which is the right edge-computing choice for any particular organization depends on its individual needs, budget and staffing, among other factors. Here are some considerations.
The community around the open-source Software for Open Networking in the Cloud (SONiC) NOS got a little stronger as Apstra says its intent-based networking software is now more ready for enterprise prime time than implementations from Cisco and Arista. The Linux-based NOS, developed and open-sourced by Microsoft in 2017, decouples network software from the underlying hardware and lets it run on switches and ASICs from multiple vendors while supporting a full suite of network features such as Border Gateway Protocol (BGP), remote direct memory access (RDMA), QoS, and other Ethernet/IP technologies.