Lenovo is taking on Dell EMC and HPE with its biggest portfolio refresh since it acquired IBM's x86 server business three years ago, offering a lineup of servers, switches, SAN arrays and converged systems intended to show that it's a serious contender in the data center and software-defined infrastructure market.

The product launch, staged in New York yesterday, was the first major event for Lenovo's Data Center Group, in business since April. Lenovo wants to be a global player not only in the enterprise data center but also in hyperscale computing.

Lenovo is tied for third in server market share with Cisco and IBM, well behind HPE and Dell EMC, according to IDC, and has a particularly steep uphill battle ahead in North America.
It's not just the folks at AMD who hope that the company's Epyc server processor, officially launched Tuesday, will break Intel's stranglehold on the data-center chip market.

Enterprise users, web hosting companies and hyperscale cloud providers all want competition and choice in server chips to curb costs and fuel innovation.

"OEMs have been looking for an alternative to Intel for a long time, and with Intel having 98 percent market share I can say that there's absolutely a need, from the OEM point of view and the channel point of view," said Patrick Moorhead, principal at Moor Insights & Strategy.

Judging from specs, performance benchmarks and memory features, as well as the supporting voices from software and hardware makers in the data center ecosystem, Epyc has the best shot of any chip to hit the market in years at putting a crack in Intel's dominance.
When you run a command in the background on a Linux system and then log out, the process you were running will stop abruptly. If, on the other hand, you run the command with nohup (short for "no hangup"), it will continue running and will store its output in a file.

Here's how this works. The nohup command instructs your process to ignore the SIGHUP signal that would normally shut it down. That allows you to leave time-consuming processes to complete on their own without you having to remain logged in. By default, the output of the command you are running will be left in a file named nohup.out so that you can find your data the next time you log in.
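Here's a minimal sketch of how that might look in practice (the script name long_task.sh is just a placeholder):

$ nohup ./long_task.sh &     # start the job; nohup makes it ignore SIGHUP
$ exit                       # log out; the job keeps running
$ cat nohup.out              # after logging back in, review the saved output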
Hyperconvergence wasn’t on Philip Lisk’s mind a decade ago, when the Bergen County Sheriff's Office started using technology from Pivot3 to store data from video surveillance cameras.

“We were trying to store video in an IP world. That’s how we got to know Pivot3,” says Lisk, director of IT at the largest law enforcement agency in New Jersey’s Bergen County, which sits across the Hudson River from New York City. A 12-year veteran of the BCSO, Lisk supervises its networks and serves as the technical consultant to the entire county for video and data security.

Well before the term "hyperconverged infrastructure" was coined, BCSO chose Pivot3 for its converged server and SAN solutions, engineered specifically for storing petabyte-scale video workloads. Yet as the technology matured over the last several years, and BCSO kept up with upgrades, the deployment evolved from a tactical video-centric project into an enterprise HCI platform that’s set to handle many of the agency’s IT workloads going forward, including its virtual desktop infrastructure (VDI).
With no change at the top of the latest Top500.org supercomputer list, you need to look further down the rankings to see the real story.

Top500.org published the 49th edition of its twice-yearly supercomputer league table on Monday, and once again the Chinese computers, the 93-petaflop Sunway TaihuLight and the 33.9-petaflop Tianhe-2, lead the pack.

An upgrade has doubled the performance of Switzerland's GPU-based Piz Daint to 19.6 petaflops (19.6 quadrillion floating-point operations per second), boosting it from eighth to third place and nudging five other computers down a place. The top U.S. computer, Titan, now sits in fourth place.
IT organizations are enjoying a slow but steady increase in budgets, but their capital expenses and hiring trends are essentially flat, reflecting the effect of the shift to cloud computing. That’s the takeaway from Computer Economics’ annual IT Spending and Staffing Benchmarks study for 2017/2018. The study finds that the greatest effect has been a decrease in the total amount of spending that goes toward the capital budget.

"Unless you are an IT equipment manufacturer, this is good news," said David Wagner, vice president of research at Computer Economics, in a statement. "The cloud transition is far from over, and we're already seeing more efficient IT departments, particularly on a cost-per-user basis, which is at a new low. Business applications and network infrastructure are the top areas of new IT spending, while the data center, for the first time, is at the bottom. We take this as a sign the cloud transformation is continuing in earnest."
The Linux history command allows users to repeat commands without retyping them and to look over a list of commands they've recently used, but that's just the obvious stuff. It is also highly configurable, allows you to pick and choose what you reuse (e.g., complete commands or portions of commands), and controls what commands are recorded.

In today's post, we're going to run through the basics and then explore some of the more interesting behaviors of the history command.

The basics of the Linux history command
Typing "history" and getting a list of previously entered commands is the command's most obvious use. Pressing the up arrow until you reach a command that you want to repeat and hitting enter to rerun it is next. And, as you probably know, you can also use the down arrow. In fact, you can scroll up and down your list of previously entered commands to review them or rerun them.
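Here's a quick sketch of those basics, assuming the bash shell (the command number in the last example is just illustrative):

$ history         # list previously entered commands, each with a number
$ history 10      # show only the 10 most recent entries
$ !!              # rerun the most recent command
$ !127            # rerun command number 127 from the history list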
Power-thirsty data centers could receive a new kind of electricity supply that uses surplus juice found in idle electric vehicles (EVs). It uses the cars’ batteries to augment a building’s grid power.

Vehicle-to-grid (V2G) technology provides a way to store, then subsequently use, energy in commuters’ cars while employees work during the day. It can also supply power when a delivery fleet is parked at night, researchers say.
Cost savings for building infrastructure could be significant in green-electricity applications, such as solar and wind, which require local electricity storage. Installing the batteries needed for green applications is expensive.
For the first time since their debut on the market in the mid-2000s, 10 Gigabit Ethernet switches are set to lose share in the networking industry this year, as service providers and hyperscale customers continue to adopt faster 40 GbE and 100 GbE switches, according to data from research firm IDC.

IDC estimates that last year 10 GbE revenues stood at $6.15 billion, up from $5.44 billion in 2015. This year, IDC predicts, 10 GbE switching revenues will fall to $5.94 billion.
Verizon has closed on the purchase of search engine pioneer Yahoo, ending the independent run of one of the original internet firms that launched in the early 1990s, along with the reign of error of Marissa Mayer. But the company is still holding a fire sale of its patent portfolio, and one of those patents covers a unique data center design.

Yahoo announced the unusual design in 2009 for a data center in Lockport, New York. The building was shaped like a chicken coop and would use outside air for cooling with a flywheel-based energy storage system, and it would have an annualized PUE (Power Usage Effectiveness) of under 1.1, which was better than what Google was reporting for its data centers at the time.
Fave Raves is an annual feature from Network World that invites enterprise IT pros to share hands-on assessments of products they love. Several IT pros raved about their favorite network tools. Here’s what they had to say, in their own words. For more enterprise favorites, check out the full Fave Raves collection.
If you keep up with technology news, you’ve been hearing a lot lately about how enterprises are moving more and more key workloads from their own on-premise data centers to the public cloud. In fact, it happens so much that even the biggest transitions are hardly news anymore. Man bites dog?
What does make people pay attention, though, is when the opposite happens: when a major product or service moves from running in the public cloud to an on-premise data center. And that’s exactly what happened last week when CNBC broke the news that Facebook plans to move its WhatsApp service from IBM’s SoftLayer public cloud service to its own data centers.
The jot command has been around for ages, but remains one of those interesting commands that a lot of Linux users never get around to using. It can be very handy in scripts as well as on the command line by generating number or character sequences, even pseudo-randomly.

In its simplest form, the jot command generates a simple sequence of numbers from 1 to your selected maximum.

$ jot 5
1
2
3
4
5
You can stick the jot command output into a file simply by redirecting it.

$ jot 5 > five
$ cat five
1
2
3
4
5
If you want to start with some number other than 1, you just use a slightly different syntax. The command “jot 5 11”, for example, would create a list of five numbers starting with 11.
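Here's what that looks like:

$ jot 5 11
11
12
13
14
15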
As organizations move more of their infrastructure to the cloud, they are ending up with hybrid cloud applications. Part of the application runs in the traditional data center, and part runs in a cloud infrastructure such as Amazon Web Services, Microsoft Azure or Google Cloud Platform. In addition, organizations often need to connect SaaS services to resources that continue to reside inside their data centers.

Applications that run in this mode typically use a connecting software gateway between the data center component and the cloud component, for example, Mule ESB or OneSaaS. This gateway allows the components to share data and work together seamlessly.
Hewlett Packard Enterprise (HPE) has partnered with a software-defined storage startup to create a hybrid cloud storage platform customized for HPE servers. HPE and Hedvig, which was founded by a former Amazon and Facebook engineer credited with creating the Cassandra database, announced that HPE will offer Hedvig’s software-defined storage with HPE’s Apollo 4200 servers to create a distributed storage platform.
The platform is available in 48- and 96-terabyte configurations. They are aimed at enterprises deploying private, hybrid and multi-data center clouds. Hedvig also said the combination supports private cloud storage for VMware vSphere, Microsoft Hyper-V and other hypervisors. The storage platform also supports hybrid cloud storage services running on Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform.
It may not come as much of a surprise, but the latest numbers from International Data Corp. make it official: The server market is cratering. According to IDC, server vendor revenue plummeted 4.6 percent year over year in the first quarter of 2017.

The pain was widespread, IDC said, with market leader HPE seeing revenue drop 15.8 percent year over year to $2.9 billion. Number-two vendor Dell was the only bright spot, notching 4.7 percent year-over-year growth to $2.4 billion (the growth may have come from Dell’s purchase of EMC’s data center business). But Cisco revenues fell 3 percent to $825 million, IBM dropped a whopping 34.7 percent to $745 million, and Lenovo tumbled 16.5 percent to $727 million.
Cloud-native architectures will become the default option for customer-facing applications by 2020, according to a new study from IT consultancy Capgemini, Cloud Native Comes of Age. However, that move is predicated on business leaders allowing it to happen.

Capgemini surveyed more than 900 senior professionals across 11 countries and found that 15 percent of new enterprise applications are cloud-native today, a figure that will jump to 32 percent by 2020.

The main reasons for the shift to cloud-native apps were a desire to improve velocity (74 percent), collaboration (70 percent) and customer experience (67 percent). Companies with strong cloud-practice leadership are taking the lead in this trend, using agile and DevOps methodologies and automated app deployment.
An IT outage on May 27 that caused British Airways (BA) to cancel more than 400 flights and strand 75,000 passengers in one day was because of human error, and a simple one at that.

An engineer had disconnected a power supply at a data center near London’s Heathrow airport, and when it was reconnected, it caused a surge of power that resulted in major damage, according to Willie Walsh, CEO of BA’s parent company IAG SA. Walsh made the comment to reporters in Mexico, and it was picked up by Bloomberg and other news outlets.
The engineer in question had been authorized to be on site and was part of a team working at the Heathrow data center hit by the power outage. The facility is managed by CBRE Global Workplace Solutions, a U.S. property services company.
Before the advent of sensors in cars, phones, thermostats, refrigerators and factory-floor devices, information technology and operational technology were two different worlds. The Internet of Things is changing that.

Now, as a sea of data is sucked into all kinds of devices in all sorts of places, there is an increasing need to merge IT and OT in order to collect, store and analyze information in the most cost-efficient manner possible, all in real time. The network edge increasingly is where the action is, as these worlds come together.

Enterprises now use edge computing to create "smart" buildings and cities, more efficient factory floors and unique retail customer experiences. It's a huge opportunity for vendors like IBM, Cisco, GE and HPE.