Several years ago, there were reports that an IBM artificial intelligence (AI) project had mimicked the brain of a cat. Being the smartass that I am, I responded on Twitter with, “You mean it spends 18 hours a day in sleep mode?” That report was later debunked, but the effort to simulate the brain continues, using new types of processors far faster and more brain-like than your standard x86 processor. IBM and the U.S. Air Force have announced one such project, while Google has its own.
Researchers from Google and the University of Toronto last month released an academic paper titled “One Model To Learn Them All,” and they were pretty quiet about it. What Google is proposing is a template for how to create a single machine learning model that can address multiple tasks.
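The paper describes its own multi-modal architecture in detail; purely as a rough illustration of the general idea of one model serving several tasks, here is a minimal Python sketch of a shared network body with per-task output heads. The layer sizes and task names are invented for the example and are not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared "body": one set of weights reused by every task.
shared_weights = rng.standard_normal((128, 64))

# Hypothetical task-specific "heads": a small output layer per task.
heads = {
    "translation": rng.standard_normal((64, 32)),
    "captioning": rng.standard_normal((64, 10)),
}

def forward(x, task):
    # Shared representation first, then the head for the requested task.
    hidden = np.tanh(x @ shared_weights)
    return hidden @ heads[task]

x = rng.standard_normal((1, 128))
print(forward(x, "translation").shape)  # (1, 32)
print(forward(x, "captioning").shape)   # (1, 10)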
It seems more vendors are looking beyond the x86 architecture for the big leaps in performance needed to power things like artificial intelligence (AI) and machine learning. Google and IBM have their processor projects, Nvidia and AMD are positioning their GPUs as an alternative, and now Japan’s NEC has announced a vector processor that it says accelerates data processing by more than a factor of 50 compared to the Apache Spark cluster-computing framework.
The company said its vector processor, called the Aurora Vector Engine, leverages “sparse matrix” data structures to accelerate processor performance in executing machine learning tasks. Vector-based computers are basically supercomputers built specifically to handle large scientific and engineering calculations. Cray used to build them in previous decades before shifting to x86 processors.
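NEC has not published implementation details, but the general idea behind sparse matrix formats is easy to show. The Python sketch below, using SciPy and made-up sizes, stores only the non-zero entries of a mostly empty matrix, so operations such as matrix-vector multiplication touch far less data; that is the kind of workload a vector engine is built to accelerate.

import numpy as np
from scipy.sparse import csr_matrix

# A hypothetical feature matrix that is mostly zeros, as is common in
# machine learning workloads such as text or recommendation data.
dense = np.zeros((1000, 1000))
dense[::50, ::50] = 1.0  # a sprinkling of non-zero entries

sparse = csr_matrix(dense)  # compressed sparse row storage
print(f"stored non-zeros: {sparse.nnz} of {dense.size} cells")

# Matrix-vector multiplication only visits the stored non-zeros.
v = np.ones(1000)
result = sparse @ v
print(result[:5])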
Artificial intelligence (AI) and blockchain are among the new technologies driving a need for increased data center capacity, according to a telco that recently announced an expansion. China Telecom said in a press release that “rapidly maturing” technologies such as these, along with machine learning and adaptive security, will propel investment in data centers and are one reason for the growth of its data center business. Interestingly, though, data centers themselves may end up using this new tech as heavily as their customers do.
The leveraged buyout of Dell, which took the computer giant private and led to its merger with EMC, appears to have been the first of many similar moves. Private equity firms are looking to gobble up some of the enterprise giants and, in the process, take them private. BMC Software, which develops IT services software and data center automation software, among many other products, is looking to merge with CA, formerly Computer Associates. BMC is owned by Bain Capital and Golden Gate Capital, so any deal to acquire CA would take that company off the public market.
Multiple news outlets in Seattle and the tech press report that Microsoft plans to announce a significant reorganization in an effort to refocus its cloud computing division. In the process, a lot of people are going to lose their jobs. The Seattle Times, Puget Sound Business Journal, Bloomberg and TechCrunch all cite sources claiming that the news could come this week and that the layoffs could number in the thousands.
The Seattle Times said it was unclear which groups would be affected and where they are located, but that the move is intended to get Microsoft’s sales teams to emphasize its cloud computing products rather than pushing packaged software.
What is interconnection, and why does it matter? Interconnection is the deployment of IT traffic exchange points that integrate direct, private connections between counterparties. It is best achieved in carrier-neutral data center campuses, where distributed IT components are colocated. In an age when reams of information race around the world with the click of a finger and massive transactions routinely occur several times faster than the blink of an eye, interconnection powers digital business. Interconnection is much more than successfully connecting Point A to Point B. Telephone wires pulled off that kind of simple connectivity ages ago. Today’s enterprise-grade interconnection has some key characteristics that can help take digital business to the next level.
Cray has announced a new suite of big data and artificial intelligence (AI) software called Urika-XC for its top-of-the-line XC Series of supercomputers. Urika-XC is a set of analytics software that will let XC customers use Apache Spark, Intel’s BigDL deep learning library, Cray’s Urika graph analytics engine, and an assortment of Python-based data science tools. With the Urika-XC software suite, analytics and AI workloads can run alongside scientific modeling and simulations on Cray XC supercomputers, eliminating the need to move data between systems. Cray XC customers will be able to run converged analytics and simulation workloads across a variety of scientific and commercial endeavors, such as real-time weather forecasting, predictive maintenance, precision medicine and comprehensive fraud detection.
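Cray has not detailed the Urika-XC internals, but the Spark piece of such a pipeline is standard. The following is a minimal, generic PySpark sketch of the kind of aggregation a predictive-maintenance workload might run; the session setup, table names and values are invented for illustration and say nothing about Cray’s packaging.

from pyspark.sql import SparkSession

# Start a local Spark session; on a cluster the builder would point at it instead.
spark = SparkSession.builder.appName("analytics-sketch").getOrCreate()

# Hypothetical sensor readings that a simulation or instrument might produce.
readings = spark.createDataFrame(
    [("turbine-1", 71.2), ("turbine-1", 74.8), ("turbine-2", 66.3)],
    ["asset", "temperature"],
)

# A simple per-asset aggregation, the kind of step a predictive-maintenance
# pipeline would chain with model training downstream.
readings.groupBy("asset").avg("temperature").show()

spark.stop()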
Network World's Brandon Butler checks in from Las Vegas, where this week's Cisco Live is under way. The big story: Cisco's efforts to move from hardware to software, security and "intent-based networking."
A report from data center consulting group BroadGroup says Ireland is the best place, at least in Europe, to set up a data center. It cites connectivity, taxes and active government support among the reasons. BroadGroup’s report argued that Ireland’s status in the EU, as well as its “low corporate tax environment,” makes it an attractive location. It also cites connectivity, as Ireland will get a direct submarine cable to France—bypassing the U.K.—in 2019. The country also has a high installed base of fibre and dark fibre, with further deployment planned.
In the wake of yet another ransomware attack—this time named NotPetya—I have a special message specifically for those of you working in organizations that continue to run Microsoft Windows as the operating system on either your servers or your desktops:
You are doing a terrible job and should probably be fired.
I know. That’s harsh. But it’s true. If you haven’t yet replaced Windows, across the board, you absolutely stink at your job. For years, we’ve had one trojan, worm and virus after another. And almost every single one is specifically targeting Microsoft Windows. Not MacOS. Not Linux. Not DOS. Not Unix. Windows.
If you don’t know what DreamWorks is, you probably haven’t been to the movies for a couple of decades. It’s a digital film studio that turns out critically acclaimed CGI animated movies like Shrek, Madagascar, and Kung Fu Panda, averaging about two a year since the turn of the century, and it is a major contributor to the cause of keeping kids occupied for a couple of hours. The creation of CGI movies is enormously demanding from a network standpoint. Animation and rendering require very low input latency and create huge files that have to be readily available, which poses technological challenges for the DreamWorks networking team.
The Department of Energy has awarded six tech firms a total of $258 million in funding for research and development into exascale computing. The move comes as the U.S. is falling behind in the world of top supercomputers. Energy Secretary Rick Perry announced that AMD, Cray, Hewlett Packard Enterprise, IBM, Intel and Nvidia will receive financial support from the Department of Energy over a three-year period. The funding will finance research and development in three main areas: hardware technology, software technology and application development. Each company will provide 40 percent of the overall project cost in addition to the government funding. The plan is for one of those companies to be able to deliver an exascale-capable supercomputer by 2021. The awards are part of the DOE’s new PathForward program, which in turn is part of its Exascale Computing Project (ECP), designed to accelerate the research necessary to deploy the nation’s first exascale supercomputers.
Inflexible IT architectures can be a barrier to organizational change. As companies embark on digital transformations aimed at improving their business, the pressure is on IT to reduce complexity and increase the efficiency of enterprise systems and applications. Fave Raves is an annual feature from Network World that invites IT pros to share hands-on assessments of products they love. Several tools that enable organizations to simplify their infrastructure and automate key tasks ranked among the favorites in 2017 and recent years. Here’s what IT pros had to say, in their own words. For more enterprise favorites, check out the full Fave Raves collection.
To state the obvious, enterprises are moving their applications to the cloud, and this movement is happening at an accelerating pace. Many technology chiefs are working under a “cloud-first” policy, which means that if an application can be deployed as a service, that should be the first choice. While the applications themselves are moving to the cloud, the application delivery infrastructure is still stuck in the enterprise data center. Under the network architecture that most enterprises still have today, all traffic comes back to the enterprise data center before going out to the cloud. The on-premises data center is where the switching and routing, security, and application delivery controllers reside. This infrastructure was architected for a bygone era when applications were all in the data center.
Enterprises understand the advantages of colocation, but they also know that entrusting mission-critical infrastructure to third-party data centers means giving up some control over their servers. Data Center Infrastructure Management (DCIM) tools can provide colocation customers with transparency into their data center's operations, to verify that providers are fulfilling the terms of their service-level agreements. A DCIM platform gives customers a "single pane of glass" to view the status of their IT infrastructure. "Today, more colocation providers are offering their customers access to DCIM portals," explains Rhonda Ascierto, Research Director for Data Centers and Critical Infrastructure at 451 Research. "Customers want to see how well a colocation facility is operating, not just rely on the SLA. A DCIM tool gives the customer visibility into data center operations, and assurance that the colocation provider is meeting their obligations."
Vapor IO, an Austin-based data center technology startup, is launching a rather interesting colocation business, offering leased data center capacity at cellular network towers. The company argues that compute and network capabilities should be offered together at the tower to get the most out of edge computing. The service, called Project Volutus, includes everything from site selection to rack space, power, connectivity, infrastructure management software, and remote hands. The company believes that the need for edge computing capacity will increase as things like IoT, connected and autonomous cars, augmented and virtual reality, and 5G wireless come to market and start scaling.
The Linux column command makes it easy to display data in a columnar format -- often making it easier to view, digest, or incorporate into a report. While column is a simple command to use, it has some very useful options that are worth considering. The examples in this post will give you a feel for how the command works and how you can get it to format data in the most useful ways. By default, the column command ignores blank lines in the input data. When displaying data in multiple columns, it organizes the content by filling the left column first and then moving to the right. For example, a file containing the numbers 1 through 12 is laid out down each column rather than across each row, as in the sketch below.
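The exact shape of column's output depends on the width of your terminal, but the fill order it uses can be shown with a few lines of Python. The columnate helper below is just an illustration of the idea, not part of the column command itself.

# A minimal sketch of column's default fill order: entries run down the
# first column, then continue at the top of the next one.
def columnate(items, rows):
    cols = [items[i:i + rows] for i in range(0, len(items), rows)]
    for r in range(rows):
        print("\t".join(col[r] for col in cols if r < len(col)))

columnate([str(n) for n in range(1, 13)], rows=4)
# 1   5   9
# 2   6   10
# 3   7   11
# 4   8   12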
The initial efforts to bring ARM-based processors into the data center were not terribly successful. Calxeda crashed and burned spectacularly after it bet on a 32-bit processor when the rest of the world had moved on to 64 bits. And HPE initially wanted to base its Project Moonshot servers on ARM but now uses Intel Xeon and AMD Opteron. That’s because the initial uses for ARM processors were low-performance applications, like basic LAMP stacks, file and print, and storage. Instead, one company has been quietly building momentum for high-performance ARM processors, and it’s not Qualcomm. Cavium, a company steeped in MIPS-based embedded processors, is bringing its considerable experience and IP to the ARM processor with its ThunderX server ecosystem. ThunderX is the whole shootin’ match: an ARMv8-A 64-bit SoC plus motherboards, both single and dual socket. In addition to hardware, Cavium offers operating systems, development environments, tools, and applications.
Future computer systems need to be significantly faster than the supercomputers around today, scientists believe. One reason is that properly analyzing complex problems, such as climate modeling, takes ever more computing work. Massive quantities of calculations, performed at high speed and delivered as mistake-free data analysis, are needed for the fresh insights and discoveries expected down the road. Limitations, though, exist in current storage, processing and software, among other components. The U.S. Department of Energy’s four-year, $48 million Exascale Computing Project (ECP), started at the end of last year for science and national security purposes, plans to overcome those challenges. It explains some of the potential hiccups it will run into on its Argonne National Laboratory website; part of the project is being studied at the lab.
Lenovo is taking on Dell EMC and HPE with its biggest portfolio refresh since it acquired IBM's x86 server business three years ago, offering a lineup of servers, switches, SAN arrays and converged systems intended to show that it's a serious contender in the data center and software-defined infrastructure market. The product launch, staged in New York on Tuesday, was the first major event for Lenovo's Data Center Group, which has been in business since April. Lenovo wants to be a global player not only in the enterprise data center but also in hyperscale computing. Lenovo is tied for third in server market share with Cisco and IBM, well behind HPE and Dell EMC, according to IDC, and has a particularly steep uphill battle ahead in North America.