Graphcore Builds Momentum with Early Silicon

There has been a great deal of interest in deep learning chip startup Graphcore since we first got limited technical details on the company’s first-generation chip last year, followed by revelations about how its custom software stack can run a range of convolutional, recurrent, and generative adversarial neural network jobs.

In our conversations with those currently using GPUs for large-scale training (often with separate CPU-only inference clusters), we have found generally that there is great interest in all new architectures for deep learning workloads. But what would really seal the deal is something that could handle both training and inference.

Graphcore Builds Momentum with Early Silicon was written by Nicole Hemsoth at The Next Platform.

Dynamic Inventory: Past, Present & Future

In Red Hat Ansible Engine 2.4, we made some changes to how inventory works. We introduced a new CLI tool and added an inventory plugin type.

The goal of this plugin type was, well, to make Ansible Engine even more pluggable. All kidding aside, we wanted to provide Ansible Engine content authors and users a new way to visualize their target inventory. Using the ansible-inventory command, the targeted inventory will be listed with the details of the hosts in that inventory, along with the hosts’ groups.
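As a sketch of what such an inventory source can look like, a minimal configuration file for the aws_ec2 inventory plugin might be the following (the exact keys available depend on the plugin and the Ansible version you are running, so treat this as illustrative):

```yaml
# prod.aws_ec2.yml - a hypothetical inventory source for the aws_ec2 plugin
plugin: aws_ec2
regions:
  - us-west-2
# Build groups from instance tags, e.g. a "tag_Name_blogHost" group
keyed_groups:
  - key: tags
    prefix: tag
```

Pointing ansible-inventory at a file like this is what produces the JSON output shown below.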

For example:

[thaumos@ecb51a545078 /]# ansible-inventory -i ~/Development/playbooks/inventory/prod.aws_ec2.yml --list
{
    "_meta": {
        "hostvars": {
            "ec2-5x-xx-x-xxx.us-west-2.compute.amazonaws.com": {
                "AmiLaunchIndex": 2,
                "Architecture": "x86_64",
                "BlockDeviceMappings": [
                    {
                        "DeviceName": "/dev/sda1",
                        "Ebs": {
                            "AttachTime": "2017-12-13T15:40:19+00:00",
                            "DeleteOnTermination": false,
                            "Status": "attached",
                            "VolumeId": "vol-0514xxx"
                        }
                    }
                ],
                "ClientToken": "",
                "EbsOptimized": false,
                "Hypervisor": "xen",
                "ImageId": "ami-0c2aba6c",
                "InstanceId": "i-009xxxx3",
                "InstanceType": "t2.micro",
                "KeyName": "blogKey",
                "LaunchTime": "2017-12-13T15:40:18+00:00",
                "Monitoring": {
                    "State": "disabled"
                },
                "NetworkInterfaces": [
                    {
                        "Association": {
                            "IpOwnerId": "amazon",
                            "PublicDnsName": "ec2-5x-xx-x-xxx.us-west-2.compute.amazonaws.com",
                            "PublicIp": "5x.xx.x.xxx"
                        },
                        "Attachment": {
                            "AttachTime": "2017-12-13T15:40:18+00:00",
                            "AttachmentId": "eni-attach-97c4xxxx",
                            "DeleteOnTermination": true,
                            "DeviceIndex": 0,
                            "Status": "attached"
                        },
                        "Description": "",
                        "Groups": [
                            {
                                "GroupId": "sg-e63xxxxd",
                                "GroupName": "blogGroup"
                            }
                        ],
                        "Ipv6Addresses": […]
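Because `ansible-inventory --list` emits plain JSON, it is easy to post-process with ordinary tooling. Here is a small sketch that parses output of this shape and pulls out the hosts in a group (the hostnames and the trimmed-down JSON are placeholders standing in for the full output above):

```python
import json

# Sample output in the shape produced by `ansible-inventory --list`
# (hostnames and groups are placeholders mirroring the excerpt above).
raw = """
{
    "_meta": {
        "hostvars": {
            "ec2-host-a.us-west-2.compute.amazonaws.com": {
                "InstanceType": "t2.micro"
            }
        }
    },
    "all": {
        "children": ["aws_ec2", "ungrouped"]
    },
    "aws_ec2": {
        "hosts": ["ec2-host-a.us-west-2.compute.amazonaws.com"]
    }
}
"""

inventory = json.loads(raw)

def hosts_in_group(inv, group):
    """Return the hosts listed directly under a group, or [] if absent."""
    return inv.get(group, {}).get("hosts", [])

print(hosts_in_group(inventory, "aws_ec2"))
# -> ['ec2-host-a.us-west-2.compute.amazonaws.com']
```

The same pattern works for any group in the output, since every group is just a top-level key with an optional "hosts" list.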

L3 routing to the hypervisor with BGP

On layer 2 networks, high availability can be achieved with redundancy mechanisms such as MC-LAG between the switches.

Layer 2 networks need very little configuration but come with a major drawback in highly available scenarios: an incident is likely to bring the whole network down. Therefore, it is safer to limit the scope of a single layer 2 network by, for example, using one distinct network in each rack and connecting them together with layer 3 routing. Incidents are unlikely to impact a whole IP network.

In the illustration below, top of the rack switches provide a default gateway for hosts. To provide redundancy, they use an MC-LAG implementation. Layer 2 fault domains are scoped to a rack. Each IP subnet is bound to a specific rack and routing information is shared between top of the rack switches and core routers using a routing protocol like OSPF.

Legacy L2 design

There are two main issues with this design:

  1. The […]
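To make the alternative in the title concrete: with L3 routing pushed down to the hypervisor, each hypervisor runs a BGP daemon and announces its VM addresses as /32 routes to both top-of-the-rack switches, so redundancy comes from routing rather than from MC-LAG. A minimal FRR-style sketch (all ASNs, addresses, and names here are placeholder assumptions, not from the article) could look like:

```
router bgp 65001
 ! Peer with both top-of-rack switches for redundancy
 neighbor 192.0.2.1 remote-as 65000
 neighbor 192.0.2.2 remote-as 65000
 address-family ipv4 unicast
  ! Announce only the locally attached VM /32s
  redistribute connected route-map VM-ROUTES
 exit-address-family
!
ip prefix-list VM-PREFIXES permit 198.51.100.0/24 ge 32
!
route-map VM-ROUTES permit 10
 match ip address prefix-list VM-PREFIXES
```

If a hypervisor or link fails, its routes are simply withdrawn; nothing depends on a shared layer 2 fault domain.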

IDG Contributor Network: What can data centers learn from the New England Patriots?

When February rolls around each year, every football fan knows what’s right around the corner: it’s time for the Super Bowl! This game brings together the two best teams in the National Football League to compete for the title, with all 32 teams battling throughout the year to earn that top spot. Believe it or not, this competition is very similar to the data center industry.

In fact, the NFL and data centers have many similarities. From the fundamental skills needed to be successful, to the strong team-centric leadership required, to the constant competition at the top of their league or industry, below are a few examples of how the NFL and data centers have more in common than you may think.

The State of the Net Today – Why We Must Act Now for Its Future

At the Internet Society, we are worried about the state of the Internet today. This global “network of networks” is now a critical part of our daily lives. We use it to communicate and connect with our families, friends, co-workers and customers. It is the engine that powers the global economy. It is our source of entertainment, of education, and of information. The Internet brings so many opportunities to all.

But… those opportunities are now under attack from several threats:

  • Lack of trust – We now find ourselves asking key questions: how can we trust that the information we see online is accurate? How do we know we are communicating with the correct people?
  • Security of the core of the Internet – The core infrastructure that creates the network of networks is now under constant attack. Botnets, DDoS attacks, routing attacks – the public core of the Internet needs protection.
  • The explosion of connected devices – We are connecting almost everything to the Internet, and this “Internet of Things (IoT)” is being largely connected with little concern for security.
  • The growing divide between the connected and unconnected – Over 40% of the world’s people are not connected to the Internet. […]

We Just Added a New Google Cloud Platform Course to Our Video Library

Last week we added another Google Cloud Platform course to our video library. You can find this course, Google Cloud Platform: Networking Fundamentals, on our All Access Pass streaming site and also at ine.com.

 

Why You Should Take This Course:

Google Cloud Platform enables developers to build, test, and deploy applications on Google’s highly scalable, secure, and reliable infrastructure.

 

About the Course:

This course specifically covers Google Cloud Platform networking services. We will review their features and functions so that you will understand the GCP options available.

We will also dive into GCP Networking fundamentals such as Software Defined Networking, Load Balancing, Autoscaling and Virtual Private Clouds. As an added bonus, we will also dive into identity and access management from a networking security perspective.
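As a taste of what working with these services looks like in practice, here is a hedged sketch of creating a custom-mode VPC, a subnet, and a firewall rule with the gcloud CLI (all names and address ranges below are placeholders, not material from the course):

```
# Create a custom-mode VPC (no auto-created subnets)
gcloud compute networks create demo-vpc --subnet-mode=custom

# Add a regional subnet with an example CIDR range
gcloud compute networks subnets create demo-subnet \
    --network=demo-vpc --region=us-west1 --range=10.10.0.0/24

# Allow inbound SSH to instances in the VPC
gcloud compute firewall-rules create demo-allow-ssh \
    --network=demo-vpc --allow=tcp:22
```

The course walks through these building blocks, and the load balancing and autoscaling services that sit on top of them, in much more depth.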

This course is taught by Joseph Holbrook and is 3 hours and 51 minutes long.

 

What You’ll Learn:

After taking this class, students will understand how GCP networking services can enable their organization. Whether you’re a developer or an architect, this course will help you understand the basic capabilities and some of the useful advanced features of GCP networking services.

 


Learning to Ask Questions

A lot of folks ask me about learning theory: they don’t have the time for it, or they don’t understand why they should. This video is my answer to that question.

Oil and Gas Industry Gets GPU, Deep Learning Injection

Although oil and gas software giant Baker Hughes is not in the business of high performance computing, the software it creates for the world’s leading oil and gas companies requires supercomputing capabilities for some use cases, and increasingly, these systems can serve double duty for emerging deep learning workloads.

The HPC requirements make sense for an industry awash in hundreds of petabytes of sensor and equipment data each year and many terabytes per day from seismic and discovery simulations, and the deep learning angle is becoming the next best way of extracting meaning from so many bytes.

In an effort to […]

Oil and Gas Industry Gets GPU, Deep Learning Injection was written by Nicole Hemsoth at The Next Platform.

For Many, Hyperconverged Is The Next Platform

There is a kind of dichotomy in the datacenter. The upstart hyperconverged storage makers will tell you that the server-storage half-bloods they have created are inspired by the storage at Google or Facebook or Amazon Web Services, but this is not, strictly speaking, true. Hyperscalers and cloud builders are creating completely disaggregated compute and storage, linked by vast Clos networks with incredible amounts of bandwidth. But enterprises, which operate on a much more modest scale, are increasingly adopting hyperconverged storage, which mixes compute and storage on the same virtualized clusters.

One camp is splitting up servers and storage, […]

For Many, Hyperconverged Is The Next Platform was written by Timothy Prickett Morgan at The Next Platform.