Samsung demos 512GB DDR5 memory aimed at supercomputing, AI workloads

Samsung Electronics last month announced a 512GB DDR5 memory module, its first since the JEDEC consortium developed and released the DDR5 standard in July of last year. The new modules double the maximum capacity of existing DDR4 and offer data-transfer rates of up to 7,200Mbps, twice that of conventional DDR4. The memory will be able to handle high-bandwidth workloads in applications such as supercomputing, artificial intelligence, machine learning, and data analytics, the company says.

How to shop for a colocation provider

If you want to move assets out of your data center but for whatever reason can’t shift to the cloud, a colocation facility, or “colo” for short, is an increasingly viable option. With colo, the client buys the compute, storage, and networking equipment but, instead of putting it into its own data center, places it in the data center of a hosting company. The client still owns and manages the hardware but has no responsibility for managing the facility: heating, cooling, lighting, physical security, and so on.

As such, colocation facilities attract considerable interest from enterprises. IDC puts the 2020 US colocation market at $9 billion, growing to $12.2 billion by 2024, a compound annual growth rate (CAGR) of 8%. Grand View Research estimates the global data-center colocation market at $40.31 billion in 2019 and expects it to grow at a CAGR of 12.9% from 2020 to 2027. Gartner makes the bravest prediction: by 2025, 85% of infrastructure strategies will integrate on-premises, colocation, cloud, and edge delivery options, up from 20% in 2020.
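The analyst figures above are easy to sanity-check, since CAGR is just the constant annual growth rate that turns the starting value into the ending value over n years. A quick check of the IDC numbers in Python:

```python
# CAGR: the constant yearly growth rate linking a start value to an end value.
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

# IDC: US colocation market $9B in 2020 growing to $12.2B in 2024 (4 years).
rate = cagr(9.0, 12.2, 4)
print(f"{rate:.1%}")  # close to the quoted 8%
```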

Will open networking lock you in?

There’s open, then there’s open. At least that seems to be the case with network technology. Maybe it’s the popularity and impact of open-source software, or maybe it’s just that the word “open” makes you think of being wild, happy, and free. Whatever it is, the concept of openness in networking is catching on. Which means, of course, that the definition is getting fuzzier every day.

When I talk with enterprises, they seem to think that openness in networking is the opposite of proprietary, which they then define as a technology for which there is a single source. That suggests that open networking is based on technology for which multiple sources exist, but as logical as that sounds, it may not help much.

Major League Baseball makes a run at network visibility

Major League Baseball is taking network visibility to the next level. “There were no modern network-management systems in place before I came in. It was all artisanally handcrafted configurations,” says Jeremy Schulman, who joined MLB two years ago as principal network-automation software engineer.

Legacy systems, including PRTG for SNMP-based monitoring and discrete management tools from network vendors, allowed MLB to collect data from switches and routers and track metrics such as bandwidth usage. But the patchwork of tools was siloed and didn’t provide comprehensive visibility.

Start Automating Public Cloud Deployments with Infrastructure-as-Code

One of my readers sent me a series of “how do I get started with…” questions including:

I’ve been doing networking and security for 5 years, and now I am responsible for our cloud infrastructure. Anything to do with networking and security in the cloud is my responsibility along with another team member. It is all good experience but I am starting to get concerned about not knowing automation, IaC, or any programming language.

No need to worry about that; what you need (to start with) is extremely simple and easy to master. Infrastructure-as-Code is a simple concept: the infrastructure configuration is defined in a machine-readable format (mostly text files these days) and consumed by a remediation tool like Terraform, which compares the actual state of the deployed infrastructure with the desired state defined in the configuration files and changes the actual state to bring it in line with how it should look.
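The compare-and-remediate loop at the heart of such tools can be sketched in a few lines of Python. This is a conceptual toy, not how Terraform is actually implemented, and the resource names are invented:

```python
# Toy desired-state reconciliation: compare desired vs. actual resources
# and compute the create/update/delete actions needed to converge them.
# Real tools like Terraform also track dependencies, ordering, and
# provider APIs; this only illustrates the core idea.

def plan(desired: dict, actual: dict) -> list:
    """Return the actions that bring `actual` in line with `desired`."""
    actions = []
    for name, config in desired.items():
        if name not in actual:
            actions.append(("create", name, config))
        elif actual[name] != config:
            actions.append(("update", name, config))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

desired = {"vpc-main": {"cidr": "10.0.0.0/16"}, "subnet-a": {"cidr": "10.0.1.0/24"}}
actual = {"vpc-main": {"cidr": "10.0.0.0/16"}, "subnet-b": {"cidr": "10.0.9.0/24"}}

for action in plan(desired, actual):
    print(action)
```

Everything else (state files, providers, modules) is machinery built around that one comparison.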

Troubleshooting steps


Introduction

Troubleshooting network issues is one of the core skills of every network engineer, yet usually we don’t think about it: we don’t deliberately study or train this skill, or treat troubleshooting as a formal process. We just accumulate experience from our daily routine or follow the company workflow. I will try to formalize some basic notions here; I hope it will be helpful.

Of course, it depends on the situation and business constraints, but when we try to resolve an issue we should follow these steps:

Preparing -> Information-gathering -> Isolating -> Resolving -> Escalating

Let's look at each step.

Preparing

Every network has infrastructure tools (monitoring, inventory, etc.), but we should continuously improve them, keep them up to date, and try to develop and integrate new ones. This stack of tools is our source of truth: with it, we can easily fetch complete information before, during, and after a problem. It’s an enormous topic, but without these tools we can’t successfully troubleshoot our network.

Mandatory tools:

  • Syslog (at least a simple syslog server; even better, something like the Elastic stack)
  • Alarm management system (e.g. Zabbix)
  • Statistics collector (e. Continue reading
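As a small example of why a collected syslog stream is so valuable during the information-gathering and isolating steps, even a few lines of Python can turn raw BSD-syslog (RFC 3164) style lines into structured fields you can filter on. The sample message below is invented:

```python
import re

# Minimal parser for classic BSD-syslog (RFC 3164) style lines.
SYSLOG_RE = re.compile(
    r"<(?P<pri>\d+)>"                                      # priority value
    r"(?P<timestamp>\w{3} {1,2}\d{1,2} \d{2}:\d{2}:\d{2}) "  # e.g. Apr 12 13:37:01
    r"(?P<host>\S+) "                                      # originating device
    r"(?P<message>.*)"                                     # free-form message
)

def parse(line: str):
    """Return a dict of syslog fields, or None if the line doesn't match."""
    m = SYSLOG_RE.match(line)
    if not m:
        return None
    d = m.groupdict()
    # PRI encodes facility and severity: pri = facility * 8 + severity.
    d["facility"], d["severity"] = divmod(int(d.pop("pri")), 8)
    return d

line = "<189>Apr 12 13:37:01 core-sw1 %LINK-5-CHANGED: Interface Gi0/1, changed state to down"
print(parse(line))
```

Once messages are structured like this, filtering every severity-5-and-worse event for a single device during an incident window becomes a one-liner instead of a manual log read.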

Developers, Developers, Developers: Welcome to Developer Week 2021

Runtimes, serverless, edge compute, containers, virtual machines, functions, pods, virtualenv. All names for things developers need to go from writing code to running code. It’s a painful reality that for most developers going from code they’ve written to code that actually runs can be hard.

Excruciatingly, software development is made hard by dependencies on modules, by scaling, by security, by cost, by availability, by deployment, by builds, and on and on. All the ugly reality of crystallizing thoughts into lines of code that actually run, successfully, somewhere, more than once, non-stop, and at scale.

And so… Welcome to Developer Week 2021!

Like we have done in previous Innovation Weeks (such as Security Week or Privacy Week), we will be making many (about 20) announcements of products and features to make developers’ lives easier. And by easier I mean removing the obstacles that stop you, dear developer, from writing code and deploying it so it scales to Internet size.

And Cloudflare Workers, our platform for software developers who want to deploy Internet-facing applications that start instantly and scale Internetly, has been around since 2017 (or, to put it in perspective, since the iPhone 8) and has been helping developers code and deploy in seconds Continue reading

AWS Cloud Development Kit: Now I Get It

The AWS Cloud Development Kit (CDK) is an "open source software development framework to define your cloud application resources using familiar programming languages". When CDK launched in 2019, I remember reading the announcement and thinking, "Ok, AWS wants their own Terraform-esque tool. No surprise given how popular Terraform is." Months later, my friend and colleague Matt M. was telling me how he was using CDK in a project he was working on and how crazy cool it was.

I finally decided to give CDK a go for one of my projects. Here is what I discovered.
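The mental model that makes CDK click is that your code never runs in the cloud: it synthesizes a declarative template (CloudFormation), and that template is all AWS ever sees. Here is a toy illustration of that synthesis step in plain Python; it mimics the idea only and is not the real aws-cdk-lib API, and the class and resource names are invented:

```python
import json

# Toy "synthesis": imperative code builds an in-memory model of resources,
# then emits a declarative template. This is an analogy for how CDK works,
# not the actual CDK library.

class Stack:
    def __init__(self) -> None:
        self.resources = {}

    def add(self, logical_id: str, resource_type: str, **props) -> None:
        self.resources[logical_id] = {"Type": resource_type, "Properties": props}

    def synth(self) -> str:
        """Emit the declarative template the cloud provider would consume."""
        return json.dumps({"Resources": self.resources}, indent=2)

stack = Stack()
stack.add("AppBucket", "AWS::S3::Bucket",
          VersioningConfiguration={"Status": "Enabled"})
print(stack.synth())
```

The appeal over hand-written templates is that the imperative layer gives you loops, conditionals, and abstraction for free, while the output stays a plain, reviewable document.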

A Nearly Two-Year Part-Time Project Done – A Self-Sustaining Model!

Ever since I got interested in plants, getting some sort of metrics out of them has been a part-time obsession.

Iteration 1 – No wireless and no outdoor use; an indoor model on always-on USB power

Iteration 2 – Learnt about the ESP8266 microcontroller and its deep-sleep feature

Iteration 3 – Saving battery through deep sleep and battery power instead of USB mains; adding the ESP32 microcontroller

Iteration 4 – Studying lithium-ion batteries

Iteration 5 – Making the model wireless and USB-free, running on batteries

Iteration 6 – Containerising the entire software stack and integrating with AWS and Telnyx

Iteration 7 – Making the model self-sustaining through solar power and making it weather resistant

This completes an end-to-end IoT model with a microcontroller, a moisture sensor, and two lithium-ion batteries that are charged by a small solar panel. I am going to extend this to LoRaWAN and will try to achieve ultra-low-power, long-distance operation.
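Some back-of-the-envelope math shows why deep sleep (iterations 2–3) and solar (iteration 7) matter so much for a node like this. All the numbers below are illustrative assumptions, not measurements from my build (a 2500mAh cell, roughly 80mA while awake with the radio on, about 20µA in deep sleep, waking for 5 seconds every 10 minutes):

```python
# Rough battery-life estimate for a duty-cycled sensor node.
# All values are assumptions for illustration, not measured figures.
capacity_mah = 2500.0            # one 18650-class lithium-ion cell
awake_ma = 80.0                  # ESP32 awake, Wi-Fi radio active
sleep_ma = 0.02                  # deep-sleep current (20 uA)
awake_s, period_s = 5.0, 600.0   # wake for 5 s every 10 minutes

# Time-weighted average current over one wake/sleep cycle.
avg_ma = (awake_s * awake_ma + (period_s - awake_s) * sleep_ma) / period_s
days = capacity_mah / avg_ma / 24
print(f"average draw {avg_ma:.2f} mA -> about {days:.0f} days per charge")
```

By contrast, an always-on node drawing 80mA would drain the same cell in roughly a day, which is exactly why iteration 1 needed mains USB power.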

The idea is that there is an allotment 6km from where I live, and I will see whether AWS and LoRaWAN support my protocol needs.

Docker containers associated with this project

Grafana Dashboard – Retrieving data Continue reading