Palo Alto Networks Rolls ML Into Firewall, Containerizes It
The new firewall embeds machine learning in the core of the firewall to stop threats, secure IoT...
"The optics are definitely bad," noted William Ho of 556 Ventures, citing the broader economic...
Unexpected challenges, the pivot to remote work, the lasting impact of the pandemic, and the fight...
The indomitable Greg Ferro joins this episode of the Hedge to talk about the path from automated to autonomic, including why you shouldn’t put everything into “getting automation right,” and why you still need to know the basics even if we reach a completely autonomic world.
We introduced VMware NSX to the market over seven years ago. The platform has helped thousands of customers worldwide transform their network and bring the public cloud experience on-premises. This resulted in higher levels of automation and insight, which, in turn, saved time and money. However, as customers continued to drive new use cases and requirements, we wanted to ensure NSX was completely future-ready; hence NSX-T was born.
NSX-T is the next generation of the network virtualization and security platform, with a complete L2-L7 suite of services delivering switching, routing, firewalling, analytics, and load balancing entirely in software. Unlike NSX-V, NSX-T supports a variety of heterogeneous endpoints such as VMs, containers, and bare metal servers. The platform enables a wide range of use cases in intrinsic security, networking and security for multi-cloud and modern apps, and network automation. The past few releases delivered many new networking and security innovations on NSX-T; prominent among these are the crown jewels of the platform: NSX Intelligence, Federation, and NSX Distributed IDS/IPS.
Migrating from NSX for vSphere to NSX-T is top of mind for customers that need to transition. Here are answers to some questions that you, Continue reading
The composable systems trend has taken root in some of the world’s largest datacenters, most notably among hyperscale companies, but has been slower to catch on in traditional high performance computing (HPC) environments. …
Composing ‘Expanse’: Building Blocks for Future HPC was written by Nicole Hemsoth at The Next Platform.
This is the second part in a series of posts where we show how to use Docker to define your Go development environment in code. The goal of this is to make sure that you, your team, and the CI are all using the same environment. In part 1, we explained how to start a containerized development environment for local Go development, building an example CLI tool for different platforms and shrinking the build context to speed up builds. Now we are going to go one step further and learn how to add dependencies to make the project more realistic, caching to make the builds faster, and unit tests.
The Go program from part 1 is very simple and doesn’t have any dependencies. Let’s add a simple dependency – the commonly used github.com/pkg/errors package:
package main

import (
	"fmt"
	"os"
	"strings"

	"github.com/pkg/errors"
)

func echo(args []string) error {
	if len(args) < 2 {
		return errors.New("no message to echo")
	}
	_, err := fmt.Println(strings.Join(args[1:], " "))
	return err
}

func main() {
	if err := echo(os.Args); err != nil {
		fmt.Fprintf(os.Stderr, "%+v\n", err)
		os.Exit(1)
	}
}
Our example program is now a simple echo program that writes out the arguments the user passes, or “no message to echo” and a stack trace if nothing is specified.
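Assuming the binary was built to bin/example as in part 1, a quick run looks something like this (the stack trace that pkg/errors prints for the error case is elided here):
$ ./bin/example hello world!
hello world!
$ ./bin/example
no message to echo
...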
We will use Go modules to handle this dependency. Running the following commands will create the go.mod and go.sum files:
$ go mod init
$ go mod tidy
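For reference, the resulting go.mod records the module path and the dependency; it will look something like the following, where the module path is just a placeholder for whatever go mod init inferred or was given:
module github.com/you/example

go 1.14

require github.com/pkg/errors v0.9.1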
Now when we run the build, we will see that the dependencies are downloaded every time we build:
$ make
[+] Building 8.2s (7/9)
 => [internal] load build definition from Dockerfile                     0.0s
...
 => [build 3/4] COPY . .                                                 0.1s
 => [build 4/4] RUN GOOS=darwin GOARCH=amd64 go build -o /out/example .  7.9s
 => => # go: downloading github.com/pkg/errors v0.9.1
This is clearly inefficient and slows things down. We can fix this by downloading our dependencies as a separate step in our Dockerfile:
FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS build
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .
ARG TARGETOS
ARG TARGETARCH
RUN GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .
FROM scratch AS bin-unix
COPY --from=build /out/example /
...
Notice that we copy the go.* files and download the modules before adding the rest of the source. This allows Docker to cache the modules, as it will only rerun these steps if the go.* files change.
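If we rebuild after touching only the Go source, the dependency steps are now served from Docker’s layer cache. The output looks something like this (step numbers and timings are illustrative):
$ make
...
 => CACHED [build 2/5] COPY go.* .                                       0.0s
 => CACHED [build 3/5] RUN go mod download                               0.0s
 => [build 4/5] COPY . .                                                 0.1s
...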
Separating the downloading of our dependencies from our build is a great improvement, but each time we run the build we are still starting the compile from scratch. For small projects this might not be a problem, but as your project gets bigger you will want to leverage Go’s compiler cache.
To do this, you will need to use BuildKit’s Dockerfile frontend (https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md). Our updated Dockerfile is as follows:
# syntax = docker/dockerfile:1-experimental
FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS build
ARG TARGETOS
ARG TARGETARCH
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .
RUN --mount=type=cache,target=/root/.cache/go-build \
GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .
FROM scratch AS bin-unix
COPY --from=build /out/example /
...
Notice the # syntax line at the top of the Dockerfile, which selects the experimental Dockerfile frontend, and the --mount option attached to the RUN command. This mount option means that each time the go build command runs, the container has the cache mounted at Go’s compiler cache folder.
Benchmarking this change for the example binary on a 2017 MacBook Pro 13”, I see that a small code change takes 11 seconds to build without the cache and less than 2 seconds with it. This is a huge improvement!
All projects need tests! We’ll add a simple test for our echo function in a main_test.go file:
package main

import (
	"testing"

	"github.com/stretchr/testify/require"
)

func TestEcho(t *testing.T) {
	// Test happy path
	err := echo([]string{"bin-name", "hello", "world!"})
	require.NoError(t, err)
}

func TestEchoErrorNoArgs(t *testing.T) {
	// Test empty arguments
	err := echo([]string{})
	require.Error(t, err)
}
These tests cover the happy path and ensure that we get an error if the echo function is passed an empty list of arguments.
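Note that the tests introduce a new dependency on testify. Assuming the module setup above, it can be added the usual way so that go.mod and go.sum are updated before the containerized build runs:
$ go get github.com/stretchr/testify
$ go mod tidy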
We will now want another build target for our Dockerfile so that we can run the tests and build the binary separately. This will require a refactor into a base stage and then unit-test and build stages:
# syntax = docker/dockerfile:1-experimental
FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS base
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .
FROM base AS build
ARG TARGETOS
ARG TARGETARCH
RUN --mount=type=cache,target=/root/.cache/go-build \
GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .
FROM base AS unit-test
RUN --mount=type=cache,target=/root/.cache/go-build \
go test -v .
FROM scratch AS bin-unix
COPY --from=build /out/example /
...
Note that go test uses the same cache as the build, so we mount the cache for this stage too. This allows Go to only rerun tests when there have been code changes, which makes the test runs quicker.
We can also update our Makefile to add a test target:
all: bin/example

test: lint unit-test

PLATFORM=local

.PHONY: bin/example
bin/example:
	@docker build . --target bin \
		--output bin/ \
		--platform ${PLATFORM}

.PHONY: unit-test
unit-test:
	@docker build . --target unit-test
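With these targets in place, running the unit tests inside the containerized environment should be as simple as the following (the lint target referenced by test only arrives in the next part of the series, so just unit-test works for now):
$ make unit-test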
In this post, we have seen how to add Go dependencies efficiently, caching to make the builds faster, and unit tests to our containerized Go development environment. In the next and final post of the series, we are going to complete our journey and learn how to add a linter, set up GitHub Actions CI, and apply some extra build optimizations.
You can find the finalized source for this example on my GitHub: https://github.com/chris-crone/containerized-go-dev
You can read more about the experimental Dockerfile syntax here: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md
If you’re interested in build at Docker, take a look at the Buildx repository: https://github.com/docker/buildx
Read the whole blog post series here.
The post Containerize your Go Developer Environment – Part 2 appeared first on Docker Blog.
Ansible Playbooks are very easy to read and their linear execution makes it simple to understand what will happen while a playbook is executing. Unfortunately, in some circumstances, the things you need to automate may not function in a linear fashion. For example, I was once asked to perform the following tasks with Ansible:
While the request sounded simple, upon further investigation it would prove more challenging for the following reasons:
Define Time to Fun
The post Dictionary: Time to Fun appeared first on EtherealMind.
CEO Chuck Robbins didn't provide specifics but said Cisco is committed to hiring and promoting...
Networking conferences play a big role in many of our professional lives, and Cisco Live is the biggest one when it comes to networking. This year being what it is, we’re seeing our favorite in-person events transformed into virtual, online-only events. Considering how much community plays a role in how these events make an impact, we figured we would wax nostalgic on past events, share some favorite memories, and explore how this one event came to be significant for so many of us.
Rather than a single continuous conversation made up of a group of talking heads, we’ve recorded this episode in segments, each focusing on one person’s experiences with the conference. Also, due to the length, we’ve split this episode into two parts. This is part 2. Part 1 should also be available in your podcatcher, or if you’re listening to this on our website you can find part 1 here.
A considerable thank you to Unimus for sponsoring today’s episode. Unimus is a fast-to-deploy and easy-to-use network automation and configuration management solution. You can learn more about how you can start automating your network in under 15 minutes at unimus. Continue reading
Networking conferences play a big role in many of our professional lives, and Cisco Live is the biggest one when it comes to networking. This year being what it is, we’re seeing our favorite in-person events transformed into virtual, online-only events. Considering how much community plays a role in how these events make an impact, we figured we would wax nostalgic on past events, share some favorite memories, and explore how this one event came to be significant for so many of us.
Rather than a single continuous conversation made up of a group of talking heads, we’ve recorded this episode in segments, each focusing on one person’s experiences with the conference. Also, due to the length, we’ve split this episode into two parts. This is part 1. Part 2 should also be available in your podcatcher, or if you’re listening to this on our website you can find part 2 here.
A considerable thank you to Unimus for sponsoring today’s episode. Unimus is a fast-to-deploy and easy-to-use network automation and configuration management solution. You can learn more about how you can start automating your network in under 15 minutes at unimus. Continue reading
I am in my third year at Northeastern University, pursuing an undergraduate degree in Marketing and Psychology. Five months ago I joined Cloudflare as an intern on the APAC Marketing team in the beautiful Singapore office. When searching for internships Cloudflare stood out as a place I could gain skills in marketing, learn from amazing mentors, and have space to take ownership in projects. As a young, but well-established company, Cloudflare provides the resources for their interns to work cross functionally and creatively and truly be a part of the exponential growth of the company.
Earlier this week, I hopped on a virtual meeting with a few coworkers, thinking everything was set to record a webinar. As I shared my screen to explain how to navigate the platform, I realised the setup was incorrect and we couldn’t start on time. Due to the virtual nature of the meeting, my coworkers didn’t see the panic on my face and had no idea what was going on. I corrected the issue and set up an additional trial-run session, issuing apologies to both coworkers. They both took it in stride and expressed that it happens to the Continue reading
In high-load cloud networks there is so much traffic that even 100G/400G port speeds do not suffice, so sharing the load over multiple links is the only feasible solution.
ECMP stands for Equal Cost Multi-Path – when a route …
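As a rough illustration of the idea (a sketch under assumed details, not code from the article): an ECMP implementation typically hashes each packet’s five-tuple and uses the result to pick one of the equal-cost next hops, so every packet of a given flow follows the same link while different flows spread across all of them. A minimal sketch in Go, with hypothetical types and next-hop addresses:
package main

import (
	"fmt"
	"hash/fnv"
)

// fiveTuple identifies a flow; hashing it keeps all packets of a flow on one path.
type fiveTuple struct {
	srcIP, dstIP     string
	srcPort, dstPort uint16
	proto            uint8
}

// pickPath hashes the five-tuple and selects one of the equal-cost next hops.
func pickPath(ft fiveTuple, nextHops []string) string {
	h := fnv.New32a()
	fmt.Fprintf(h, "%s|%s|%d|%d|%d", ft.srcIP, ft.dstIP, ft.srcPort, ft.dstPort, ft.proto)
	return nextHops[h.Sum32()%uint32(len(nextHops))]
}

func main() {
	hops := []string{"10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"}
	flow := fiveTuple{"192.0.2.10", "198.51.100.7", 49152, 443, 6}
	// The same flow always maps to the same link; different flows spread out.
	fmt.Println(pickPath(flow, hops))
}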
Read on for insights from educators, parents, and institutional leaders on what we’ve learned...
Microsoft Corp. and SAS announced a strategic partnership. The two companies will enable customers...
While discussing SD-WAN and VMware's emerging SASE offering, COO Rajiv Ramaswami opened a can of...
Xilinx introduced two real-time computing video appliances for easy-to-scale, ultra-high-density...