Verizon Taps Cisco for NFV Services Push
The partnership enables Verizon to address Cisco-specific customer needs and provide an ecosystem...
Another long night. I was working on my perfect, bug-free program in C, when the predictable thing happened:
$ clang skynet.c -o skynet
$ ./skynet
Segmentation fault (core dumped)
Oh, well... Maybe I'll have better luck taking over the world another night. But then it struck me: my program received a SIGSEGV signal and crashed with a "Segmentation fault" message. Where does the "V" come from?
Did I read it wrong? Was there a "Segmentation Vault"? Or did the Linux authors make a mistake? Shouldn't the signal be named SIGSEGF?
I asked my colleagues, and David Wragg quickly told me that the signal name stands for "Segmentation Violation". I guess that makes sense. A long, long time ago, computers used memory segmentation. Each memory segment had a defined length, called the segment limit. Accessing data beyond this limit caused a processor fault. This error code got reused by newer systems that use paging; I think the Intel manuals call this error "Invalid Page Fault". When it's triggered, it gets reported to userspace as a SIGSEGV signal. End of story.
Or is it?
Martin Levy pointed me to the ancient Version 6 UNIX documentation on "signal". This is Continue reading
One of the readers commenting on the ideas in my Disaster Recovery and Failure Domains blog post effectively said “In an active/passive DR scenario, having L3 DCI separation doesn’t protect you from STP loop/flood in your active DC, so why do you care?”
He’s absolutely right - if you have a cold disaster recovery site, it doesn’t matter if it’s bombarded by a gazillion flooded packets per second… but how often do you have a cold recovery site?
Verizon has joined The Climate Pledge—a commitment co-founded by Amazon and Global Optimism to...
Broadcom today announced solutions to accelerate decision making across multiple business and...
AT&T and T-Mobile US are set to slash thousands of jobs; VMware sparked a SASE Debate; and...
“As the quarter progressed, we saw a drop-off in deals, especially in the industries most...
Michael Mullany analyzed 20 years of Gartner hype cycles and got some (expected but still interesting) conclusions including:
Enjoy the reading, and keep these lessons in mind the next time you're sitting in a software-defined, intent-based, or machine-learning $vendor presentation.
My blog was at https://r2079.wordpress.com and it has now moved to https://r2079.com. Why this change?
First and Foremost – Thrill and Challenge
Secondly – Customization and Cost
Don’t get me wrong, I didn’t migrate because I wanted to get into web development; that’s not the case, and I’m not even at an intermediate level there!
Why? This is a custom domain, hosted with Amazon Route 53, and WordPress is built on a custom AWS instance. The reasons are very simple.
So, this is where it is. I will try to maintain the website now and see how this goes. Until now the infrastructure was maintained and patched by WordPress; from now on I will probably have to take care of it myself.
Monitoring is the topic for Day Two Cloud. Before you skip because you think it's boring, this conversation may change your mind. We dig into what's necessary to effectively monitor cloud-native and microservices applications to help you run infrastructure smoothly, improve troubleshooting, and anticipate issues before they affect performance or services. Our guest is Josh Barratt, Senior Principal Engineer at Twilio.
The post Day Two Cloud 053: Effectively Monitoring Cloud-Native Applications appeared first on Packet Pushers.
The new firewall embeds machine learning in the core of the firewall to stop threats, secure IoT...
"The optics are definitely bad," noted William Ho of 556 Ventures, citing the broader economic...
Unexpected challenges, the pivot to remote work, the lasting impact of the pandemic, and the fight...
The indomitable Greg Ferro joins this episode of the Hedge to talk about the path from automated to autonomic, including why you shouldn’t put everything into “getting automation right,” and why you still need to know the basics even if we reach a completely autonomic world.
We introduced VMware NSX to the market over seven years ago. The platform has helped thousands of customers worldwide transform their network and bring the public cloud experience on-premises. This resulted in higher levels of automation and insight, which, in turn, saved time and money. However, as customers continued to drive new use cases and requirements, we wanted to ensure NSX was completely future-ready; hence NSX-T was born.
NSX-T is the next-generation network virtualization and security platform, with a complete L2-L7 suite of services delivering switching, routing, firewalling, analytics, and load balancing entirely in software. Unlike NSX-V, NSX-T supports a variety of heterogeneous endpoints such as VMs, containers, and bare metal servers. The platform enables a wide range of use cases in intrinsic security, networking and security for multi-cloud and modern apps, and network automation. The past few releases delivered many new networking and security innovations on NSX-T; prominent among these are the crown jewels of the platform – NSX Intelligence, Federation, and NSX Distributed IDS/IPS.
Migrating from NSX for vSphere to NSX-T is top of mind for customers that need to transition. Here are answers to some questions that you, Continue reading
The composable systems trend has taken root in some of the world’s largest datacenters, most notably among hyperscale companies, but has been less quick to catch on in traditional high performance computing (HPC) environments. …
Composing ’Expanse’: Building Blocks for Future HPC was written by Nicole Hemsoth at The Next Platform.
This is the second part in a series of posts where we show how to use Docker to define your Go development environment in code. The goal of this is to make sure that you, your team, and the CI are all using the same environment. In part 1, we explained how to start a containerized development environment for local Go development, building an example CLI tool for different platforms and shrinking the build context to speed up builds. Now we are going to go one step further and learn how to add dependencies to make the project more realistic, caching to make the builds faster, and unit tests.
The Go program from part 1 is very simple and doesn’t have any Go dependencies. Let’s add a simple dependency – the commonly used github.com/pkg/errors package:
package main

import (
    "fmt"
    "os"
    "strings"

    "github.com/pkg/errors"
)

func echo(args []string) error {
    if len(args) < 2 {
        return errors.New("no message to echo")
    }
    _, err := fmt.Println(strings.Join(args[1:], " "))
    return err
}

func main() {
    if err := echo(os.Args); err != nil {
        fmt.Fprintf(os.Stderr, "%+v\n", err)
        os.Exit(1)
    }
}
Our example program is now a simple echo program that writes out the arguments the user passes in, or prints a "no message to echo" error and a stack trace if nothing is specified.
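As a quick sanity check, this is roughly what running the program looks like once the binary has been built for your local platform (assuming it ends up in bin/example, as with the Makefile used in this series):

$ ./bin/example Hello world
Hello world

Running it with no arguments prints the "no message to echo" error together with the stack trace added by the errors package, and exits with a non-zero status.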
We will use Go modules to handle this dependency. Running the following commands will create the go.mod and go.sum files:
$ go mod init
$ go mod tidy
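For reference, after these commands the go.mod file should look roughly like the sketch below. The module path is a placeholder (it depends on what you pass to go mod init; here I assume the repository linked at the end of this post), while the Go version and the errors version match the ones used elsewhere in this series:

module github.com/chris-crone/containerized-go-dev

go 1.14

require github.com/pkg/errors v0.9.1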
Now when we run the build, we will see that the dependencies are downloaded each time we build:
$ make
[+] Building 8.2s (7/9)
 => [internal] load build definition from Dockerfile                      0.0s
 ...
 => [build 3/4] COPY . .                                                  0.1s
 => [build 4/4] RUN GOOS=darwin GOARCH=amd64 go build -o /out/example .   7.9s
 => => # go: downloading github.com/pkg/errors v0.9.1
This is clearly inefficient and slows things down. We can fix this by downloading our dependencies as a separate step in our Dockerfile:
FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS build
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .
ARG TARGETOS
ARG TARGETARCH
RUN GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .
FROM scratch AS bin-unix
COPY --from=build /out/example /
...
Notice that we copy the go.* files and download the modules before adding the rest of the source. This allows Docker to cache the modules, as it will only rerun these steps if the go.* files change.
Separating the downloading of our dependencies from our build is a great improvement but each time we run the build, we are starting the compile from scratch. For small projects this might not be a problem but as your project gets bigger you will want to leverage Go’s compiler cache.
To do this, you will need to use BuildKit’s Dockerfile frontend (https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md). Our updated Dockerfile is as follows:
# syntax = docker/dockerfile:1-experimental
FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS build
ARG TARGETOS
ARG TARGETARCH
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .
RUN --mount=type=cache,target=/root/.cache/go-build \
GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .
FROM scratch AS bin-unix
COPY --from=build /out/example /
...
Notice the # syntax line at the top of the Dockerfile that selects the experimental Dockerfile frontend, and the --mount option attached to the RUN command. This mount option means that each time the go build command is run, the container will have the cache mounted to Go's compiler cache folder.
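One practical note: this experimental frontend requires BuildKit. If BuildKit is not already the default builder in your Docker installation, you may need to enable it explicitly when running the build, for example:

$ DOCKER_BUILDKIT=1 make

Because the Makefile targets simply call docker build, the environment variable is inherited by those invocations.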
Benchmarking this change for the example binary on a 2017 MacBook Pro 13”, I see that a small code change takes 11 seconds to build without the cache and less than 2 seconds with it. This is a huge improvement!
All projects need tests! We’ll add a simple test for our echo function in a main_test.go file:
package main

import (
    "testing"

    "github.com/stretchr/testify/require"
)

func TestEcho(t *testing.T) {
    // Test happy path
    err := echo([]string{"bin-name", "hello", "world!"})
    require.NoError(t, err)
}

func TestEchoErrorNoArgs(t *testing.T) {
    // Test empty arguments
    err := echo([]string{})
    require.Error(t, err)
}
These tests cover the happy path and ensure that we get an error if the echo function is passed an empty list of arguments.
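One detail worth calling out: the test file introduces a new dependency, github.com/stretchr/testify/require, so go.mod and go.sum should be updated again, for example by rerunning:

$ go mod tidy

This way the dependency is fetched in the cached go mod download layer rather than being resolved at test time.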
We will now want another build target for our Dockerfile so that we can run the tests and build the binary separately. This will require a refactor into a base stage and then unit-test and build stages:
# syntax = docker/dockerfile:1-experimental
FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS base
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .
FROM base AS build
ARG TARGETOS
ARG TARGETARCH
RUN --mount=type=cache,target=/root/.cache/go-build \
GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .
FROM base AS unit-test
RUN --mount=type=cache,target=/root/.cache/go-build \
go test -v .
FROM scratch AS bin-unix
COPY --from=build /out/example /
...
Note that go test uses the same cache as the build, so we mount the cache for this stage too. This allows Go to only rerun tests when there have been code changes, which makes the test runs quicker.
We can also update our Makefile to add a test target:
all: bin/example
test: lint unit-test

PLATFORM=local

.PHONY: bin/example
bin/example:
	@docker build . --target bin \
		--output bin/ \
		--platform ${PLATFORM}

.PHONY: unit-test
unit-test:
	@docker build . --target unit-test
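With these targets in place, the containerized tests can be run the same way as the build, for example:

$ make unit-test

Note that the lint prerequisite on the test target is only added later in the series, so unit-test is the target to use for now.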
In this post we have seen how to add Go dependencies efficiently, add caching to make builds faster, and add unit tests to our containerized Go development environment. In the next and final post of the series, we are going to complete our journey and learn how to add a linter, set up GitHub Actions CI, and apply some extra build optimizations.
You can find the finalized source for this example on my GitHub: https://github.com/chris-crone/containerized-go-dev
You can read more about the experimental Dockerfile syntax here: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md
If you’re interested in how builds work at Docker, take a look at the Buildx repository: https://github.com/docker/buildx
Read the whole blog post series here.
The post Containerize your Go Developer Environment – Part 2 appeared first on Docker Blog.