Survey: Digital transformation can reveal network weaknesses

Digital transformation is a catch-all phrase that describes the process of using technology to modernize or even revolutionize how services are delivered to customers. Not only technology but also people and processes commonly undergo fundamental changes for the ultimate goal of significantly improving business performance. Such transformations have become so mainstream that IDC estimated that 40% of all technology spending now goes toward digital transformation projects, with enterprises spending in excess of $2 trillion on their efforts through 2019.

VMware Acquires Nyansa for AI-Aided Networking Analytics

VMware has been on a buying jag in the past year, and its latest planned acquisition is the Palo Alto, Calif.-based AI networking analytics company Nyansa. Sanjay Uppal, CEO and co-founder of VeloCloud (now part of VMware), outlined three benefits in the acquisition announcement blog post. First, Nyansa can proactively predict client problems, optimize their network, better enable the behavior of critical IoT devices, and justify infrastructure changes based on actual user, network, and application data. Second, customers will be able to use the breadth and depth of Nyansa's data ingestion and analysis, including packet analysis and metrics via API, across multivendor wired and wireless LAN environments. Finally, the combination of Nyansa's AI/ML capabilities with VMware's existing analytics, visibility, and remediation capabilities will make it easier to operate and troubleshoot the virtual cloud network and accelerate the realization of a self-healing network.

Nyansa was valued at around $65 million after its most recent funding round two years ago and had raised about $26.5 million. The transaction is expected to close within the next few months, subject to customary closing conditions. VMware is a sponsor of The New Stack.

Managing Financial Network Threats with SD-Branch

Financial networks are at constant risk, especially since they rely on risky IoT devices....

Arista Networks Buys Big Switch

Several incumbent networking players, including Cisco, Dell Technologies, VMware, Juniper Networks,...

DynDNS and OpenVPN – Remote Management

I was visiting my home and doing some hobby IT setup with Raspberry Pis. The problem is that I have had trouble many times accessing my home PC from another remote location for various reasons, let's say a crappy ISP. I contacted my ISP and they said I would need to get a static IP and also pay to open up two non-standard ports. It's like you pay to get tortured, and then there's the additional headache of port forwarding.

To add more to the pain, the IP that I get from my upstream provider is a private IP; wow, I haven't seen that for a while. Anyway, to get around this I was thinking about using OpenVPN as a solution along with DynDNS.

Now, the setup is very simple:

Client-PC (Location 1) ———- AWS (OpenVPN) ———- Client-PC (Location 2)

Why AWS? Accessibility and cost.

The problem is the changing IP. I don't have any business requirement or criticality to justify buying an Elastic IP, but the whole point would be lost if my clients didn't know what address to connect to; worse, I would never be able to reach Location 2 from Location 1 to change the IP addresses.

I have mapped OpenVPN to a DynDNS update script (ddclient):

https://help.dyn.com/ddclient/
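
As a rough sketch, a minimal ddclient configuration on the AWS instance might look like the following. The hostname, credentials, and IP-check service are placeholders, not values from my actual setup:

# /etc/ddclient.conf (minimal sketch; all values are placeholders)
daemon=300                        # re-check the public IP every 5 minutes
use=web, web=checkip.dyndns.org   # discover the current public IP via a web service
protocol=dyndns2                  # the Dyn update protocol
server=members.dyndns.org
login=my-dyn-username
password='my-dyn-password'
myvpn.dyndns.org                  # hostname to keep pointed at the instance

The OpenVPN clients at both locations can then reference the hostname instead of a raw IP, for example with a line like "remote myvpn.dyndns.org 1194" in the client config, so the changing AWS address no longer matters.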

This really solved Continue reading

Faster builds in Docker Compose 1.25.1 thanks to BuildKit Support

One of the most requested features for the docker-compose tool is definitely support for building with BuildKit, an alternative builder with great capabilities like caching, concurrency, and the ability to use custom BuildKit front-ends, just to mention a few… Ah, and a nice blue output! The good news is that Docker Compose 1.25.1, which was released in early January, includes BuildKit support!

BuildKit support for Docker Compose is actually achieved by redirecting the docker-compose build to the Docker CLI with a limited feature set.
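
For context, builds in Compose are driven by a build section in the service definition. A minimal docker-compose.yml might look like this (the service name and image tag are illustrative):

version: "3.7"
services:
  web:
    build: .              # build from the Dockerfile in the current directory
    image: myapp:latest   # tag to apply to the built image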

Enabling BuildKit builds

To enable this, we have to align some stars.

First, it requires that the Docker CLI binary be present in your PATH:

$ which docker
/usr/local/bin/docker

Second, docker-compose has to be run with the environment variable COMPOSE_DOCKER_CLI_BUILD set to 1, as in:

$ COMPOSE_DOCKER_CLI_BUILD=1 docker-compose build

This instruction tells docker-compose to use the Docker CLI when executing a build. You should see the same build output, but starting with the experimental warning.

As docker-compose passes its environment variables to the Docker CLI, we can also tell the CLI to use BuildKit instead of the default builder. To accomplish that, we can execute this:

$ COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose build
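
To avoid prefixing every command, both variables can also be exported once for the shell session (or set in your shell profile):

$ export COMPOSE_DOCKER_CLI_BUILD=1
$ export DOCKER_BUILDKIT=1
$ docker-compose build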

A Continue reading

Sponsored Post: Fauna, Sisu, Educative, PA File Sight, Etleap, Triplebyte, Stream

Who's Hiring? 

  • Sisu Data is looking for machine learning engineers who are eager to deliver their features end-to-end, from Jupyter notebook to production, and provide actionable insights to businesses based on their first-party, streaming, and structured relational data. Apply here.

  • Triplebyte lets exceptional software engineers skip screening steps at hundreds of top tech companies like Apple, Dropbox, Mixpanel, and Instacart. Make your job search O(1), not O(n). Apply here.

  • Need excellent people? Advertise your job here! 

Cool Products and Services

  • Level up on in-demand technologies and prep for your interviews on Educative.io, featuring popular courses like the bestselling Grokking the System Design Interview. For the first time ever, you can now sign up for a subscription to get unlimited access to every course on the platform at a discounted price through the holiday period only. You'll also get to lock in this price as long as you're a subscriber. 

  • Stateful JavaScript Apps. Effortlessly add state to your JavaScript apps with FaunaDB. Generous free tier. Try now!

  • PA File Sight - Actively protect servers from ransomware, audit file access to see who is deleting files, reading files or moving files, Continue reading

Verizon Exec Launches Privafy, Challenges Firewall, SD-WAN, VPN Vendors

The security startup has about a dozen paying customers and is “working with some of the largest...

Day Two Cloud 032: The Foggy Path To Cloud Certification

Certifications are a tried and true way to boost skills and knowledge, burnish your resume, and create new opportunities. But when it comes to cloud, which certifications should you pursue? Can cloud cert programs keep up with technology churn and rapid rollout of new services? How should you study? What if you fail? We tackle all these questions and more with guest Mike Pfeiffer.

The post Day Two Cloud 032: The Foggy Path To Cloud Certification appeared first on Packet Pushers.

Red Hat, Cloud Save IBM

More than 2,000 IBM clients were using its container systems, and it signed 21 Red Hat deals in Q4...

Follower Clusters – 3 Major Use Cases for Syncing SQL & NoSQL Deployments

Follower clusters are a ScaleGrid feature that allows you to keep two independent database systems (of the same type) in sync. Unlike cloning or replication, this allows you to maintain an active, point-in-time copy of your production data. This extra cluster, known as a follower cluster, can be leveraged for multiple use cases, including analyzing, optimizing, and testing your application performance for MongoDB, MySQL, and PostgreSQL. In this blog post, we will cover the top three scenarios for leveraging follower clusters for your application.

How Do Follower Clusters Differ From Replication?

Unlike a static clone, the data is imported on a set schedule, so your follower cluster is always in sync with your production cluster. Here are a few critical ways in which it differs from replication:

Migration from VMware NSX for vSphere to NSX-T

Migration to VMware NSX-T Data Center (NSX-T) is top of mind for customers who are on NSX for vSphere (NSX-V). Broadly speaking, there are two main methods to migrate from NSX for vSphere to NSX-T Data Center: In Parallel Migration and In Place Migration. This blog post is a high-level overview of these two approaches to migration.

2 Methods for VMware NSX Migration

Customers could take one of two approaches for migration.

In Parallel Migration

In this method, the NSX-T infrastructure is deployed in parallel alongside the existing NSX-V based infrastructure. While some components of NSX-V and NSX-T, such as management, could coexist, the compute clusters running the workloads would run on their own hardware. This could be net-new hardware or unused hardware reclaimed from NSX-V.

Migrating the workloads in this approach could take a couple of different forms:

  • Cattle:  New workloads are deployed on NSX-T and the older workloads are allowed to die over time.
  • Pets:  Lift and shift workloads over to the new NSX-T infrastructure.

In Place Migration

There is a simpler method, though! A method that doesn't require dedicated hardware: the in-place migration approach. Curious? This method uses Continue reading

ShortestPathFirst Now Has a New, Shorter Domain Name – spfirst.net

ShortestPathFirst now has a new, shorter domain name (spfirst.net) that we intend to use for future correspondence, marketing, and business initiatives. We will continue to maintain the longer domain (shortestpathfirst.net) for backward compatibility and business continuity, but for all intents and purposes we will use the new domain name where possible. Stay tuned for lots …

5G and Me: And Security

In today’s uber-connected world, everyone has dealt with that little voice in the back of the...

BrandPost: Huawei CloudCampus Triumphs over Cisco DNA, Tolly Reports

With new technologies such as automation and artificial intelligence (AI) emerging all around us, enterprise digitalization is inevitable. But however much enterprises want to transform, there is a significant cost in terms of time and technological manpower to ensure the system runs smoothly. In particular, enterprises are often faced with a variety of human factors that can hamper digitalization projects, including organizational resistance to change, lack of a clear vision, and an inability to gather and leverage customer data, to name just a few.

On the technical front, the challenge is in finding the right products and services to overcome the inflexibility of the technology stack and development processes. Therefore, picking the most suitable and flexible solutions to meet the transformation challenges is usually the key to success. The right solution not only streamlines deployment but also makes it easier for the people involved in the exercise: the easier the jobs, the less reluctant the organization is toward the changes.

Vapor IO Raises $90M to Build ‘Nationwide’ Edge Computing Network

It also reached an agreement for Cloudflare to deploy its cloud services on Vapor IO’s Kinetic...

Juniper to MikroTik – MPLS and VPNv4 interop

Juniper to MikroTik – a new series

Previously, I’ve written a number of articles that compared syntax between Cisco and MikroTik and have received some great feedback on them.

As such, I decided to begin a series on Juniper to MikroTik, starting with MPLS and L3VPN interop, as it relates to a project I was working on last year.

In the world of network engineering, learning a new syntax for a NOS can be overwhelming if you need a specific set of config in a short timeframe. The command structure for RouterOS can be a bit challenging if you are used to Juniper CLI commands.

If you’ve worked with Juniper gear and are comfortable with how to deploy that vendor, it is helpful to draw comparisons between the commands, especially if you are trying to build a network with a MikroTik and Juniper router.
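
As a quick taste of the syntax differences, here is a hypothetical side-by-side for enabling MPLS/LDP on a single core-facing interface; the interface names and loopback address are illustrative and not taken from the lab below:

Junos:

set interfaces ge-0/0/0 unit 0 family mpls
set protocols mpls interface ge-0/0/0.0
set protocols ldp interface ge-0/0/0.0

RouterOS (v6):

/mpls ldp set enabled=yes lsr-id=10.255.0.1 transport-address=10.255.0.1
/mpls ldp interface add interface=ether1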

Lab Overview

The lab consists of (3) Juniper P routers and (2) MikroTik PE routers. Although we did not get into L3VPN in this particular lab, the layout is the same.

A note on route-targets

It seems that the format of the route-target has some bearing on this being successful. Normally I'll use a format like Continue reading