The following post is by Jeremy Rossbach, Chief Technical Evangelist at Broadcom. We thank Broadcom for being a sponsor. Hybrid work and the ongoing transformations ushered in by SD-WAN, cloud, SaaS, and SASE have created an entirely new network landscape for today’s network operations (NetOps) teams to manage and monitor. Now, the user experience is […]
With each passing year, the phrase “The network is the computer,” coined in 1984 by John Gage, director of research and co-founder of Sun Microsystems, becomes more and more true. …
Everyone wants the best network, so they say, but as usual with “bestness” goals, there’s not much agreement on how to achieve it. Do you go for best-of-breed in your equipment, or maybe for three vendors per product area, or maybe open white-box networks? A bit over three-quarters of enterprises said in 2020 that open networks and white boxes would give them best-of-breed options, but what’s happened since then seems like a blast from the past, vendor-wise.

Let’s start with an interesting truth. Well over 95% of established enterprises that can look back to the origins of their networks say they started off with a single-vendor network. Enterprises that launched within the last five years (and there are very few of them) account for almost all of those that didn’t start with one vendor. If you think about it, this is logical: starting off with networking is daunting enough without adding the complication of network integration and the management of multiple sources of technology.
Today's Tech Bytes podcast gets into network automation with sponsor BackBox. BackBox’s approach to automation is to focus on network engineers and integrate automation with how they already do their jobs. BackBox works with more than 180 network and security vendors.
Global political unrest and climate change are bringing new attention to the fragility of the undersea cable networks that carry about 95% of international digital traffic.
Take a Network Break! On today's episode we discuss a record quarter for switch sales, examine Germany's mixed signals about allowing Huawei gear in its networks, and debate whether employees' frustration over Google's desk-sharing plan is just entitled whining or a legitimate complaint. Plus more IT news.
Red Hat Ansible Automation Platform 2 is the next generation automation platform from Red Hat’s trusted enterprise technology experts. We are excited to announce that the Ansible Automation Platform 2.3 release includes automation controller 4.3.
In the previous blog, we saw that automation controller 4.1 provides significant performance improvements compared to Red Hat Ansible Tower 3.8. Automation controller 4.3 takes that one step further. We will elaborate on an important change to the callback receiver workers in automation controller 4.3 and how it can impact performance.
Callback Receiver
The callback receiver is the process in charge of transforming the standard output of Ansible into serialized objects in the automation controller database. This enables reviewing and querying results from across all your infrastructure and automation. The process is I/O- and CPU-intensive, so it deserves careful performance consideration.
Every control node in automation controller has a callback receiver process. It receives job events that result from Ansible jobs. Job events are JSON structures, created when Ansible calls the runner callback plugin hooks. This enables Ansible to capture the result of a playbook run. The job event data structures contain …
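As a rough illustration only (the field names below are simplified and are not the controller's exact schema), a job event is essentially a JSON record that ties one callback invocation to a job, host, and task:

```typescript
// Simplified sketch of a job event's shape; the real automation controller
// schema has more fields, and these names are illustrative only.
interface JobEvent {
  job: number;                          // ID of the job this event belongs to
  host: string;                         // inventory host the task ran against
  task: string;                         // task name reported by the callback plugin
  event: string;                        // hook name, e.g. "runner_on_ok" or "runner_on_failed"
  failed: boolean;                      // whether the task failed on this host
  event_data: Record<string, unknown>;  // raw module result captured by the callback
  created: string;                      // ISO-8601 timestamp of event creation
}

// Example of the kind of record the callback receiver would serialize and store.
const example: JobEvent = {
  job: 42,
  host: "web01.example.com",
  task: "Install nginx",
  event: "runner_on_ok",
  failed: false,
  event_data: { changed: true, rc: 0 },
  created: "2023-03-13T09:30:00Z",
};
```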
Someone in your organization may have just submitted an administrator username and password for an internal system to the wrong website. And just like that, an attacker is now able to exfiltrate sensitive data.
How did it all happen? A well-crafted email.
Detecting, blocking, and mitigating the risks of phishing attacks is arguably one of the hardest challenges any security team faces.
Starting today, we are opening beta access to our new brand-protection and anti-phishing tools directly from our Security Center dashboard, allowing you to catch and mitigate phishing campaigns targeting your organization even before they happen.
The challenge of phishing attacks
Perhaps the most publicized threat vector over the past several months has been phishing. These attacks are highly sophisticated, difficult to detect, becoming more frequent, and can have devastating consequences for businesses that fall victim to them.
One of the biggest challenges in preventing phishing attacks is the sheer volume of attempts and the difficulty of distinguishing legitimate emails and websites from fraudulent ones. Even when users are vigilant, it can be hard to spot the subtle differences that attackers use to make their phishing emails and websites look convincing.
You wake up in the morning feeling sleepy and preoccupied, an urgent email arrives from a seemingly familiar source, and without much thought you click on a link you shouldn’t have. Sometimes it’s that simple, and this more-than-30-year-old phishing method lets chaos break loose: in your personal bank account or social media, where an attacker can go on to trick your family and friends, or at your company, where systems and data can be compromised, services disrupted, and every consequence that follows. Following up on our “Top 50 Most Impersonated Brands in phishing attacks” post, here are some tips to catch these scams before you fall for them.
We’re all human, and responding to or interacting with a malicious email remains the primary way to breach organizations. According to CISA, 90% of cyber attacks begin with a phishing email, and losses from a similar type of phishing attack, known as business email compromise (BEC), are a $43 billion problem facing organizations. One thing is for sure: phishing attacks are getting more sophisticated every day thanks to emerging tools like AI chatbots and the expanded usage of various communication …
In today’s digital world, security is a top priority for businesses. Whether you’re a Fortune 500 company or a startup just taking off, it’s essential to implement security measures in order to protect sensitive information. Security starts inside an organization; it starts with having Zero Trust principles that protect access to resources.
Mutual TLS (mTLS) is useful in a Zero Trust world to secure a wide range of network services and applications: APIs, web applications, microservices, databases and IoT devices. Cloudflare has products that enforce mTLS: API Shield uses it to secure API endpoints and Cloudflare Access uses it to secure applications. Now, with mTLS support for Workers you can use Workers to authenticate to services secured by mTLS directly. mTLS for Workers is now generally available for all Workers customers!
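As a rough sketch of what this can look like in practice, assuming the mtls_certificates binding described in the Workers documentation (the binding name, certificate ID, and target URL below are placeholders, not anything from this announcement):

```typescript
// Sketch only: assumes the wrangler.toml mTLS binding below and the ambient
// types from @cloudflare/workers-types; certificate ID and URL are placeholders.
//
// wrangler.toml:
// mtls_certificates = [
//   { binding = "ORIGIN_CLIENT", certificate_id = "<CERTIFICATE_ID>" }
// ]

export interface Env {
  ORIGIN_CLIENT: Fetcher; // behaves like fetch(), but presents the bound client certificate
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // The outbound TLS handshake to this mTLS-protected service uses the client cert.
    return env.ORIGIN_CLIENT.fetch("https://api.internal.example.com/status");
  },
};
```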
A recap on how TLS works
Before diving into mTLS, let’s first understand what TLS (Transport Layer Security) is. Any website that uses HTTPS, like the one you’re reading this blog on, uses TLS encryption. TLS is used to create private communications on the Internet: it gives you assurance that the website you’re connecting to is legitimate and that any information passed to it is encrypted.
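As a small illustration of that assurance, here is a minimal Node.js sketch (hostname is a placeholder) that opens a TLS connection and reads back the server certificate the handshake verified:

```typescript
// Sketch: open a TLS connection and inspect the certificate the handshake
// verified. Everything sent over this socket afterwards would be encrypted.
import * as tls from "node:tls";

const socket = tls.connect({ host: "example.com", port: 443, servername: "example.com" }, () => {
  const cert = socket.getPeerCertificate();
  console.log("chain trusted:", socket.authorized); // true if the CA chain checked out
  console.log("issued to:", cert.subject.CN);       // who the certificate identifies
  console.log("valid until:", cert.valid_to);       // expiry date of the certificate
  socket.end();
});
```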
Realizing the goals of Zero Trust is a journey: moving from a world of static networking and hardware concepts to organization-based access and continuous validation is not a one-step process. This challenge is never more real than when dealing with IP addresses. For years, companies on the Internet have built hardened systems based on the idea that only users with certain IP addresses can access certain resources. This implies that IP addresses are tied to identity, which is a kluge and can actually open websites up to attack in some cases. For large companies with many origins and applications that need to be protected in a Zero Trust model, it’s important to be able to support their transition to Zero Trust using mTLS, Access, or Tunnel. To make the transition, some organizations may need dedicated IP addresses.
Today we’re introducing Cloudflare Aegis: dedicated IPs that we use to send you traffic. This allows you to lock down your services and applications at an IP level and build a protected environment that is application-aware, protocol-aware, and even IP-aware. Aegis is available today through Early Access for Enterprise customers, and you can talk to your account team if you want …
We are thrilled to introduce an innovative new approach to secure hosted applications via Cloudflare Access without the need for any installed software or custom code on your application server. But before we dive into how this is possible, let's review why Access previously required installed software or custom code on your application server.
Protecting an application with Access
Traditionally, companies used a Virtual Private Network (VPN) to access a hosted application, where all they had to do was configure an IP allowlist rule for the VPN. However, this is a major security threat because anyone on the VPN can access the application, including unauthorized users or attackers.
We built Cloudflare Access to replace VPNs and provide the option to enforce Zero Trust policies in hosted applications. Access allows you to verify a user's identity before they even reach the application. By acting as a proxy in front of your application's hostname (e.g. app.example.com), Cloudflare enables strong verification techniques such as identity, device posture, hard key MFA, and more, all without having to add SSO or authentication logic directly into your applications.
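One common form of the origin-side "custom code" mentioned above is validating the JWT that Access attaches to each proxied request in the Cf-Access-Jwt-Assertion header. A minimal sketch of that check follows, assuming the jose library and placeholder team-domain and AUD values (this illustrates the traditional approach, not the new mechanism announced here):

```typescript
// Illustrative origin-side check of the JWT that Cloudflare Access attaches to
// proxied requests. Team domain and AUD tag are placeholders for this sketch.
import { createRemoteJWKSet, jwtVerify } from "jose";

const TEAM_DOMAIN = "https://yourteam.cloudflareaccess.com"; // placeholder
const POLICY_AUD = "your-application-aud-tag";               // placeholder

// Access publishes its signing keys at a well-known endpoint on the team domain.
const JWKS = createRemoteJWKSet(new URL(`${TEAM_DOMAIN}/cdn-cgi/access/certs`));

// Returns the verified identity (email claim) or throws if the request did not
// come through Access with a valid, unexpired token.
export async function verifyAccessJwt(req: Request): Promise<string | undefined> {
  const token = req.headers.get("Cf-Access-Jwt-Assertion");
  if (!token) throw new Error("missing Access JWT");
  const { payload } = await jwtVerify(token, JWKS, {
    issuer: TEAM_DOMAIN,
    audience: POLICY_AUD,
  });
  return payload.email as string | undefined;
}
```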
However, since Access enforces at a hostname level, there is still a potential …
Web development teams are tasked with delivering feature-rich applications at lightning speed. To help them, there are thousands of pre-built JavaScript libraries that they can integrate with little effort.
These libraries, however, are not always backed by hardened security measures that ensure the code they provide has not been tampered with by malicious actors, which ultimately increases the risk of an application being compromised.
Starting today, tackling the risk of external JavaScript libraries just got easier. We are adding a new feature to our client-side security solution: Page Shield policies. Using policies, you can now ensure that only allowed and vetted libraries are executed by your application, simply by reviewing a checklist.
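Page Shield policies are managed from the dashboard, but the underlying idea, an allow list of script sources, can be illustrated with a Content-Security-Policy header. The sketch below is purely an illustration of that idea (not Page Shield's implementation), with a placeholder allowed host:

```typescript
// Illustration only: an allow list of script sources expressed as a
// Content-Security-Policy header, added by a simple Worker-style handler.
export default {
  async fetch(request: Request): Promise<Response> {
    const upstream = await fetch(request);                  // pass the request through
    const response = new Response(upstream.body, upstream); // copy so headers are mutable
    response.headers.set(
      "Content-Security-Policy",
      "script-src 'self' https://cdnjs.cloudflare.com"      // scripts only from vetted sources
    );
    return response;
  },
};
```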
Client side libraries
There are more than 4,373 libraries available on cdnjs, a popular JavaScript repository, at the time of writing. These libraries provide access to pre-built functionality to build web applications. The most popular on the platform include React, Vue.js, and Bootstrap; Bootstrap alone, according to W3Techs, is used on more than 20% of all websites.
In addition to library repositories like cdnjs, there are thousands of plugins provided directly by SaaS platforms, including from names such as …
This is an important question, with a simple answer: it depends. And the main thing it depends on is why an organization wants an SD-WAN in the first place. Answering that goes a long way to answering the size question.

The baseline assumption is that the IT department sees a need for the organization to have a private WAN, rather than every site communicating solely over the public internet.

This is not a trivial assumption any more. As little as a decade ago, it was standard to have a private WAN for even two or three locations, since they would most likely be sharing back-end services of some sort from a common data center. Today, no such assumption can be made. Many companies grow to have many sites without needing private connectivity among them, because everything they do is hosted in one or another external cloud. And, as some organizations migrate services out of data centers, they find that they need private WAN links at fewer sites, or only at their data centers.
After figuring out how DHCP relaying works, I decided to test it out in a lab. netlab has no DHCP configuration module (at the moment); the easiest way forward seemed to be custom configuration templates combined with a few extra attributes.