This is a guest post by Videet Parekh, Abelardo Lopez-Lagunas, and Sek Chai at Latent AI.
Edge networks present a significant opportunity for Artificial Intelligence (AI) performance and applicability. AI technologies already make it possible to run compelling applications like object and voice recognition, navigation, and recommendations.
AI at the edge presents a host of benefits. One is scalability—it is simply impractical to send all data to a centralized cloud. In fact, one study predicts that billions of IoT devices will generate 90 zettabytes of data globally by 2025. Another is privacy—many users are reluctant to move their personal data to the cloud, whereas data processed at the edge are more ephemeral.
When AI services are distributed away from centralized data centers and closer to the service edge, it becomes possible to improve overall application speed without moving data unnecessarily. However, there are still challenges in making AI from the deep-cloud run efficiently on edge hardware. Here, we use the term deep-cloud to refer to highly centralized, massively sized data centers. Deploying edge AI services can be hard because AI is both computationally and memory-bandwidth intensive. We need to tune the AI models so the computational latency and bandwidth Continue reading
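The excerpt above mentions tuning models to fit edge latency and bandwidth budgets. As a rough, generic sketch of one such technique (post-training quantization with TensorFlow Lite, not necessarily the authors' own toolchain; the model path and file names are hypothetical):

```python
# Minimal sketch: post-training quantization to shrink a model for edge
# deployment (generic TensorFlow Lite example, not Latent AI's tooling).
import tensorflow as tf

# Hypothetical path to a trained SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("models/my_model")

# Let the converter apply default optimizations (weight quantization),
# trading a small amount of accuracy for lower latency and bandwidth.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
with open("my_model_quant.tflite", "wb") as f:
    f.write(tflite_model)
```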
This podcast introduction was written by Nick Buraglio, the host of today’s podcast.
In the original days of this podcast, there were heavy, deep discussions about this new protocol called “OpenFlow”. Like many of our most creative innovations in the IT field, OpenFlow came from an academic research project that aimed to change the way that we as operators managed, configured, and even thought about networking fundamentals.
For the most part, this project did what it intended, but once the marketing machine realized the flexibility of the technology and its potential to completely change the way we think about vendors, networks, provisioning, and management of networking, it was off to the races.
We all know what happened next.
This week, Trump's opponents misunderstood a Regeneron press release to conclude that the REGN-COV2 treatment (which may have saved his life) was created from stem cells. When that was proven false, his opponents nonetheless deliberately misinterpreted events to conclude there was still an ethical paradox. I've read the scientific papers, and it seems this is an issue that can be understood with basic high-school science, so I thought I'd write up a detailed discussion.
The short answer is this:
It’s no secret that traditional firewalls are ill-suited to securing east-west traffic. They’re static, inflexible, and require hair-pinning traffic around the data center. Traditional firewalls have no understanding of application context, resulting in rigid, static policies, and they don’t scale—so they’re unable to handle the massive workloads that make up modern data center traffic. As a result, many enterprises are forced to selectively secure workloads in the data center, creating gaps and blind spots in an organization’s security posture.
A software-based approach to securing east-west traffic changes the dynamic. Instead of hair-pinning traffic, VMware NSX Service-defined Firewall (SDFW) applies security policies to all workloads inside the data center, regardless of the underlying infrastructure. This provides deep context into every single workload.
Anyone interested in learning how the Service-defined Firewall can help them implement micro-segmentation and network segmentation, replace legacy physical hardware, or meet growing compliance needs and stop the lateral spread of threats should check out the following sessions:
Creating Virtual Security Zones with NSX Firewall Continue reading
Compliance is more than a necessary evil. Sure, it’s complex, expensive, and largely driven by manual processes, but it’s also a business enabler. Without the ability to prove compliance, you wouldn’t be able to sell your products in certain markets or industries. But meeting compliance requirements can’t be cost-prohibitive: if the barriers are too high, it may not make business sense to target certain markets.
The goal, of course, is to meet and prove compliance requirements in the data center in a simple, cost-effective way. Intended to keep customers safe and protect their privacy, new government and industry regulations are becoming more robust, and many require organizations to implement East-West security through micro-segmentation or network segmentation inside the data center. Of course, this is easier said than done. Bandwidth and latency issues caused by hair-pinning traffic between physical appliances inhibit network segmentation and micro-segmentation at scale.
VMware NSX applies a software-based approach to firewalling that delivers the simplicity and scalability necessary to secure East-West traffic. It does this with no blind spots or gaps in coverage— Continue reading
The other guys will have you believe that more is better. Have a problem? Just buy a solution and patch the hole. Security operations too siloed? Just cobble together some integrations and hope that everything works together.
VMware thinks differently. We believe that “integrated” is just another word for “complexity.” And clearly, complexity is the enemy of security.
Integrated security is bolted-on security. An example would be taking a hardware firewall and making it a blade in a data center switch. That’s what the other guys do. It makes it more convenient to deploy, but it doesn’t actually improve security.
Security always performs better—and is easier to operate—when it’s designed in rather than bolted on. At VMware, we call this intrinsic security. Building security in means you can leverage the intrinsic attributes of the infrastructure. We are not trying to take existing security solutions and integrate them. We are re-imagining how security could work.
Enterprises that want to learn how we’ve built security directly into Continue reading
Micro-segmentation is a critical component of Zero Trust. But, historically, micro-segmentation has been fraught with operational challenges and limited by platform capabilities.
Not anymore.
VMware NSX enables a new framework and firewall policy model that allows applications to define access down to the workload level. NSX does this by understanding application topologies and applying the appropriate policy per workload. Creating zones in the data center that separate traffic by application helps stop the lateral spread of threats, keeps development, test, and production environments separate, and meets certain compliance requirements.
VMworld attendees who want to learn more about how to set up micro-segmentation in their data centers should consider the following sessions:
Permit This, Deny That – Design Principles for NSX Distributed Firewall (ISNS2315D)
Micro-segmentation is certainly easier said than done. Although micro-segmentation allows applications to define access down to the component level, operating such an environment can be daunting without structure and guidance. In this session, you’ll learn how to develop a Continue reading
In yesterday’s blog about improvements to the end-to-end Docker developer experience, I was thrilled to share how we are integrating security into image development, and to announce the launch of vulnerability scanning for images pushed to the Hub. This release is one step in our collaboration with our partner Snyk, where we are integrating their security testing technology into the Docker platform. Today, I want to expand on our announcements and show you how to get started with image scanning with Snyk.
In this blog, I will show you why scanning Hub images is important, how to configure the Hub pages to trigger Snyk vulnerability scans, and how to run your scans and understand the results. I will also provide suggestions for incorporating vulnerability scanning into your development workflows so that you include regular security checkpoints at each step of your application deployment.
Software vulnerability scanners have been around for a while to detect the vulnerabilities that hackers use for software exploitation. Traditionally, security teams ran scanners after developers thought their work was done, frequently sending code back to developers to fix known vulnerabilities. In today’s “shift-left” paradigm, scanning is applied earlier during the development and CI cycles Continue reading
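As a hedged sketch of one such checkpoint, a CI step could fail the build whenever the Snyk-powered `docker scan` command reports issues in a freshly built image. The image name below is hypothetical, and this assumes the `docker scan` plugin is available on the build host:

```python
# Hypothetical CI checkpoint: gate the pipeline on the result of the
# Snyk-powered `docker scan` command for the image we just built.
import subprocess
import sys

IMAGE = "myorg/myapp:latest"  # hypothetical image name


def scan(image: str) -> int:
    # `docker scan` exits with a non-zero status when vulnerabilities are
    # found, so its return code can gate the build stage directly.
    result = subprocess.run(["docker", "scan", image])
    return result.returncode


if __name__ == "__main__":
    sys.exit(scan(IMAGE))
```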
Whether you have automated different domains within your business or are just getting started, creating a roadmap to automation that can be passed between teams and understood at different levels is critical to any automation strategy.
We’ve brought back the IT Decision Maker track at AnsibleFest this year after its debut in 2019, featuring sessions that help uplevel the conversation about automation, create consensus between teams and get automation goals accomplished faster.
There are a variety of sessions in the IT Decision Maker track. A few focus on specific customer use cases and how those customers adopted and implemented Ansible. These sessions are great companions to our customer keynotes, including those from CarMax and PRA Health Sciences, which will dive into their Ansible implementations at a technical level. This track aims to cover the many constituents of automation within a business and how to bring the right teams together to extend your automation to these stakeholders.
Newcomers to AnsibleFest will get a lot out of this track, as many of the sessions are aimed at those with a beginner’s level knowledge of Ansible Automation Platform and its hosted services. Those Continue reading
The COVID-19 pandemic reminds us of the historic transition brought about by the Internet. It has a real place in our lives today and tomorrow. Celebrate, pray, play, study, work, express yourself … these verbs have been conjugated thousands of times everywhere thanks to the Internet. In Haiti, many suffer from the glaring inequality between Internet access in rural and urban areas. It is clear that tackling these problems comes down to building a safe path toward decentralizing Internet infrastructure here.
The mission of the Internet Society Haiti Chapter (ISOC Haiti) is to promote, on Haitian territory and for the benefit of all, the conditions and tools conducive to the development of an information and knowledge society – one respectful of Haitian culture and values. Since 1804, our nation has raised its voice for freedom and equality so that every person may live free and in dignity, having banished Black slavery from our land. Our motto, “unity is strength,” reminds us that together we can achieve unimaginable things to change this nation. ISOC Haiti is aware of the challenges and believes it is time for a sustainable plan of action – not just speeches.
Poor-quality and expensive Internet access Continue reading
Today, the Gerstner era of International Business Machines is over, and the Krishna era is truly beginning, as Big Blue is spinning out the systems outsourcing and hosting business that gave it an annuity-like revenue stream – and something of an even keel – in some rough IT infrastructure waters for over two decades. …
IBM Jettisons Legacy Services To Focus On Hybrid Cloud was written by Timothy Prickett Morgan at The Next Platform.
In other posts on this site, I’ve talked about both infrastructure-as-code (see my posts on Terraform or my posts on Pulumi) and, somewhat separately, Cluster API (see my posts on Cluster API). And while I’ve discussed the idea of using existing AWS infrastructure with Cluster API, in this post I wanted to think through how these two technologies play together and provide some considerations for using them together.
I’ll focus here on AWS as the cloud provider/platform, but many of these considerations would also apply—in concept, at least—to other providers/platforms.
In no particular order, here are some considerations for using infrastructure-as-code and Cluster API (CAPI)—specifically, the Cluster API Provider for AWS (CAPA)—together:
One such consideration involves the additionalSecurityGroups functionality, as I described in this blog post. Continue reading
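As a hedged illustration of where infrastructure-as-code and CAPA meet (resource names are hypothetical, and this assumes the Pulumi AWS provider is installed and configured), a security group managed outside of Cluster API could be created like this, with its ID then referenced from a CAPA machine template's additionalSecurityGroups field:

```python
# Minimal Pulumi sketch: create a security group outside of Cluster API,
# then export its ID so CAPA manifests can reference it via
# additionalSecurityGroups. Names are illustrative only.
import pulumi
import pulumi_aws as aws

# Security group managed by the IaC tool, e.g. for organization-wide rules
# that every CAPI-managed node must carry.
extra_sg = aws.ec2.SecurityGroup(
    "capi-extra-node-sg",
    description="Extra rules attached to CAPI-managed nodes",
    ingress=[aws.ec2.SecurityGroupIngressArgs(
        protocol="tcp",
        from_port=22,
        to_port=22,
        cidr_blocks=["10.0.0.0/8"],  # hypothetical management network
    )],
)

# The exported ID can be copied (or templated) into the AWSMachineTemplate.
pulumi.export("additional_security_group_id", extra_sg.id)
```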
If you don’t normally read IPJ, you should. Melchior and I have an article up in the latest edition on link state in DC fabrics.