Amazon, the company that started as an online bookstore, has become one of the most influential companies in the world. Over the last decade, it has expanded its business by stepping into a range of different sectors. Today, Amazon is recognized as one of the leading e-commerce stores, having outshone Walmart and taken its title as America's most valued retailer. From creating a mega e-market to boosting digital streaming and AI, it has positioned itself as one of the most prominent profit-making megacorps, followed by Facebook and Google.
Its speedy delivery system and product credibility are two factors that have won Amazon roughly a hundred million customers worldwide. With advances in technology and the wide availability of online marketplaces, people find it more convenient to order items online than to buy them in person. Amazon has become the most preferred and trusted e-commerce site among the masses.
To have full access to the website, you must, like any other user, create an Amazon account to make your purchases. It is also common practice, after a quick package delivery and blue-ribbon product quality, to leave a five-star rating. Continue reading
The 2020 results of the NetDevOps Survey are out! This was the third time the survey was conducted, and it was targeted at the network automation community. But first, a huge shout out to the team that led this effort again (Damien Garros and Francois Caen). The survey was 100% community-driven, and I thank them for allowing me to be a part of the team and to provide feedback on existing and new questions.
This survey is a good representation of how network operators and network engineers are utilizing automation to get their jobs done, but largely without management buy-in or a proactive automation strategy. This blog is largely my hot take on the results, as seen through the lens of my history at Red Hat as an Ansible Product Manager helping to get network automation as an official commercial use case off the ground. I’m going to compare and contrast the survey questions and results between the most recent NetDevOps survey and the Enterprise Management Associates (EMA) Enterprise Network Automation for 2020 and Beyond results that Red Hat sponsored back in 2019.
Here are the main ideas I gleaned:
Despite being the second-most populated country in Latin America, with significant Internet consumption, by the end of 2019 Mexico only had one established Internet exchange point (IXP) – CITI, in three locations (Mexico City, Querétaro, and Tultitlán). In comparison, Argentina and Brazil have more than 30 points each.
In Mexico’s southeastern region – which has the country’s highest poverty rates and lowest connectivity – there were none. This prompted a committed group of people in the State of Yucatán to set out to create an IXP in 2014.
Their efforts intensified in April 2018, with the signing of the founding act for the Internet Exchange Services Yucatán (IXSY), a nonprofit association to administer the node in Yucatán.
In May 2018, the First National IXP Forum was organized. There, IXSY gained the support of Yucatan’s state government. But in July, that government lost the state elections, putting the project on pause.
Still, the new government didn’t take long to see the project’s relevance, says Carmen Denis Polanco, director of the IXSY. “It is beautiful and valuable that it did not become a political issue, but something that was important for the state. A new team of people was formed that could Continue reading
It seems like a question a child would ask: “Why are things the way they are?” …
The World Has Changed – Why Haven’t Database Designs? was written by Avishai Ish-Shalom at The Next Platform.
Memory analysis plays a key role in identifying sophisticated malware in both user space and kernel space, as modern threats are often file-less, operating without creating a file system artifact.
The most effective approach to the detection of these sophisticated malware components is to install on the protected operating system an agent that continuously monitors the OS memory for signs of compromise. However, this approach has a number of drawbacks. First, the agent introduces a constant overhead in the monitored OS — caused by both the resources used by the agent process (e.g., CPU, memory) and the instrumentation used to capture relevant events (e.g., API hooking). Second, a malware sample can detect the presence of an agent and attempt to either disable the agent or evade detection. Third, depending on how it is deployed, the agent may not have access to specific portions of the user-space and kernel-space memory, and, as a consequence, may miss important evidence of a compromise. Finally, deploying, maintaining, and updating agents on every endpoint can be challenging, especially in heterogeneous deployments where multiple versions of different operating systems and architectures coexist.
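At its simplest, memory monitoring of the kind described above boils down to scanning a memory snapshot for known byte patterns. The sketch below is a hypothetical illustration only (the signature and snapshot are made-up placeholders, not a real implant or agent API):

```python
# Hypothetical signature-scanning sketch: find every occurrence of a known
# byte pattern in a captured memory snapshot, roughly what an in-OS agent
# (or an out-of-band memory-forensics tool) might do at its core.

SIGNATURE = b"\xde\xad\xbe\xef"  # placeholder pattern standing in for a known implant


def find_signature(memory: bytes, signature: bytes = SIGNATURE) -> list[int]:
    """Return every offset at which the signature occurs in the snapshot."""
    offsets, start = [], 0
    while (idx := memory.find(signature, start)) != -1:
        offsets.append(idx)
        start = idx + 1  # allow overlapping matches
    return offsets


# A fake snapshot with the pattern planted at offsets 16 and 28:
snapshot = b"\x00" * 16 + SIGNATURE + b"\x00" * 8 + SIGNATURE
print(find_signature(snapshot))  # [16, 28]
```

Real agents, of course, combine many such signatures with behavioral heuristics, which is exactly what makes them heavyweight and detectable, as the drawbacks above describe.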
A complementary approach to the detection of Continue reading
Today's IPv6 Buzz guest, Doug Montgomery, is manager of Internet and scalable systems research at the National Institute of Standards and Technology (NIST), which has played a critical role in developing IPv6 interoperability standards and testing.
The post IPv6 Buzz 072: NIST And Testing IPv6 Interoperability appeared first on Packet Pushers.
Today we're excited to introduce Page Shield, a client-side security product customers can use to detect attacks in end-user browsers.
Starting in 2015, a hacker group named Magecart stole payment credentials from online stores by infecting third-party dependencies with malicious code. The infected code would be requested by end-user browsers, where it would execute and access user information on the web page. After grabbing the information, the infected code would send it to the hackers, who would resell it or use it to launch additional attacks such as credit card fraud and identity theft.
Since then, other targets of such supply chain attacks have included Ticketmaster, Newegg, British Airways, and more. The British Airways attack stemmed from the compromise of one of their self-hosted JavaScript files, exposing nearly 500,000 customers’ data to hackers. The attack resulted in GDPR fines and the largest class-action privacy suit in UK history. In total, millions of users have been affected by these attacks.
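One standard browser-side defense against this class of script-tampering attack is Subresource Integrity (SRI): the page pins a cryptographic hash of the third-party script, and the browser refuses to execute the file if the served bytes no longer match. As a rough sketch (the script URL and contents below are made-up examples), the integrity value is just a base64-encoded SHA-384 digest:

```python
# Sketch: computing a Subresource Integrity (SRI) hash for a third-party
# script. If a CDN-hosted file is later tampered with, its hash no longer
# matches the pinned value and the browser refuses to run it.
import base64
import hashlib


def sri_sha384(script_bytes: bytes) -> str:
    """Return an SRI integrity value ("sha384-<base64 digest>") for a script."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode()


# Hypothetical script and tag; the URL is illustrative only.
script = b"console.log('checkout');"
tag = (
    f'<script src="https://cdn.example/checkout.js" '
    f'integrity="{sri_sha384(script)}" crossorigin="anonymous"></script>'
)
print(tag)
```

SRI only works when the resource is static and the site operator controls the tag, which is why monitoring approaches that watch what scripts actually do in the browser remain necessary for dynamic third-party code.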
Writing secure code within an organization is challenging enough without having to worry about third-party vendors. Many SaaS platforms serve third-party code to millions of sites, meaning a single compromise could have devastating results. Page Shield helps customers monitor these potential Continue reading
Border Gateway Protocol (BGP) route leaks and hijacks can ruin your day — BGP is insecure by design, and incorrect routing information spreading across the Internet can be incredibly disruptive and dangerous to the normal functioning of customer networks, and the Internet at large. Today, we're excited to announce Route Leak Detection, a new network alerting feature that tells customers when a prefix they own that is onboarded to Cloudflare is being leaked, i.e., advertised by an unauthorized party. Route Leak Detection helps protect your routes on the Internet: it tells you when your traffic is going places it’s not supposed to go, which is an indicator of a possible attack, and reduces time to mitigate leaks by arming you with timely information.
In this blog, we will explain what route leaks are, how Cloudflare Route Leak Detection works, and what we are doing to help protect the Internet from route leaks.
A route leak occurs when a network on the Internet tells the rest of the world to route traffic through their network, when the traffic isn’t supposed to go there normally. A great example of this Continue reading
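The core check behind this kind of leak detection can be reduced to origin validation: compare the AS that originates an announcement against the set of ASes authorized for that prefix. The sketch below is a minimal, hypothetical illustration (the prefix table and AS numbers are invented, and this is not Cloudflare's implementation):

```python
# Minimal origin-validation sketch: flag a BGP announcement whose origin AS
# is not authorized for the announced prefix. Data here is illustrative.

# prefix -> set of ASNs allowed to originate it (hypothetical entries)
AUTHORIZED_ORIGINS = {
    "203.0.113.0/24": {64500},
}


def is_route_leak(prefix: str, origin_asn: int) -> bool:
    """Return True if the announcement's origin AS is unauthorized for the prefix."""
    allowed = AUTHORIZED_ORIGINS.get(prefix)
    if allowed is None:
        return False  # prefix not monitored; nothing to compare against
    return origin_asn not in allowed


print(is_route_leak("203.0.113.0/24", 64501))  # True  (unauthorized origin)
print(is_route_leak("203.0.113.0/24", 64500))  # False (legitimate origin)
```

Production systems also have to handle more-specific announcements, AS-path anomalies, and the timeliness of the authorization data, which is where most of the real engineering effort goes.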
When I was complaining about the speed (or lack thereof) of the Azure orchestration system, someone replied “I tried to do $somethingComplicated on AWS and it also took forever.”
Following the “opinions are great, data is better” mantra (as opposed to “never let facts get in the way of a good story” supposedly practiced by some podcasters), I decided to do a short experiment: create a very similar environment with Azure and AWS.
I took a simple Terraform deployment configuration for AWS and one for Azure. Both included a virtual network, two subnets, a route table, a packet filter, and a VM with a public IP address. Here are the observed times:
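For context, the AWS side of such a deployment looks roughly like the fragment below. This is a hypothetical reconstruction (names, CIDRs, and the AMI ID are placeholders), not the configuration that was actually timed:

```hcl
# Hypothetical minimal AWS variant of the deployment described above.
resource "aws_vpc" "lab" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "a" {
  vpc_id     = aws_vpc.lab.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_subnet" "b" {
  vpc_id     = aws_vpc.lab.id
  cidr_block = "10.0.2.0/24"
}

resource "aws_route_table" "lab" {
  vpc_id = aws_vpc.lab.id
}

# The "packet filter" maps to a security group on AWS.
resource "aws_security_group" "lab" {
  vpc_id = aws_vpc.lab.id
}

resource "aws_instance" "vm" {
  ami                         = "ami-12345678" # placeholder AMI ID
  instance_type               = "t3.micro"
  subnet_id                   = aws_subnet.a.id
  associate_public_ip_address = true
}
```

The Azure equivalent declares the same logical objects (virtual network, subnets, route table, network security group, VM), so any difference in `terraform apply` time comes from the cloud orchestration layer rather than the configuration itself.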