AT&T’s ongoing network virtualization effort, specifically the amount of core network...
If you're looking for a way to bring IPv6 into your environment, the WLAN may be your best bet. Find out why in the latest episode of IPv6 Buzz with guest Jeffry Handal. Jeffry cut his teeth with an early v6 deployment on the wireless network of Louisiana State University (LSU). This WLAN serves 40,000 users and over 100,000 devices. He shares his experiences and talks about how vendor adoption of v6 has advanced since that deployment.
The post IPv6 Buzz 042: Why Wireless Is A Smart Place To Start With IPv6 appeared first on Packet Pushers.
Security and encryption experts from around the world are calling on the Indian Ministry of Electronics and Information Technology (MeitY) to reconsider proposed amendments to intermediary liability rules that could weaken security and limit the use of strong encryption on the Internet. Coordinated by the Internet Society, nearly thirty computer security and cryptography experts signed “Open Letter: Concerns with Amendments to India’s Information Technology (Intermediaries Guidelines) Rules under the Information Technology Act.”
MeitY is revising proposed amendments to the Information Technology (Intermediaries Guidelines) Rules. The proposed amendments would require intermediaries, like content platforms, Internet service providers, cybercafés, and others, to abide by strict, onerous requirements in order not to be held liable for the content sent or posted by their users. Protection from liability for their users' content is an important aspect of communications over the Internet. Without it, people cannot build and maintain platforms and services that can easily scale to billions of people.
The letter highlights concerns with these new rules, specifically requirements that intermediaries monitor and filter their users’ content. As these security experts state, “by tying intermediaries’ protection from liability to their ability to monitor communications being sent across their platforms or systems, the amendments would limit Continue reading
It's amazing how heaping layers of complexity (see also: SDN or intent-based whatever) manages to destroy performance faster than Moore's law can deliver it. The computer with the lowest keyboard-to-screen latency was (supposedly) the Apple IIe from 1983, while a modern Linux system can have a keyboard-to-screen round-trip time that rivals a transatlantic link.
No surprise there: Linux has been getting slower with every kernel release, and it takes an enormous effort to fine-tune it (assuming you know what to tune). Keep that in mind the next time someone with a hefty PPT slide deck tells you to build a "provider cloud" with packet forwarding done on Linux VMs. You can make that work, and smart people have made it work, but you might not have the resources to replicate their feat.
The proof-of-concept aims to test multi-access edge computing for the global distribution of...
Financial networks require high speeds and solid security. Here's how SD-branch meets the needs of...
The Falco project joins 14 other Incubating projects as the first and, so far, only security...
“I think it really is time for Congress to think about whether we should do something more...
Anana wanted automated infrastructure to unify its two data centers and provide more agile...
This was originally published on Perf Planet's 2019 Web Performance Calendar.
QUIC, the new Internet transport protocol designed to accelerate HTTP traffic, is delivered on top of UDP datagrams, to ease deployment and avoid interference from network appliances that drop packets from unknown protocols. This also allows QUIC implementations to live in user-space, so that, for example, browsers will be able to implement new protocol features and ship them to their users without having to wait for operating system updates.
But while a lot of work has gone into optimizing TCP implementations as much as possible over the years, including building offloading capabilities in both software (like in operating systems) and hardware (like in network interfaces), UDP hasn't received quite as much attention as TCP, which puts QUIC at a disadvantage. In this post we'll look at a few tricks that help mitigate this disadvantage for UDP, and by association QUIC.
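One such trick on Linux is UDP Generic Segmentation Offload (GSO): the application hands the kernel a single large buffer, and the kernel (or, with hardware support, the NIC) splits it into individual datagrams much later in the stack, amortizing the per-packet syscall cost. Here is a minimal sketch, assuming a Linux kernel with UDP_SEGMENT support (4.18 or later); the packet size and destination are illustrative:

```python
import socket

# UDP_SEGMENT is not exported by Python's socket module on every version,
# so define it numerically from <linux/udp.h>.
UDP_SEGMENT = 103

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Ask the kernel to split every write into 1200-byte datagrams, a common
# QUIC packet size that stays under typical path MTUs.
sock.setsockopt(socket.IPPROTO_UDP, UDP_SEGMENT, 1200)

# One syscall now carries ten packets' worth of data; the kernel segments
# it into ten separate datagrams on the way out.
payload = b"\x00" * (1200 * 10)
sock.sendto(payload, ("192.0.2.1", 4433))  # TEST-NET-1 address, illustrative
```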
For the purposes of this blog post, we'll concentrate only on measuring the throughput of QUIC connections, which, while necessary, is not enough to paint an accurate picture of the performance of the QUIC protocol (or its implementations) as a whole.
The client used Continue reading
The Open Mobile Evolved Core (OMEC) platform supports connections into the carrier’s existing...
In the lead up to last month’s Internet Engineering Task Force meeting in Singapore, IETF 106, the India Internet Engineering Society (IIESoc) held its third annual Connections conference in Kolkata, India.
This pre-IETF event aims to increase participation in IETF discussions from the Asia-Pacific region, specifically India.
Like the years before it, this edition of Connections had four technology tracks across two days. The themes – IoT, security, routing, and research – were chosen with the audience and location in mind, given that Kolkata is a major research hub in India. Participation set a record, with a large number of local students attending the event, many of whom were excited to learn about, discuss, and contribute to the work being considered in the IETF.
The Importance of Being Involved
A feature of past Connections events has been the participation of IETF working group chairs and RFC contributors attending en route to the upcoming IETF meeting. This year was no different, and we were grateful to have former IETF chair Fred Baker, who presented the keynote and shared his IETF journey during the meet-and-greet session.
The Continue reading
While performing end-of-year cleanup on lab infrastructure, I discovered my 8-disk Synology array with about 22TB of usable storage was almost out of space. Really? What had I filled all that space with?
After a lot of digging around, I found that I had enabled the Recycle Bin on one or more Shared Folders, but had NOT created a Recycle Bin emptying schedule.
This means that over several years of shoving lots of data through the array, the various Recycle Bins attached to various Shared Folders had loaded up with cruft. I figured this out by running a Storage Analyzer report.
To get my space back, the solution was to empty the Recycle Bin. One way to do that is to edit the properties of a Shared Folder and click “Empty Recycle Bin”. You’ll get a sense of relief as Storage Manager shows available space growing as the Synology removes however many million files you’ve been composting for however long.
However, I like to solve problems permanently. No one has time to manually empty recycle bins on a disk array in a distant rack. Manually. Like a savage. Yuck.
Automating a recycle bin task on a Synology box is Continue reading
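As a hypothetical illustration of what such a scheduled task could do, here's a Python sketch that purges anything older than 30 days from each shared folder's hidden #recycle directory. The /volume1 path and the 30-day cutoff are assumptions, and this is not Synology's built-in Recycle Bin task:

```python
import os
import shutil
import time

VOLUME = "/volume1"       # default Synology volume root (assumption)
MAX_AGE = 30 * 24 * 3600  # purge anything older than 30 days

now = time.time()
for share in os.listdir(VOLUME):
    recycle = os.path.join(VOLUME, share, "#recycle")
    if not os.path.isdir(recycle):
        continue  # this share has no Recycle Bin enabled
    for entry in os.scandir(recycle):
        # Remove entries whose last modification predates the cutoff.
        if now - entry.stat().st_mtime > MAX_AGE:
            if entry.is_dir(follow_symlinks=False):
                shutil.rmtree(entry.path, ignore_errors=True)
            else:
                os.unlink(entry.path)
```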
On today's Day Two Cloud podcast, Rob Hirschfeld joins us to examine the challenges of working in edge environments. Many of those challenges involve the infrastructure itself: how to provision, configure, and operate equipment in remote locations, how to ensure logical and physical security, and how to manage it all remotely.
The post Day Two Cloud 030: The Gnarly Challenges Of Edge Computing appeared first on Packet Pushers.
It’s been a while but we’re back and it feels great. In this episode I talk about the status of the show, where the heck I’ve been, and where we are headed. I also share my personal story on burnout and some of the lessons I learned along the way.
In the episode I mention the fantastic Denise Fishburne and a video she did on burnout that helped me realize what I was dealing with. That video is embedded below.
Outro Music:
"Danger Storm" by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0 License
http://creativecommons.org/licenses/by/3.0/
The post We’re Back appeared first on Network Collective.
This article was originally published as part of Perf Planet's 2019 Web Performance Calendar.
Have you ever wanted to quickly test a new performance idea, or see if the latest performance wisdom is beneficial to your site? As web performance appears to be a stochastic process, it is really important to be able to iterate quickly and review the effects of different experiments. The challenge is being able to arbitrarily change requests and responses without the overhead of setting up another internet-facing server. This can be straightforward to implement by combining two of my favourite technologies: WebPageTest and Cloudflare Workers. Pat Meenan sums this up with the following slide from a recent presentation on getting the most out of WebPageTest:
So what is Cloudflare Workers and why is it ideally suited to easy prototyping of optimizations?
From the documentation:
Cloudflare Workers provides a lightweight JavaScript execution environment that allows developers to augment existing applications or create entirely new ones without configuring or maintaining infrastructure. A Cloudflare Worker is a programmable proxy which brings the simplicity and flexibility of the Service Workers event-based fetch API from the browser to the edge. This allows a worker to Continue reading
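Since a Worker is essentially a proxy you can program, a performance experiment becomes a few lines of request/response rewriting. Workers themselves are written in JavaScript against the fetch API; as a loose local stand-in for that pattern, here is a minimal Python sketch of a response-rewriting proxy, where the origin URL and the header tweak are hypothetical:

```python
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

ORIGIN = "https://example.com"  # placeholder origin for the experiment

class RewritingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Fetch the same path from the origin...
        with urllib.request.urlopen(ORIGIN + self.path) as upstream:
            body = upstream.read()
            content_type = upstream.headers.get("Content-Type", "text/html")
        # ...then rewrite the response before handing it to the browser,
        # e.g. to test the effect of an aggressive caching policy.
        self.send_response(200)
        self.send_header("Content-Type", content_type)
        self.send_header("Cache-Control", "max-age=31536000")  # tweak under test
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), RewritingProxy).serve_forever()
```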