AT&T Clarifies Perplexing, Ambiguous Network Virtualization Goal
AT&T’s ongoing network virtualization effort, specifically the amount of core network...
If we could sum up the near-term future of high performance computing in a single phrase, it would be more of the same and then some. …
HPC In 2020: AI Is No Longer An Experiment was written by Michael Feldman at The Next Platform.
If you're looking for a way to bring IPv6 into your environment, the WLAN may be your best bet. Find out why in the latest episode of IPv6 Buzz with guest Jeffry Handal. Jeffry cut his teeth with an early v6 deployment on the wireless network of Louisiana State University (LSU). This WLAN serves 40,000 users and over 100,000 devices. He shares his experiences and talks about how vendor adoption of v6 has advanced since that deployment.
The post IPv6 Buzz 042: Why Wireless Is A Smart Place To Start With IPv6 appeared first on Packet Pushers.
Security and encryption experts from around the world are calling on the Indian Ministry of Electronics and Information Technology (MeitY) to reconsider proposed amendments to intermediary liability rules that could weaken security and limit the use of strong encryption on the Internet. Coordinated by the Internet Society, nearly thirty computer security and cryptography experts from around the world signed “Open Letter: Concerns with Amendments to India’s Information Technology (Intermediaries Guidelines) Rules under the Information Technology Act.”
MeitY is revising proposed amendments to the Information Technology (Intermediaries Guidelines) Rules. The proposed amendments would require intermediaries, like content platforms, Internet service providers, cybercafés, and others, to abide by strict, onerous requirements in order not to be held liable for the content sent or posted by their users. Freedom from intermediary liability is an important aspect of communications over the Internet. Without it, people cannot build and maintain platforms and services that can easily scale to billions of people.
The letter highlights concerns with these new rules, specifically requirements that intermediaries monitor and filter their users’ content. As these security experts state, “by tying intermediaries’ protection from liability to their ability to monitor communications being sent across their platforms or systems, the amendments would limit Continue reading
Suppose you have a workflow set up in Red Hat Ansible Tower with several steps, and you need another user to view and approve some or all of the nodes in the workflow. Or maybe a job running inside a workflow should be viewed and approved within a specific time limit, or else be canceled automatically. Perhaps it would be useful to see how a job failed before something like a cleanup task gets set off. It is now possible to insert a step between any job template or workflow within that workflow to achieve these objectives.
Table of Contents
A New Feature for Better Oversight and More User Input
How to Add Approval Nodes to Workflows
What Happens When Something Needs Approval?
Approval-Specific Role-Based Access Controls
The Workflow Approval Node feature has been available in Ansible Tower since the release of version 3.6.0 on November 13, 2019. In order to visually compare the additional functionality, examine the before and after examples of a workflow Continue reading
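The excerpt is cut off above, but to make the feature concrete, here is a hedged sketch of creating an approval node programmatically. Tower 3.6+ exposes approval nodes through its REST API; the endpoint below reflects the AWX/Tower v2 API as I understand it, and the host, credentials, and node ID are placeholders rather than anything taken from the post.

```python
# Hedged sketch, not the post's own example: convert an existing workflow
# node into an approval step via the Tower/AWX v2 REST API.
import requests

TOWER_HOST = "https://tower.example.com"   # placeholder Tower instance
AUTH = ("admin", "password")               # placeholder credentials
NODE_ID = 42                               # an existing workflow node ID

response = requests.post(
    f"{TOWER_HOST}/api/v2/workflow_job_template_nodes/{NODE_ID}/create_approval_template/",
    auth=AUTH,
    json={
        "name": "Approve before cleanup",
        "description": "Pause the workflow until a human signs off",
        "timeout": 3600,  # seconds; letting it expire resolves the approval automatically
    },
)
response.raise_for_status()
print(response.json())
```

The equivalent UI flow, if I recall it correctly, is editing the workflow in the visualizer and choosing the approval node type when adding a node.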
It's amazing how heaping layers of complexity (see also: SDN or intent-based whatever) manages to destroy performance faster than Moore's law delivers it. The computer with the lowest keyboard-to-screen latency was (supposedly) the Apple IIe, built in 1983, while modern Linux systems have keyboard-to-screen round-trip times comparable to those of transatlantic links.
No surprise there: Linux has been getting slower with every kernel release, and it takes an enormous effort to fine-tune it (assuming you know what to tune). Keep that in mind the next time someone with a hefty PPT slide deck tells you to build a "provider cloud" with packet forwarding done on Linux VMs. You can make that work, and smart people have made it work, but you might not have the resources to replicate their feat.
The proof-of-concept aims to test multi-access edge computing for the global distribution of...
Financial networks require high speeds and solid security. Here's how SD-branch meets the needs of...
The Falco project joins 14 other Incubating projects as the first and, so far, only security...
“I think it really is time for Congress to think about whether we should do something more...
Anana wanted automated infrastructure to unify its two data centers and provide more agile...
When it comes to connection pooling in the PostgreSQL world, PgBouncer is probably the most popular option. It’s a very simple utility that does exactly one thing – it sits between the database and the clients and speaks the PostgreSQL protocol, emulating a PostgreSQL server. A client connects to PgBouncer with the exact same syntax it would use when connecting directly to PostgreSQL – PgBouncer is essentially invisible.
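To make the "essentially invisible" point concrete, here is a minimal sketch (mine, not the post's) using psycopg2; the host names, database, and credentials are placeholders.

```python
# Minimal sketch: connecting through PgBouncer looks identical to connecting
# to PostgreSQL directly -- only the host and port change.
import psycopg2

# Connecting straight to PostgreSQL...
direct = psycopg2.connect(host="db.example.com", port=5432,
                          dbname="appdb", user="app", password="secret")

# ...and through PgBouncer (conventionally listening on 6432): same API,
# same wire protocol, so the pooler stays invisible to application code.
pooled = psycopg2.connect(host="pgbouncer.example.com", port=6432,
                          dbname="appdb", user="app", password="secret")
```

The only difference between the two calls is the endpoint; everything above the wire protocol, from the driver to the application code, stays the same.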
This was originally published on Perf Planet's 2019 Web Performance Calendar.
QUIC, the new Internet transport protocol designed to accelerate HTTP traffic, is delivered on top of UDP datagrams, to ease deployment and avoid interference from network appliances that drop packets from unknown protocols. This also allows QUIC implementations to live in user-space, so that, for example, browsers will be able to implement new protocol features and ship them to their users without having to wait for operating systems updates.
But while a lot of work has gone into optimizing TCP implementations as much as possible over the years, including building offloading capabilities in both software (like in operating systems) and hardware (like in network interfaces), UDP hasn't received quite as much attention as TCP, which puts QUIC at a disadvantage. In this post we'll look at a few tricks that help mitigate this disadvantage for UDP, and by association QUIC.
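This excerpt doesn't enumerate the tricks, so as one hedged illustration of the genre: Linux offers UDP segmentation offload (GSO), which lets an application hand the kernel one large buffer and have it split into wire-sized datagrams, amortizing the per-packet syscall cost. A minimal sketch, assuming Linux 4.18+ and a placeholder peer address:

```python
# Hedged sketch of UDP generic segmentation offload (GSO) on Linux.
# UDP_SEGMENT is not exposed by Python's socket module, so it is defined
# by hand from <linux/udp.h>.
import socket

UDP_SEGMENT = 103  # setsockopt option number from <linux/udp.h>

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask the kernel to carve each write into 1200-byte datagrams on our behalf
sock.setsockopt(socket.IPPROTO_UDP, UDP_SEGMENT, 1200)
sock.connect(("203.0.113.1", 4433))  # placeholder peer

# One syscall, ~43 wire datagrams: the kernel does the splitting, which is
# where the per-packet overhead savings come from (64 segments max per send)
sock.send(b"\x00" * (1200 * 43))
```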
For the purposes of this blog post, we'll concentrate only on measuring the throughput of QUIC connections, which, while necessary, is not enough to paint an accurate picture of the performance of the QUIC protocol (or its implementations) as a whole.
The client used Continue reading
One of the benefits of the public cloud is that it allows HPC centers to experiment and push the limits of scalability in a way they could never do if they had to requisition, budget, and install machinery on premises. …
Urgent HPC Can Burst Affordably To The Cloud was written by Timothy Prickett Morgan at The Next Platform.
The Open Mobile Evolved Core (OMEC) platform supports connections into the carrier’s existing...
In the lead up to last month’s Internet Engineering Task Force meeting in Singapore, IETF 106, the India Internet Engineering Society (IIESoc) held its third annual Connections conference in Kolkata, India.
This pre-IETF event aims to increase participation in IETF discussions from the Asia-Pacific region, specifically India.
Like the years before it, this edition of Connections had four technology tracks across two days; the themes – IoT, security, routing, and research – were chosen with the audience and location in mind, given Kolkata is a major research hub in India. As such, there was record participation, with a large number of local students attending the event, many of whom were excited to learn about, discuss, and contribute to the work being considered in the IETF.
The Importance of Being Involved
A feature of past Connections events has been the participation of IETF working group chairs and RFC contributors attending en route to the impending IETF conference. This year was no different, and we were grateful to have former IETF chair Fred Baker, who presented the keynote and shared his IETF journey during the meet-and-greet session.
The Continue reading
While performing end-of-year cleanup on lab infrastructure, I discovered my 8-disk Synology array with about 22TB of usable storage was almost out of space. Really? What had I filled all that space with?
After a lot of digging around, I found that I had enabled the Recycle Bin on one or more Shared Folders, but had NOT created a Recycle Bin emptying schedule.
This means that over several years of shoving lots of data through the array, the various Recycle Bins attached to various Shared Folders had loaded up with cruft. I figured this out by running a Storage Analyzer report.
To get my space back, the solution was to empty the Recycle Bin. One way to do that is to edit the properties of a Shared Folder and click “Empty Recycle Bin”. You’ll get a sense of relief as Storage Manager shows available space growing as the Synology removes however many million files you’ve been composting for however long.
However, I like to solve problems permanently. No one has time to manually empty recycle bins on a disk array in a distant rack. Manually. Like a savage. Yuck.
Automating a recycle bin task on a Synology box is Continue reading
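The post is truncated here, but the likely destination is DSM's Task Scheduler, which can run a user-defined script on a schedule (and, in recent DSM releases, I believe also offers a built-in recycle-bin emptying task). As a hedged sketch of the script route, in Python, assuming DSM's usual /volume1/<share>/#recycle layout and an assumed 30-day retention window:

```python
#!/usr/bin/env python3
# Hedged sketch of a cleanup script DSM's Task Scheduler could run.
# The volume path and the 30-day retention are assumptions, not from the post.
import glob
import os
import time

MAX_AGE_SECONDS = 30 * 24 * 3600  # keep 30 days of deleted files
now = time.time()

for recycle in glob.glob("/volume1/*/#recycle"):
    # Walk bottom-up so directories emptied by the purge can be removed too
    for root, dirs, files in os.walk(recycle, topdown=False):
        for name in files:
            path = os.path.join(root, name)
            if now - os.path.getmtime(path) > MAX_AGE_SECONDS:
                os.remove(path)
        for name in dirs:
            path = os.path.join(root, name)
            if not os.listdir(path):  # prune directories left empty
                os.rmdir(path)
```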