If the datacenter had been taken over by InfiniBand, as was originally intended back in the late 1990s, then PCI-Express peripheral buses and certainly PCI-Express switching – and maybe even Ethernet switching itself – would not have been necessary at all. …
Hosts Brandon and Derick have the honor of interviewing Bill Krause and hearing some fascinating stories about the early days of Silicon Valley, including the origins of HP's first computer division, and how Bill (along with previous podcast guest Bob Metcalfe) took Ethernet from zero to one million ports ahead of their already-ambitious timeline.
Bill is a tech luminary, having served as the CEO and President, and then Board Chairman, of 3Com, growing the business from a VC-backed startup to a publicly traded $1B company with global operations. Prior to 3Com, Bill was the GM of HP's first personal computer division, and grew that business exponentially as well. He's currently a board partner with Andreessen Horowitz as well as Chairman of the Board at Veritas, and he also serves on the boards of CommScope, SmartCar, and Forward Networks. Bill is a noted philanthropist; he and his wife Gay Krause have funded many national and local programs focusing on education, leadership, and ethics.
Tune in and join us to hear Bill's amazing stories, his lessons learned, and his profound advice to young entrepreneurs.
Deploying and operating applications in multiple public clouds is critical to many IT leaders, and networking software can help. Migrating applications to cloud infrastructure requires scale, performance, and, importantly, automation. But achieving all three can be challenging due to limited visibility into that infrastructure and the fact that each IaaS platform has proprietary controls for networking and security, which can make multicloud operations highly manual and therefore time-consuming.
As a result, IT teams can struggle to quickly resolve application performance issues, protect against external attacks, and reduce costs. Their goal should be to combine the agility of IaaS resources with the security, manageability, and control of their physical network.
We are in the midst of numerous foundational technological shifts in communications infrastructure that represent a generational opportunity for consumers, businesses, and providers alike. …
Everyone who’s heard me talk about container networking knows I think it’s a bit of a disaster. This is what you get, though, when someone says “that’s really complex; I can discard the years of experience others have in designing this sort of thing and build something a lot simpler…” The result is usually something that’s more complex. Alex Pollitt joins Tom Ammon and me to discuss container networking, and new options that do container networking right.
The battle for HPC centers and national labs is underway among the leading AI chip startups in the high-end datacenter space (Graphcore, Cerebras, and SambaNova in particular). …
Day Two Cloud podcast co-host Ned Bellavance asks Envoy creator Matt Klein about the tipping point for certain tech. When do you need an API gateway? Egress control? A service mesh? Matt is a “keep it as simple as you can for as long as you can” sort of guy. Why adopt technology that doesn’t […]
Computing power is a vital part of modern life. Should access to that power be more equitably distributed? Is there a role for a public-utility-style cloud that could make computing more cost-effective and accessible to a broader number of constituencies? These are the starting questions for today's episode of Day Two Cloud. Our guest is Dwayne Monroe, a cloud architect, consultant, and author.
Tetrate sponsored this post.
Petr McAllister
Petr is an IT professional with more than 20 years of international experience and a Master’s degree in Computer Science. He is a technologist at Tetrate.
The Istio service mesh comes with its own ingress, but we see customers with requirements to use a non-Istio ingress all the time. Previously, we’ve covered Traefik ingress. With some slight adjustments to the approach we suggested previously, we at Tetrate learned how to implement Traefik as the ingress gateway to your Istio service mesh. This article will show you how.
The flow of traffic is shown in the diagram below. As soon as requests arrive at the service mesh from the Traefik ingress, Istio can apply security, observability, and traffic-steering rules to the request:
Incoming traffic bypasses the Istio sidecar and arrives directly at Traefik, so the requests terminate at the Traefik ingress.
Traefik uses the IngressRoute config to rewrite the “Host” header to match the destination, and forwards the request to the targeted service, which is a several-step process:
Requests exiting the Traefik ingress are redirected to the Istio sidecar.
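A Traefik IngressRoute of the kind described above might look like the following minimal sketch; the hostname, service name, namespace, and port are placeholders, not taken from the original article:

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: example-ingressroute   # hypothetical name
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`app.example.com`)   # hypothetical external hostname
      kind: Rule
      services:
        - name: example-service        # in-mesh service behind the Istio sidecar
          port: 80
          # With passHostHeader disabled, Traefik rewrites the "Host"
          # header to the backend's address before forwarding, which is
          # one way to achieve the header rewrite described above.
          passHostHeader: false
```

The exact rewrite mechanism (a middleware versus `passHostHeader`) depends on your Traefik version and setup; treat this fragment as a starting point rather than a drop-in config.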
In previous posts we wrote about our configuration distribution system Quicksilver and the story of migrating its storage engine to RocksDB. This solution proved to be fast, resilient, and stable. During the migration process, we noticed that Quicksilver memory consumption was unexpectedly high. After our investigation, we found that the root cause was the default memory allocator we were using. Switching the memory allocator reduced the service's memory consumption by almost a factor of three.
Unexpected memory growth
After migrating to RocksDB, the memory used by the application increased significantly. The way memory grew over time also looked suspicious: it was around 15GB immediately after start and then grew steadily for multiple days, until stabilizing at around 30GB. Below, you can see the memory consumption increase after migrating one of our test instances to RocksDB.
We started our investigation with heap profiling, on the assumption that we had a memory leak somewhere, and found that the heap size was almost a third of the RSS value reported by the operating system. So, if our application does not actually use all this memory, that memory is ‘lost’ somewhere between the system and our application, which points to possible problems with the memory allocator.
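One quick way to spot this kind of gap is to compare the kernel's RSS figure with what your heap profiler reports. A minimal Linux sketch of the RSS side (the process ID here is a stand-in; in practice you would use your service's PID):

```shell
# Read the resident set size (RSS) the kernel reports for a process.
# If a heap profiler reports far less than this, the difference is
# likely held by the allocator (fragmentation, retained arenas),
# not by live application data.
pid=$$   # stand-in: the current shell; substitute your service's PID
rss_kb=$(awk '/^VmRSS:/ {print $2}' /proc/"$pid"/status)
echo "RSS: ${rss_kb} kB"
```

Comparing this number against the profiler's heap total over time makes allocator-level growth visible even when the application's own data structures are stable.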
With the predicted growth in the data-center market comes a concurrent need for more staff. According to a report from the Uptime Institute, the number of staff needed to run the world's data centers will grow from around two million in 2019 to nearly 2.3 million by 2025. This estimate covers more than 230 specialist job roles for different types and sizes of data centers, with varying criticality requirements, from design through operation, and across all global regions. Already the industry is bedeviled by staffing shortages: fifty percent of those surveyed by Uptime Institute said they were currently experiencing difficulties finding candidates for open positions, up from 38% in 2018.
A long-time reader sent me a series of questions about the impact of WAN partitioning in the case of an SDN-based network spanning multiple locations, after watching the Architectures part of the Data Center Fabrics webinar. He focused on the specific case of a centralized control plane (read: an equivalent of a stackable switch) with a distributed controller cluster (read: a switch stack spread across multiple locations).
SDN controllers spread across multiple data centers
I have a cron job that renews an SSL certificate from Let's Encrypt, and then restarts the smtpd daemon so that the new certificate is picked up. This all works fine--as proven by both the presence of a new, valid cert on disk, and smtpd successfully restarting--but cron never sends an email with the output of the job. What gives?
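For context, the setup described above might look something like the following crontab sketch; the renewal client, schedule, and restart command are assumptions for illustration, not taken from the original post:

```shell
# Hypothetical crontab entry: renew the certificate nightly, then
# restart smtpd only if renewal succeeded so it picks up the new cert.
# (certbot and systemctl are assumed here; substitute your own
# ACME client and service manager, e.g. rcctl on OpenBSD.)
0 3 * * * certbot renew --quiet && systemctl restart smtpd
```

Note that cron mails a job's output via the local mail system--so a job that restarts the very daemon responsible for delivering that mail is a natural suspect when the report never arrives.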