Happy birthday, Cisco ACI! To celebrate, Cisco is releasing posts about how ACI is automating the network and eliminating gaps between application owners’ requirements and networking constructs.
If you are very comfortable with math and modeling, Dr. Neil Gunther's Universal Scalability Law (USL) is a powerful way of predicting system performance and whittling down bottlenecks. If not, the USL can be hard to wrap your head around.
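The formula itself is compact: the USL predicts throughput at concurrency N as X(N) = λN / (1 + σ(N−1) + κN(N−1)), where λ is single-worker throughput, σ captures contention (queueing for shared resources), and κ captures coherency or crosstalk cost. Here is a minimal Python sketch of the model; the coefficient values are invented purely for illustration and are not from the book:

```python
# Minimal sketch of the Universal Scalability Law (USL).
# X(N) = lam * N / (1 + sigma*(N-1) + kappa*N*(N-1))
#   lam   - throughput of a single worker (N = 1)
#   sigma - contention (serialization) coefficient
#   kappa - coherency (crosstalk) coefficient
# Coefficient values below are made up for illustration only.
import math

def usl_throughput(n, lam=1000.0, sigma=0.03, kappa=0.0005):
    """Predicted throughput at concurrency n under the USL."""
    return lam * n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

# Concurrency at which throughput peaks: N* = sqrt((1 - sigma) / kappa)
n_peak = int(math.sqrt((1 - 0.03) / 0.0005))  # about 44 here

for n in (1, 8, 32, n_peak, 128):
    print(f"N={n:4d}  X={usl_throughput(n):10.1f} req/s")
```

The κ term is what makes USL curves peak and then turn down: beyond N* = sqrt((1 − σ)/κ), adding concurrency actually reduces total throughput.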
There's a free eBook for that. Performance and scalability expert Baron Schwartz, founder of VividCortex, has written a wonderful exploration of scalability truths using the USL as a lens: Practical Scalablility Analysis with the Universal Scalability Law.
As a sample of what you'll learn, here are some of the key takeaways from the book:
VMware NSX has been around for more than two years now, and in that time software-defined networking and network virtualization have become inextricably integrated into modern data center architecture. A remarkable amount of progress has been made. But the reality is that we’re only at the beginning of this journey.
The transformation of networking from a hardware industry into a software industry is having a profound impact on services, security, and IT organizations around the world, according to VMware’s Chief Technology Strategy Officer for Networking, Guido Appenzeller.
“I’ve never seen growth like what we’ve found with NSX,” he says. “Networking is going through a huge transition.” Continue reading
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
While cloud computing has proven to be beneficial for many organizations, IT departments have been slow to trust the cloud for business-critical Microsoft SQL Server workloads. One of their primary concerns is the availability of their SQL Server, because traditional shared-storage, high-availability clustering configurations are not practical or affordable in the cloud.
Amazon Web Services and Microsoft Azure both offer service level agreements that guarantee 99.95% uptime (fewer than 4.38 hours of downtime per year) for IaaS servers. Both SLAs require deployment across two or more AWS Availability Zones or Azure Fault Domains, respectively. Availability Zones and Fault Domains make it possible to run instances in locations that are physically independent of one another, with separate compute, network, storage, and power for full redundancy. AWS has two or three Availability Zones per region, and Azure offers up to three Fault Domains per “Availability Set.”
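The 4.38-hour figure is simple arithmetic on the SLA percentage. A quick sanity check (a standalone snippet, not from the article):

```python
# Convert an uptime SLA percentage into allowed downtime per year.
sla_pct = 99.95                    # guaranteed percent uptime
hours_per_year = 365 * 24          # 8,760 hours in a non-leap year
downtime_hours = (1 - sla_pct / 100) * hours_per_year
print(f"{sla_pct}% uptime allows {downtime_hours:.2f} hours of downtime per year")
# -> 99.95% uptime allows 4.38 hours of downtime per year
```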
During the recent Open Networking User Group (ONUG) Meeting, there was a lot of discussion around the idea of a Full Stack Engineer. The idea of full stack professionals has been around for a few years now. Seeing this label applied to networking and network professionals seems only natural. But it’s a step in the wrong direction.
Full stack means having knowledge of the many different pieces of a given area. Full stack programmers know all about development, project management, databases, and other aspects of their environment. Likewise, full stack engineers are expected to know about the network, the servers attached to it, and the applications running on top of those servers.
Full stack is a great way to illustrate how specialized things are becoming in the industry. For years we’ve talked about how hard networking can be and how we need to make certain aspects of it easier for beginners to understand. QoS, routing protocols, and even configuration management are critical items that need to be decoded for anyone in the networking team to have a chance of success. But networking isn’t the only area where that complexity resides.
Server teams have their own jargon. Their language Continue reading
What’s interesting about this “product,” produced by Broadcom, is that it is open source. We tend to think software will eat the world, but when something like this comes out in the open source space, it makes me think that if software eats the world, profit is going to take a long nosedive into nothingness. From Broadcom’s perspective this makes sense, of course; any box you buy that has a Broadcom chipset, no matter who wrapped the sheet metal around the chipset, will have some new added capability in terms of understanding the traffic flow through the network. Does this sort of thing take something essential away from the vendors who build their products on Broadcom silicon, however? It seems the possibility is definitely there, but it’s going to take a much deeper dive than what’s provided in the post above to really understand. If these interfaces are exposed simply through Continue reading
I tend to see a lot of phishing emails. The message I received this morning caught my eye. It was fairly well crafted and obviously targeted. After searching the Internet, I found that some GoDaddy customers have received something similar. The scam seems to be making its way around the Internet to website administrators. The most curious thing to me is how someone associated the email address with a Hostmonster account.
[Screenshot: phishing email message]
As can be seen above, the message read:
Your account contains more than 4035 directories and may pose a potential performance risk to the server. Please reduce the number of directories for your account to prevent possible account deactivation.
In order to prevent your account from being locked out we recommend that you create special temp directory.
The link goes to kct67<dot>ru.
Message headers also suggest a Russian origin:

Received: by 10.140.27.139 with SMTP id 11csp1084546qgx;
        Tue, 17 Nov 2015 20:25:39 -0800 (PST)
X-Received: by 10.25.161.211 with SMTP id k202mr1408853lfe.161.1447820739327;
        Tue, 17 Nov 2015 20:25:39 -0800 (PST)
Return-Path: <[email protected]>
Received: from bmx1.z8.ru (bmx1.z8.ru. [80.93.62.39]) by mx.google.com with ESMTPS Continue reading
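If you want to do this kind of header triage yourself, here is a minimal sketch using Python's standard email module (my own illustration, not the author's workflow; the filename suspicious.eml is hypothetical):

```python
# Minimal sketch: pull the Received chain out of a saved raw email.
# Only the Received headers added by servers you trust are reliable;
# anything the sender added below them can be forged.
from email import policy
from email.parser import BytesParser

with open("suspicious.eml", "rb") as f:   # hypothetical saved message
    msg = BytesParser(policy=policy.default).parse(f)

print("Return-Path:", msg.get("Return-Path"))
for hop in msg.get_all("Received", []):
    # Collapse folded whitespace so each hop prints on one line.
    print("Received:", " ".join(hop.split()))
```

Reading the Received chain from the bottom up shows the path the message claims to have taken; here the handoff from bmx1.z8.ru (80.93.62.39) to mx.google.com is what points at a Russian origin.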
Republished from Corero DDoS Blog:
The Internet has a very long history of mechanisms that breathe new life into older technologies, stretching them out so that newer technologies may be delayed or obviated altogether. IPv4 addressing, and the well-known depletion associated with it, is one such area that has seen a plethora of mechanisms employed to give it more shelf life.
In the early 90s, the IETF gave us Classless Inter-Domain Routing (CIDR), which dramatically slowed the growth of global Internet routing tables and delayed the inevitable IPv4 address depletion. Later came DHCP, which helped by allocating addresses for short terms and returning them to the provider's pool after use. In 1996, the IETF was back at it again, creating RFC 1918 private addressing, so that networks could use addresses that didn't come from the global pool. Private address space gave network operators a much larger pool to use internally than globally assigned space would have allowed. But if they wanted to connect to the global Internet, they needed something to translate those addresses, and this is what necessitated the development of Network Address Translation (NAT).
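As a concrete illustration (a sketch of my own, not from the original post): RFC 1918 reserves the ranges 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16, and Python's standard ipaddress module makes it easy to test addresses against them:

```python
# Check whether addresses fall in RFC 1918 private space.
import ipaddress

rfc1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

for addr in ("10.1.2.3", "172.31.0.1", "192.168.1.10", "8.8.8.8"):
    ip = ipaddress.ip_address(addr)
    private = any(ip in net for net in rfc1918)
    print(f"{addr:>14}  {'RFC 1918 private' if private else 'globally routable'}")
```

Addresses in these ranges are exactly the ones that need a NAT device in front of them before they can reach the global Internet.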
NAT Continue reading
Internet on Demand is next, AT&T says at the GEN15 conference.