During the recent Open Networking User Group (ONUG) Meeting, there was a lot of discussion around the idea of a Full Stack Engineer. The idea of full stack professionals has been around for a few years now. Seeing this label applied to networking and network professionals seems only natural. But it’s a step in the wrong direction.
Full stack means having knowledge of the many different pieces of a given area. Full stack programmers know all about development, project management, databases, and other aspects of their environment. Likewise, full stack engineers are expected to know about the network, the servers attached to it, and the applications running on top of those servers.
Full stack is a great way to illustrate how specialized things are becoming in the industry. For years we’ve talked about how hard networking can be and how we need to make certain aspects of it easier for beginners to understand. QoS, routing protocols, and even configuration management are critical items that need to be decoded for anyone in the networking team to have a chance of success. But networking isn’t the only area where that complexity resides.
Server teams have their own jargon. Their language Continue reading
What’s interesting about this “product,” produced by Broadcom, is that it is open source. We tend to think software will eat the world, but when something like this comes out in the open source space, it makes me think that if software does eat the world, profit is going to take a long nosedive into nothingness. From Broadcom’s perspective this makes sense, of course; any box you buy that has a Broadcom chipset, no matter who wrapped the sheet metal around it, will have some new capability for understanding the traffic flowing through the network. Does this sort of thing take something essential away from the vendors who build their products on Broadcom chipsets, though? The possibility is definitely there, but it’s going to take a much deeper dive than the post above provides to really understand. If these interfaces are exposed simply through Continue reading
I tend to see a lot of phishing emails, but the message I received this morning caught my eye: it was fairly well crafted and obviously targeted. A quick search turned up some GoDaddy customers who have received something similar, so this seems to be making its way around the Internet to website administrators. The most curious thing to me is how someone associated this email address with a Hostmonster account.
Phishing Email Message
As can be seen above, the message read:
Your account contains more than 4035 directories and may pose a potential performance risk to the server. Please reduce the number of directories for your account to prevent possible account deactivation.
In order to prevent your account from being locked out we recommend that you create special temp directory.
The link goes to kct67<dot>ru.
Message headers also suggest a Russian origin:
Received: by 10.140.27.139 with SMTP id 11csp1084546qgx;
        Tue, 17 Nov 2015 20:25:39 -0800 (PST)
X-Received: by 10.25.161.211 with SMTP id k202mr1408853lfe.161.1447820739327;
        Tue, 17 Nov 2015 20:25:39 -0800 (PST)
Return-Path: <[email protected]>
Received: from bmx1.z8.ru (bmx1.z8.ru. [80.93.62.39]) by mx.google.com with ESMTPS Continue reading
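To run the same check on a message of your own, the Received chain can be walked with Python's standard email package. A minimal sketch, assuming the raw message has been saved to a file (the filename here is a placeholder):

```python
# A minimal sketch using Python's standard email package.
# "suspicious.eml" is a placeholder filename, not a file from the post.
from email import message_from_string

with open("suspicious.eml") as f:
    msg = message_from_string(f.read())

# Each relay prepends its own Received header, so the last one listed is
# the hop closest to the sender -- the place to look for the true origin.
for hop in msg.get_all("Received", []):
    print(hop.split(";")[0].strip())
```

Here the earliest hop names bmx1.z8.ru [80.93.62.39], hence the Russian-origin suspicion.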
Republished from Corero DDoS Blog:
The Internet has a very long history of mechanisms that breathe new life into older technologies, stretching them out so that newer technologies can be delayed or obviated altogether. IPv4 addressing, and the well-known depletion associated with it, is one such area that has seen a plethora of mechanisms employed to give it more shelf life.
In the early 1990s, the IETF gave us Classless Inter-Domain Routing (CIDR), which dramatically slowed the growth of global Internet routing tables and delayed the inevitable IPv4 address depletion. Later came DHCP, another protocol that assisted via short-term allocation of addresses, which were given back to the provider's pool after use. In 1996, the IETF was back at it again, creating RFC 1918 private addressing, so that networks could use private addresses that didn't come from the global pool. Utilizing private address space gave network operators a much larger pool to use internally than would otherwise have been available with globally assigned address space -- but if they wanted to connect to the global Internet, they needed something to translate those addresses. This is what necessitated the development of Network Address Translation (NAT).
NAT Continue reading
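To make the RFC 1918 distinction above concrete, here is a minimal sketch using Python's standard ipaddress module; the sample addresses are arbitrary:

```python
# A minimal sketch of the RFC 1918 / global split, using only the
# standard library. Sample addresses are arbitrary.
import ipaddress

# The three blocks RFC 1918 reserved for private internal use.
RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def needs_nat(addr: str) -> bool:
    """True if addr falls in RFC 1918 space and must be translated
    (NATed) before its traffic can appear on the global Internet."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)

for addr in ("10.1.2.3", "172.16.0.1", "192.168.1.10", "8.8.8.8"):
    verdict = "private, NAT required" if needs_nat(addr) else "outside RFC 1918 space"
    print(f"{addr:>15} -> {verdict}")
```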
Internet on Demand is next, AT&T says at the GEN15 conference.
One of my readers read the Ars Technica article on ads communicating with other devices via ultrasound and wondered whether something similar could be done for IP.
Not surprisingly, someone already did it. A quick Google search found this tutorial, which explains how to run an IP stack over GNU Radio (at speeds last experienced with dial-up modems 30 years ago).
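The underlying trick is as old as the dial-up modem: map bits onto tones and play them out. The tutorial drives this through GNU Radio; the toy sketch below only illustrates the modulation step in Python/NumPy, with Bell 103-style tone frequencies and an assumed bit rate and sample rate:

```python
# A toy 2-FSK modulator -- an illustration of the idea, not the GNU Radio
# tutorial's implementation. Frequencies follow the old Bell 103 originate
# tones; bit rate and sample rate are assumptions.
import numpy as np

SAMPLE_RATE = 48_000             # samples/second, a typical sound-card rate
BIT_RATE = 300                   # bits/second, dial-up-era speed
F_MARK, F_SPACE = 1_270, 1_070   # Hz: tone for a 1 bit, tone for a 0 bit

def fsk_modulate(data: bytes) -> np.ndarray:
    """Return a float32 audio buffer encoding data as 2-FSK."""
    samples_per_bit = SAMPLE_RATE // BIT_RATE
    t = np.arange(samples_per_bit) / SAMPLE_RATE
    chunks = []
    for byte in data:
        for i in range(8):                       # LSB first
            freq = F_MARK if (byte >> i) & 1 else F_SPACE
            chunks.append(np.sin(2 * np.pi * freq * t))
    return np.concatenate(chunks).astype(np.float32)

# Feeding it the first bytes of a toy IPv4 header shows why throughput is
# dial-up class: a handful of bytes already costs a noticeable slice of audio.
audio = fsk_modulate(b"\x45\x00\x00\x1c")
print(f"{len(audio) / SAMPLE_RATE:.2f} s of audio for 4 bytes")
```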
In Playing in the Lab: DMVPN and Per-Tunnel QoS we looked at DMVPN per-tunnel QoS: how to configure it… how the vendor-private extensions in RFC 2332 are used in the NHRP registration request… and how to see,... Read More ›
The post Fun in the Lab: Troubleshooting DMVPN Per-Tunnel QoS appeared first on Networking with FISH.
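For readers who want the shape of the configuration before clicking through: on Cisco IOS, per-tunnel QoS hangs off NHRP groups. A minimal sketch with hypothetical names (GRP-10MB, PM-SPOKE-10MB), not the exact configuration from the post:

```
! Hub: tie an NHRP group to a shaping policy. Each spoke that registers
! with this group gets its own per-tunnel instance of the shaper.
policy-map PM-SPOKE-10MB
 class class-default
  shape average 10000000
!
interface Tunnel0
 ip nhrp map group GRP-10MB service-policy output PM-SPOKE-10MB

! Spoke: advertise the group name in its NHRP registration request.
interface Tunnel0
 ip nhrp group GRP-10MB
```

The group name rides to the hub inside the NHRP registration, which is where the vendor-private extensions of RFC 2332 mentioned above come into play.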
Another notch for Elliott Management, as Citrix vows to refocus on application delivery.
CEO Napolitano says his company is defining its customer base as 'cloud architects.'