One of the least loved areas of any data center network is monitoring. This is ironic, because at its core the network has two goals: 1) get packets from A to B, and 2) make sure the packets got from A to B. In the deployments I’ve seen, it is not uncommon for the monitoring budget to be effectively $0, and an organization’s budget generally reflects its priorities. Despite spending thousands, or even hundreds of thousands, of dollars on networking equipment in pursuit of goal #1, there is often little money, thought, or time spent in pursuit of goal #2. In the next several paragraphs I’ll cover some basic data center network monitoring best practices that will work with any budget.
It is not hard to see why monitoring the data center network can be a daunting task. Monitoring your network, just like designing your network, takes a conscious plan of action. Tooling in the monitoring space today is highly fragmented, with more than 100 “best of breed” tools that each accommodate a specific use case. Just evaluating all the tools would be a full-time job. A recent Big Panda Report and their video overview of it (38 mins) Continue reading
MENOG 17 took place at the Crowne Plaza Hotel, Muscat, Oman on 19-20 April 2017 under the patronage of the .om Domain Names Administration and with the cooperation of RIPE NCC, the Internet Society, and OmanTel.
This year marks the 10th anniversary of the Middle East Network Operators Group (MENOG), a community of technical professionals from Internet service providers, telecom operators, mobile operators, content providers, and regulators. The countries represented in MENOG are Bahrain, Iran, Iraq, Jordan, Kuwait, KSA, Lebanon, Oman, Palestine, Qatar, Syria, Turkey, UAE, and Yemen.
I’m sure this has been done before, but you do hear stories from time to time of someone either dropping or raising their DNS TTLs and seeing either a massive difference or none at all.
A lot of pro
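The TTL trade-off the excerpt describes comes down to how long resolvers keep an answer cached before re-querying. A minimal sketch of that behavior, using a hypothetical `TTLCache` class (not any real resolver's API), might look like this:

```python
import time


class TTLCache:
    """Minimal DNS-style cache: answers expire once their TTL elapses."""

    def __init__(self):
        self._store = {}  # name -> (answer, expiry timestamp)

    def put(self, name, answer, ttl, now=None):
        now = time.time() if now is None else now
        self._store[name] = (answer, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(name)
        if entry and entry[1] > now:
            return entry[0]  # cache hit: answer still fresh
        return None          # miss or expired: resolver must re-query


# Record cached at t=1000 with a 300-second TTL (values are illustrative).
cache = TTLCache()
cache.put("example.com", "93.184.216.34", ttl=300, now=1000)

print(cache.get("example.com", now=1200))  # within TTL: cached answer
print(cache.get("example.com", now=1400))  # past TTL: None, re-query needed
```

A lower TTL means more frequent re-queries (faster failover, more DNS load); a higher TTL means the opposite, which is exactly the knob those stories are turning.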
The post Worth Reading: Learning blockchains appeared first on rule 11 reader.
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
More businesses are embarking on data lake initiatives than ever before, yet Gartner predicts 90% of deployed data lakes will be useless through 2018 as they’re overwhelmed with data with no clear use cases. Organizations may see the value of having a single repository to house all enterprise data, but lack the resources, knowledge and processes to ensure the data in the lake is of good quality and actually useful to the business. To truly leverage your organization’s data lake to derive real, actionable insights, there are five best practices to keep in mind:
To read this article in full or to leave a comment, please click here
One of my favorite technology catchphrases is “all technology fails,” but applied to the network that thought becomes a very scary one. Yes, all technology does fail, but you will always do your best to keep the network from being the thing that does. The concept of self-healing networks (detecting an issue and fixing it as quickly as possible) is not a new one, as network monitoring is always at the front of any network engineer's mind. We are simply fortunate, in this day and age, to be able to take advantage of newer tools that provide better solutions. Ansible to the rescue!
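The detect-and-remediate loop at the heart of self-healing is simple in outline. Here is a hedged sketch of that pattern; the error threshold, counter format, and `remediate` action are all hypothetical, and in a real deployment the remediation step would typically trigger an Ansible playbook rather than return a string:

```python
ERROR_THRESHOLD = 100  # hypothetical: max acceptable interface error count


def find_unhealthy(interface_stats):
    """Return the interfaces whose error counters exceed the threshold."""
    return [name for name, errors in interface_stats.items()
            if errors > ERROR_THRESHOLD]


def remediate(interface):
    # Placeholder for the healing action. In practice this is where you
    # would invoke automation (e.g. run an Ansible playbook that bounces
    # the interface); here we simply record what would be done.
    return f"bounced {interface}"


def heal(interface_stats):
    """One pass of the detect-then-fix loop."""
    return [remediate(name) for name in find_unhealthy(interface_stats)]


# eth1's error count is over the threshold, so only it gets remediated.
print(heal({"eth0": 3, "eth1": 250}))
```

Running such a pass on a schedule, with the detection fed by your existing monitoring, is the essence of the self-healing idea the session covers.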
If you are attending the Red Hat Summit, please make sure not to miss the Discovery Zone session entitled “Self-Healing Networks with Ansible” on Thursday, May 4th at 10:15AM.
In this presentation we will cover topics such as:
At the end of this session, Continue reading