Trigger warning for Check Point haters: I’m about to say nice things about Check Point.
Continuing the recent theme of Check Point-related posts, I’d like to give Check Point credit for once. SmartLog is what I always wanted from Tracker/Log Viewer, and they’re not even charging me extra for it. Shocking, I know.
15-20 years ago, Check Point was well ahead of the competition when it came to viewing firewall logs. “Log Viewer” (later renamed “SmartView Tracker”)[1] let you filter logs by source, destination, service, etc., and quickly see what was happening. The GUI worked well enough, and junior admins could learn it quickly.
Most other firewalls only had syslog. That meant that your analysis tools were limited to grep and awk. Powerful, yes, but with a bit of a learning curve. There was also the problem of ‘saving’ a search – you’d end up hunting through your shell history, trying to recreate that 15-stage piped work of art. Splunk wasn’t around then.
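For example, pulling the top sources hitting a deny rule out of a syslog dump might look something like this (a contrived sketch – the log path and field position are made up, and real formats vary by vendor):

grep -i deny /var/log/firewall.log | awk '{print $8}' | sort | uniq -c | sort -rn | head

That assumes the source IP sits in field 8. Bolt a few more stages on, and you can see how those 15-stage monsters came about.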
Tracker has several issues:
I got caught out by Check Point’s “Install On” column recently. Most people don’t need this setting any more, but it’s still there for legacy reasons. Time to re-evaluate.
When you create a firewall policy using Check Point, you define the set of possible installation targets – that is, the firewalls that this policy may be installed on. When you compile & install policy, you can choose from this list of targets, and only this list.
Most organisations will only have one installation target per policy. But sometimes you want to have the same policy on multiple firewalls. This is pretty easy to do, and might make sense if you have many common rules.
But then you say “What if I had 30 common rules, 50 that only applied to firewall A, and another 50 that only applied to firewall B?” That’s where people start using the “Install On” column. This lets you define at a Continue reading
Last year I wrote about the IPv4 Address Transfer Process. Recently I was involved in another IPv4 transfer. I was surprised to see that IPv4 prices have fallen in the last year. I have done some rudimentary analysis of the APNIC transfer statistics to try to figure out why.
APNIC publishes statistics on transfers at ftp.apnic.net/public/transfers/apnic. These text files list all resource transfers that have taken place – the to & from organisation, the resource type, the date, etc. I am very interested in looking at the trends. How many transactions take place each month, and how many addresses are being transferred?
I wrote a simple Python script to do this analysis for me. It retrieves the latest statistics, and converts them into a Google chart:
(If you’re reading this via RSS, and the chart doesn’t display, you may need to click here to see the web version).
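The full script is on GitHub (see below), and the heart of it is a simple monthly aggregation. Here’s a trimmed-down sketch – the file name and field positions are my assumptions, so check the real report layout before reusing it:

# Trimmed-down sketch only. The report file name, the pipe-delimited
# layout, and the field positions are illustrative assumptions.
from collections import defaultdict
import urllib2  # Python 2, as per the era; use urllib.request on Python 3

URL = 'http://ftp.apnic.net/public/transfers/apnic/transfers_latest.txt'  # hypothetical name

monthly_count = defaultdict(int)  # transactions per month
monthly_addrs = defaultdict(int)  # IPv4 addresses moved per month

for line in urllib2.urlopen(URL):
    line = line.strip()
    if not line or line.startswith('#'):
        continue
    fields = line.split('|')
    rtype, count, date = fields[2], fields[4], fields[5]  # assumed positions
    if rtype != 'ipv4':
        continue
    month = date[:7]  # assumes YYYY-MM-DD style dates
    monthly_count[month] += 1
    monthly_addrs[month] += int(count)

for month in sorted(monthly_count):
    print '%s: %d transfers, %d addresses' % (
        month, monthly_count[month], monthly_addrs[month])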
Note this does not do live updates. It is a point-in-time snapshot, generated from the data available when the script runs. If you would like to update the code to do live updates, fork it from Github here. I’d also love to update the script to Continue reading
Recently I’ve been musing on IT Generalists vs Specialists. We used to have more generalist roles, covering all parts of the stack. ITIL then pushed us towards greater specialisation. I believe that we’ve gone back to valuing the Generalist, as the person who can glue components together. Will the pendulum swing again?
When I started working in IT, our roles were more generalist in nature. We did everything. To set up a new app, we racked the servers and switches, installed the OS, configured the network, installed the DB & application, and made it all work.
We weren’t specialists in any one area, but we knew how everything fitted together. So if something broke, we could probably figure it out. If we had to investigate a problem, we could follow it through all layers of the stack. When we found the problem, we had license to fix it.
Sometime around the early-mid 2000s the “ITIL Consultants” moved in. Their talk of structure, processes and SLAs seduced senior management. We couldn’t just have people who Got Shit Done. No, everyone needed to be placed in a box, with formal definitions around what they could & Continue reading
NetBeez presented at Network Field Day 9, where they showed us their solution for distributed network performance monitoring. They gave the delegates a NetBeez agent to take home and test. I’ve run it for the last two months, and I’ve been happy with how it has performed.
The unit was supplied with a US power plug. I was contemplating using an adapter, but I didn’t have any spare power points near where I wanted to install it. Hmmm. Then I realised that the power connection is just a USB port anyway. The Ethernet cable needed to go into my SRX-110, so I wonder if…
Yup! It powers from the USB port on the SRX. Perfect.
The device powered on, and it soon showed up on the NetBeez web dashboard. This is where you can configure your agents, define what tests you want them to run, and see the results.
I added a few simple monitors:
All very straightforward to add the monitors, and pick which agents to run them from. Running the same tests from multiple agents gives you a distributed status Continue reading
I have used the “Solarized” colour scheme on my Mac for several years. This is:
… a sixteen color palette…designed for use with terminal and gui applications
If you spend a lot of time using the Terminal, this makes a huge difference. It gives me the right combination of colours to make sure everything is readable, and reduces eye-strain.
I’ve used it for so long that I’ve forgotten about it. It’s become “normal” for me.
Recently I’ve been forced to use PuTTY on Windows. I’d forgotten how terrible the default colour scheme is, particularly when you’re using VIM, or doing an “ls” on a RHEL system. Check this screenshot:
The default LS_COLORS on a RHEL system, using PuTTY defaults, will display directories in dark blue on a black background. Hopeless. I can’t read those directory names.
I downloaded the “Solarized Dark” registry file from here. Double-click that to merge the registry settings. You’ll then see a new PuTTY Saved session “Solarized Dark”:
Load that session. Save it as the Default Settings if you like. Add any other settings you need – e.g. username, SSH key. Add the hostname/IP, and connect. Now see how Continue reading
I’m fundamentally lazy. That’s why automation appeals: less work for me. Get the machine to do it instead. But automating everything isn’t always the right answer. Sometimes you need to ask yourself: Does this task need to be done at all? Or can I get someone else to do it for me?
Automating tasks carries some overhead. If you’re really unlucky, you’ll end up spending more time on the automation than doing it manually.
So if you can eliminate tasks, you’re in a much better position. Here are a few contrived examples, based around a fictitious email provider:
15-20 years ago we had limited bandwidth, and limited storage. It seemed reasonable to limit the maximum email message size. Otherwise people would send monstrous 2MB attachments. Of course, there were always ‘special’ cases that needed to be able to send enormous 5MB AVI files. So we had special groups of users defined that could send large emails.
Users could put in a request to the Help Desk to get access to send large emails. That would go via some manager, who would of course approve it. Someone would then need to manually update that Continue reading
Greg Ferro recently participated in an “Ask Me Anything” thread on Reddit. In that thread, user “1DumbQuestion” made this comment:
Last, never finished my CCIE because of what I perceive will happen with SDN in the next coming years.
I’ve seen similar comments from others over the last couple of years. This concerns me because it seems that people are saying “There’s too much change going on here, and I don’t know how it will all work out. So I’ll just do nothing.”
Don’t be one of those people.
You should take a hard look at your career, and try to understand where the industry is going. If you think that CCIE study is not the best use of your time, that’s fine. But you should make a conscious choice about that. Crucially, you must decide where else to invest your time and energy.
If you firmly believe that networking will change dramatically over the next few years, then take active steps to prepare yourself. Think about your current skills, and where you have gaps. Maybe you need to learn more about Linux. Maybe it’s configuration management, or Python scripting. Put your time into Continue reading
Recently @BobMcCouch posted a photo of the contents of his bags. He’s got a lot of gear, including a hammer, and a dent-puller. He assures us that it’s for lifting tiles, but I’m not so sure. Sounds to me like he’s worried about a few dings in the supermarket carpark.
It all sounded a bit scary. I want to provide a different perspective, that of someone who tries to minimise what they carry. I don’t want young engineers to think that they have to build up a huge toolbox, and the physical strength to lug it around. You might choose to do that, but it’s not the only path.
My general rules for a laptop bag are that it should be as small as I can get away with, and it should not look too much like a laptop bag. So pretty much anything from Targus is inappropriate.
Today I use the “ Continue reading
Just a quick note about a problem I ran into with adding data groups to an F5 system using tmsh. I wanted to add a string data group containing a list of URIs mapping to other URIs. This was for use in an iRule that will redirect these URIs.
So I thought that this tmsh script would do the trick:
modify ltm data-group redir_uris records add {"/first-uri" { data "/new-uri"}}
modify ltm data-group redir_uris records add {"/second-uri" { data "/new-uri"}}
Every time I tried it, I got this result:
Syntax Error: the "create" command does not accept wildcard configuration identifiers
Hmm. But I don’t have any wildcards. So what’s the problem? I couldn’t figure it out at the time, and ended up having to resort to manually entering the data group via the web interface. A bit slow, but luckily it was only around 20 entries.
Today I found out what was going wrong: SOL12999: “Data group records beginning with a slash character cannot be added using tmsh.”
Description: You cannot add data group records that begin with a slash ( / ) character to data groups using tmsh.
This issue occurs when all of the following conditions are met:
- You Continue reading
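One workaround to try next time – an untested sketch, and I haven’t confirmed it’s what SOL12999 recommends – is to sidestep tmsh’s argument parsing and merge the records in from a file. Save something like this as /var/tmp/redir_dg.txt (exact stanza syntax varies by version):

ltm data-group internal redir_uris {
    records {
        "/first-uri" { data "/new-uri" }
        "/second-uri" { data "/new-uri" }
    }
    type string
}

Then merge it into the running config:

tmsh load sys config merge file /var/tmp/redir_dg.txt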
Check Point firewall upgrades have always been painful. The loss of connection state is a big part of this. Existing connections stop working, and many applications need restarting. It looks like there is a way of minimising this pain on upgrade.
Stateful firewalls record the current ‘state’ of traffic passing through, so they can recognise and allow reply or related traffic. If you have a firewall cluster, they need to synchronise state between the cluster members. This is so that if there is a failover, the new Active node will be aware of all connections currently in flight.
If you have a failover, and the standby member is NOT aware of current connection state, it will drop all currently open sessions. Any packet that isn’t a SYN packet will get dropped, and the applications need to establish new connections. Some applications handle this well – especially those that use many short-lived connections such as HTTP or DNS. But other applications that have long-running connections – e.g. DB connections – may struggle with this. They think the connection is still open, and take a long time to figure out it’s broken. They may eventually recover on their own, or they may Continue reading
HP IMC installation is normally a manual process, with plenty of clickey clickey clickey. This is OK for production systems, as most sites will only have one or maybe two IMC servers. But for my lab, I wanted to automate the install, so I can quickly spin up a new lab system. I have now found an undocumented, unsupported way of doing this.
There are two parts to this – preparing the underlying OS & DB, and installing IMC. I am writing Ansible playbooks to handle the OS + DB setup. That’s working, but it needs a bit of cleanup. Once that’s done, I’ll integrate it with Vagrant. Then I should be able to completely automate the install of a lab IMC system. I will write another post on that once it’s complete.
To install IMC silently, create an “install.cfg” file to define your settings. Then tweak the installation script to call the silent installer rather than the interactive one.
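I’ll save the full details for the follow-up post, but the shape of it is a plain key=value file, plus a silent switch on the install script. Everything in this sketch is a hypothetical placeholder – the real parameter names are undocumented:

# install.cfg – every key name here is a made-up illustration,
# not the actual undocumented parameters
InstallLocation=/opt/iMC
DatabaseType=embedded
AdminPassword=changeme

# then call the installer with a (hypothetical) silent switch
./install.sh -silent /tmp/install.cfg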
The New Zealand ISP market is dominated by Spark, Vodafone & CallPlus/Orcon. A side effect of this is that if one player does the Right Thing™, it really moves the needle. Recently, Spark has done the Right Thing with DNSSEC.
DNSSEC takeup has been low with New Zealand ISPs. The APNIC stats indicated that around 5% of users were using DNS resolvers that had DNSSEC validation capabilities. But in December 2014, that number jumped to ~15%:
It turns out this is because Spark has enabled DNSSEC validation on some of their resolvers. NZRS have done some analysis, and found that Spark turned on 4 new resolvers that do DNSSEC validation:
They’re still running their old resolvers, so right now it’s hit & miss for their customers. But it’s a great start, and presumably they’ll upgrade the remaining systems soon.
So Vodafone, CallPlus, Snap, Trustpower…when are you going to take customer security seriously too? And Spark…how long until DNSSEC is enabled for all your resolvers?
And please, no arguments about “we’re not sure if it will work.” Google has been doing it since March 2013…who do you think processes more DNS requests per day? Google, or your ISP?
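If you want to check whether your own resolver validates, dig makes it a thirty-second test (dnssec-failed.org is Comcast’s deliberately broken test zone):

dig ripe.net +dnssec       # a validating resolver sets the "ad" flag in the reply
dig dnssec-failed.org      # a validating resolver returns SERVFAIL here
dig dnssec-failed.org +cd  # +cd disables validation, so this one succeeds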
IPv6 adoption has been slow. But I think it’s reaching a tipping point. I’m very close to calling 2015 “The year of IPv6.” There are plenty of people who won’t believe me, but the statistics are very interesting. You need to keep a close eye on what the data is saying.
Recently I asked the question “What percentage of Internet traffic needs to be IPv6 for you to consider IPv6 to be mainstream/arrived/the year of IPv6?”
@bobbobob had the best answer for when IPv6 can be considered ‘mainstream’:
@northlandboy When I see criminal minds, or law and order using a poorly faked IPV6 address when they're 'hacking', I'll say it's arrived.
— Rabbit Sultan (@bobbobob) February 21, 2015
But @icemarkom was probably technically correct with this answer:
@northlandboy More than 50%.
— Marko Milivojevic (@icemarkom) February 20, 2015
So how far away is that? It’s tough trying to measure IPv6 adoption. Traffic patterns are region- & user-specific. The services that Chinese users access are different to those that New Zealand business users access. Traffic is often concentrated with a few ISPs and/or a few big services (Google, Facebook, Twitter, etc).
I like to use the Google IPv6 statistics Continue reading
Monitoring needs to move on from traditional fault and performance polling. It should include identifying common misconfigurations and known faults. We’re all using the same technologies, so we’ve all got the same problems. I like the look of Indeni, a new approach to this problem. It uses a form of crowd-sourcing to act as a smart advisor.
We all think we’re precious snowflakes. But we’re not. We use the same technologies, glued together in the same ways. That means we all have the same problems, and make the same misconfigurations.
Vendors frequently publish new bug fixes, KB articles, EOS notices, etc. Some of these apply to products/versions/features we’re using. We struggle to keep up with the volume, and we miss these – so maybe our network is running with a known issue. Striking an unknown bug is bad. Getting caught out by a published issue is worse. Having an outage because we didn’t make sure the routing tables were in sync on our firewall cluster is unforgivable.
Information flow is a two-way problem. The vendors can’t always see how customers deploy their products in the real world. They think they know. They write manuals, they write Continue reading
VeloCloud was the first presenter at Network Field Day 9. They are one of the new breed of SD-WAN vendors. I’m impressed by what they’re doing, and the potential it offers for re-thinking the way we do WAN connectivity. But I think the most interesting part is the increased visibility into how networks are performing.
I won’t go into the details of how it all works – Brandon covers some of it here, and you can look through VeloCloud’s site to understand it more. I want to focus on a few details around data analysis, and information brokerage.
In this video, Kangwarn Chinthammit talks about how VeloCloud is using their devices to monitor Internet quality. Because they’re installed in a wide range of locations, with many different WAN connection types, they’re building up some interesting data.
They’ve been able to do some deeper analysis of the data, and break down quality measurements by location, circuit type, hour, and day. Some of the interesting results include:
Cumulus Networks gave a great presentation at Network Field Day 9. They presented their vision of how they’re working to improve networking. But they were also clear about what they don’t do, and where they will instead enable others.
Many network engineers started out running cables, and doing low-level networking, then built up to designing & running more complex networks. I came at it from a different direction. I first ran Linux systems in 1999. My first professional job was working with HP-UX in 2000, and I later moved into running Check Point firewalls on Nokia IPSO. I was well-used to working with Unix-like systems, and it was completely natural to me to run tcpdump on a network device.
To become an effective network security engineer, I had to learn more about routing & switching. But because I had that *nix background, I was always frustrated by the limited capabilities offered by IOS. “| include” is a poor substitute for grep. Yes, you can do some stuff with TCL, but would you want to? Packet capture was a poor joke until recently.
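A contrived illustration of the difference (swp1 is Cumulus’ switch-port naming; the rest is stock Linux):

ip route | grep '^10\.1\.'    # real grep, not "| include"
tcpdump -i swp1 -c 20 icmp    # packet capture on the switch itself

Nothing exotic – and that’s exactly the point.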
So when I first heard about Cumulus, it made a Continue reading
We all know that “Change is Hard.” But often we, as engineers, focus on the technical aspects of that change. How do I minimise customer impact while upgrading those routers? How can I migrate customer data safely to the new system? But we can forget about the wider implications of what we’re doing. If we do, we may struggle to get our changes implemented, or see poor take-up of new systems.
I was talking to an engineer who had planned a huge configuration management implementation. Everything had been manually configured in the past, but this was hitting scale issues. So he had worked for months on a fully automated process. It was going to be amazing. It would configure everything, across all systems and applications. Standards enforced, app deployments done in a repeatable way, etc. It was going to be a thing of beauty. No-one would ever need to log in to a server again. Total automation.
It was all tested, and was just waiting for approval to put it into production. But for some reason, no-one was willing to give the go-ahead to roll it out. Weeks were dragging by, and things were going Continue reading
You know I’m not the biggest fan of vendor clubs (or influencer marketing programs, call them what you like). But if you’re going to do it, you might as well do it right. Don’t let it just become a ‘free T-shirt club’:
If you're building the community program – ensure it scales so it doesn't end up being a free tshirt club.Connecting and Knowledge xfer=must
— Anthony Burke (@pandom_) February 5, 2015
@pandom is spot-on. The ideal community program should not just be a method to blast out press releases, or give out a few free shirts in the hope of currying favour. A good program manager takes care to select people who are positive about the company, share with the community, and have opinions about where the vendor is going.
That is a valuable resource that should not be wasted. A good program should seek to engage in a two-way dialogue. Not just pushing out info, but seeking feedback on what’s working, and what’s not. Don’t just push out a few early release notices – have honest discussions about roadmaps, plans, etc. Help your members connect with each other – who knows what benefit that might lead to in future?
FWIW, I’m Continue reading
The problem with obtaining certifications is that you need to renew them. CCIE is no different – I first passed the lab in September 2012, and I was overdue for renewing it. I’m pleased to report that I have now done that, and it is now current until September 2016. Here are some of my impressions of the 400-101 exam.
I had planned on using the CCDE written exam to renew my R&S CCIE, and then decide if I would go on to attempt the CCDE practical exam. But it seems that the CCDE exam writers and I just don’t share the same mindset. I tried, but it wasn’t working for me, and I wasn’t making progress. So I went back to R&S for my re-cert.
I originally passed version 4, exam number 350-001. This has been updated to version 5. The written exam is now 400-101. Of course, this doesn’t mean that everything changes. Core L2 & L3 protocols don’t change that much. BGP, OSPF and EIGRP are still BGP, OSPF and EIGRP.
There are some key changes though, such as: