Author Archives: lindsay

Everything Has a Cost

Everything comes at a cost: steak dinners & pre-sales engineering have to be paid for somehow. That should be obvious to most. Feature requests also come at a cost, both upfront and ongoing. Those ongoing costs are not always understood.

It’s easy to look at vendor gross margins and assume that there is plenty of fat. But remember that gross margin is just revenue minus cost of goods sold. It’s not profit. It doesn’t include sales & marketing costs, or R&D costs. Those costs affect net income, which is ‘real’ income. Companies need to recoup those costs somehow if they want to make money. Gross margin alone doesn’t pay the bills.

Four-Legged SalesDroids, and Steak Dinners

A “four-legged sales call” is when two people show up for a sales call. The usual pattern is an Account Manager for the ‘relationship’ stuff, with a Sales Engineer acting as truth police. These calls can be very useful: they’re a good way to talk about current business challenges, discuss product roadmaps, and provide feedback on what’s working and what’s not. The Sales Engineer can offer implementation advice, and maybe help with some configuration issues.

Often a sales call includes lunch or dinner. Breaking bread together Continue reading

Recruiters: Must Try Harder

Right now, it’s an employee’s market in the Bay Area. Technology firms are growing, and they’re always trying to hire more people. So I regularly receive emails from recruiters. This is not to brag, it’s just the way things are right now, based upon the economy, my background, my current location, and my age. I’m lucky.

Some of these approaches are outstanding. Well-crafted, tailored to the person and the role. Some are pathetically bad, and I don’t know why they try.

The Right Approach

A good approach goes like this:

Hi Lindsay!

I’m a recruiter at $CoolCompany. We’re looking for great people to work on our teams doing $InterestingThingOne and $InterestingThingTwo! We’re hoping to do This, That and the Other Thing! Check out our projects on GitHub <here> and <here>.

We think this would be a good match because of your background working on $RecentProject in $PreviousIndustries.

We were thinking about someone to do these sorts of things: X, Y, Z. But mainly it’s about finding the right people, and we’re fine with re-working the role a bit to suit.

Let us know what you think

Regards, Good Recruiter

The Wrong Approach

Hi

We have a job opening for a Continue reading

The Difference Between Proper Devs and Me

I spend a lot of time poking around with code, and I can figure out most integration challenges, and simple code fixes. But I do not call myself a developer. I know, we can argue about what constitutes a developer, but I don’t really want to get into that. I’d just like to highlight something that showed the difference between the futzing about that I do, and the way a senior developer thinks about problems.

StackStorm Documentation Process

We use reStructuredText for StackStorm documentation. It’s a markup language where everything is written in plain text. It gets parsed into HTML (and potentially other formats). Special punctuation marks and indentation tell the parser how to render the HTML - e.g. inserting links, highlighting text, bullet points, etc.
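As a brief, hedged illustration (the content below is generic reStructuredText, not taken from the actual st2docs source), the markup looks like this:

```rst
Section Title
=============

Some *emphasised* text, some ``inline code``, and a
`link to StackStorm <https://stackstorm.com>`_.

- A bullet point
- Another bullet point, at the same indentation

.. code-block:: bash

   st2 --version
```

Indentation and blank lines are significant, which is exactly why a syntax mistake can break the HTML build.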

When I want to update our documentation, I create a branch on our GitHub st2docs repo. I make my changes, then create a Pull Request against the master branch. When I do this, it triggers our CircleCI checks. These checks include attempting to build the documentation, and failing if there are any parsing errors. If I’ve made a mistake in my syntax, it gets caught at this point, and Continue reading

RPM Post-Upgrade Scripts

Something different today: here’s something I learnt about RPM package management and post-upgrade scripts. It turns out that they don’t work the way I thought they did: post-uninstall commands are called on both uninstall and upgrade. For my own reference as much as anyone else’s, here’s some info about it, and how to deal with it.

RPM Package Management

RPM is a Linux package management system. It is a way of distributing and managing applications installed on Linux systems. Packages get distributed as .rpm files. These contain the application binaries, configuration files, and application metadata such as dependencies. They can also contain scripts to run pre- and post- installations, upgrades and removal.

Using package management systems is a vast improvement over distributing source code, or requiring users to manually copy files around and run scripts themselves.

There is some effort required to create the spec files used to build packages. But once that has been set up, it is easy to create new versions of packages and distribute them to users. System administrators can easily check which version they’re running, check what new versions are available, and upgrade.
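The upgrade-vs-uninstall behaviour can be sketched with a scriptlet guard. RPM passes the number of package instances that will remain after the operation as `$1`, so a `%postun` scriptlet can tell an upgrade apart from a final uninstall. The echo messages below are illustrative, not StackStorm’s actual scriptlets:

```shell
# %postun scriptlet sketch. RPM invokes the scriptlet with $1 set to the
# number of package instances remaining after the operation:
#   $1 == 0  -> final uninstall (no version left)
#   $1 >= 1  -> upgrade (the new version is still installed)
postun() {
    if [ "$1" -eq 0 ]; then
        echo "final uninstall: remove state"
    else
        echo "upgrade: keep state"
    fi
}
```

In a real spec file this body would sit directly under the `%postun` tag; wrapping it in a function here just makes the argument handling explicit.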

We use RPMs to distribute StackStorm packages for RHEL/CentOS systems. Similarly, we distribute Continue reading

War Stories: Always Check Your Inputs

The extremely irregular War Stories series returns, with an anecdote from 15 years ago, investigating a problem with a web app that only seemed to crash when one particular person used it. Ultimately a simple problem, but it took me a while to track it down. I blame Perl.

ISPY With my Little Eye

“ispy” was our custom-built system that archived SMS logs from all our SMSCs, aggregating them to one server for analysis. Message contents were kept for a short period, with CDRs stored for longer (i.e. details of sending and receiving numbers, and timestamps, but no content).

The system had a web interface that support staff could use to investigate customer reports of SMS issues. They could enter source and/or destination MSISDNs, and see when messages were sent, and potentially contents. Access to contents was restricted, and was typically only used for things like abuse investigations.

This system worked well, usually.

Except when it didn’t.

Every few weeks, we’d get reports that L2 support couldn’t access the system. We’d login, see that one process was using up 99% CPU, kill it, and it would be OK for a while. Normally the system was I/O bound, so we Continue reading
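The lesson in the title can be sketched in a few lines of shell: reject anything that is not a plausible MSISDN before handing it to the backend. The original tool was Perl; the pattern below (an optional leading `+` followed by 7-15 digits, roughly the E.164 length limits) and the function names are assumptions for illustration:

```shell
# Always check your inputs: accept only an optional leading '+' followed
# by 7-15 digits before using the value anywhere.
is_valid_msisdn() {
    printf '%s' "$1" | grep -Eq '^\+?[0-9]{7,15}$'
}

# Illustrative gate in front of a log query (the query itself is elided):
lookup_sms() {
    if is_valid_msisdn "$1"; then
        echo "querying logs for $1"
    else
        echo "rejected: not a valid MSISDN" >&2
        return 1
    fi
}
```

A cheap check like this at the front door would have saved a lot of CPU-burning debugging later.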

SREcon, DevOpsDays and Seattle vs Silicon Valley

I am the Product Manager for StackStorm. This gives me the opportunity to attend several industry events. This year I attended SREcon in San Francisco, and devopsdays Seattle. I found both events interesting, but also found them more different than I expected.

SREcon Americas

This year SREcon Americas was held in San Francisco, a nice walk along the Embarcadero from where I live. This is bliss compared to my regular daily tour of the Californian outdoor antique railway museum, aka Caltrain.

According to the organizers, SREcon is:

…a gathering of engineers who care deeply about site reliability, systems engineering, and working with complex distributed systems at scale

That was pretty much true to form. Two things stood out to me:

  • The number of smart people, working on interesting problems
  • The number of companies aggressively hiring in this space.

I had many interesting conversations at SREcon. We had a booth, so would briefly start describing what StackStorm is, but would very quickly move past that. Conversations often went “Oh yeah, we’ve built something along those lines in-house, because there was nothing on the market back when we needed it. But I wouldn’t do it again. How did you solve <insert knotty Continue reading

Savvius Insight and the use of Elastic

Last week Savvius announced upgraded versions of its Insight network visibility appliances. These have the usual performance and capacity increases you’d expect, and fill a nice gap in the market.

But the bit that was most interesting to me was the use of an on-board Elastic stack, with pre-built Kibana dashboards for visualizing network data, e.g.:

Savvius Insight Kibana Dashboard

Historically the only way we could realistically create these sorts of dashboards and systems was using Splunk. I’m a big fan of Splunk, but it has a problem: Cost. Especially if you’re trying to analyze large volumes of network data. You might be able to make Splunk pricing work for application data, but network data volumes are often just too large.

Savvius has previously included a Splunk forwarder, to make it easier to get data from their systems into Splunk. But Elastic has reached the point where Splunk is no longer needed. It’s viable for companies like Savvius to ship with a built-in Elastic stack setup.

There’s nothing stopping people centralizing the data either. You can modify the setup on the Insight appliance to send data to a central Elastic setup, and you can copy the Kibana dashboards, and create your own Continue reading

IPv6 Trends, SixXS Sunset and Project Planning

Native IPv6 availability continues to increase, leading to the sunset of SixXS services. It also looks like we avoid starting major IPv6 rollouts around Christmas/New Year, instead going into production from April onwards.

SixXS Sunset

In March 2017, the SixXS team announced that they are closing down all services in June 2017:

SixXS will be sunset in H1 2017. All services will be turned down on 2017-06-06, after which the SixXS project will be retired. Users will no longer be able to use their IPv6 tunnels or subnets after this date, and are required to obtain IPv6 connectivity elsewhere, primarily with their Internet service provider.

SixXS has provided a free IPv6 tunnel broker service for years, allowing people to get ‘native’ IPv6 connectivity even when their ISP didn’t offer it. A useful service in the early days of IPv6, when ISPs were dragging the chain.

But it is a Good Thing that it is now closing down: the mission has been achieved, and people no longer require tunnel broker services. IPv6 is now widely available in many countries, and not just from niche ISPs. Mainstream ISPs such as Comcast in Continue reading

Website Migration Complete

I have completed migrating my website to GitHub Pages. URLs and RSS feed location should remain the same.

The only issue I’m aware of at the moment is with Disqus. I moved my Wordpress comments to Disqus prior to the switchover, but it looks like the comments are not showing up here. Hopefully will sort that out soon.

Let me know if you see any other issues.

CCIE Renewed Again – Exam 400-101 v5.1

It came around again: CCIE renewal. Last time I renewed, I wasn’t sure if I should do it again. But I gave in, passed the CCIE R&S Written Exam, and moved one step closer to Emeritus. Turns out it wasn’t that bad, and I should not have put it off for so long.

Renewal Cycle

Cisco certifications below Expert level have a 3-year renewal cycle. You can renew your CCNA or CCNP certifications at any time by sitting an exam at the same level. Your 3-year cycle restarts from the day you pass that exam.

CCIE is a little different. A CCIE certification remains valid for two years from your lab date. You can sit any CCIE-level written exam to renew your CCIE certification. At that point your validity date gets extended for another two years - note that it is another two years based upon your lab date, not the date you passed your most recent re-cert exam.

If you don’t pass a written exam during the two-year period, your status goes to “Suspended.” You then have another 12 months to pass the exam, or you completely lose your CCIE status.

My renewal date was last Continue reading

Formatting Matters

Using proper formatting can make it much easier to read code and log samples. Yet so many people don’t bother putting proper formatting around blocks of text. Take some time to learn how to format text in common applications and forums, and make things easier for those trying to help you.

What’s easier to read?

This?

version: ‘2.0’

examples.mistral-yaql-st2kv-user-scope:
vars:
polo: unspecified
tasks:
task1:
action: std.noop
publish:
polo: < st2kv(‘marco’) %>
on-complete:
- fail: < $.polo != polo %>

Or this?

version: '2.0'

examples.mistral-yaql-st2kv-user-scope:
    vars:
        polo: unspecified
    tasks:
        task1:
            action: std.noop
            publish:
                polo: <% st2kv('marco') %>
            on-complete:
                - fail: <% $.polo != polo %>

Which one is easier to read? Which one lets you parse key information faster? Which one clearly shows file formatting and indentation? Obvious, right?

Yet far too often, I see people paste unformatted text into Slack, GitHub comments, and web forums. They dump huge blocks of unformatted, difficult-to-read code and logs. Even after repeated prompts to use proper formatting, they just dump big blocks of text.

The good thing is that it’s not that Continue reading

News at Last: It’s Extreme

We have news at last: Extreme Networks is acquiring Brocade’s Data Center Networking business. This includes the SLX, VDX and MLXe routing and switching product lines, Network Visibility and Analytics products, and most importantly, my team: StackStorm.

Extreme Networks has been around a long time - they were founded in 1996, the same year as Foundry, which was acquired by Brocade in 2008, and became my business unit. They’ve had ups and downs over the years, but business is going well right now. Their share price is up, and they have been on an acquisition spree recently, acquiring Zebra Wireless, and 3 weeks ago announcing their intention to acquire Avaya Networking.

This gives them all the pieces to provide end-to-end IP networking solutions, and gives them scale to compete.

The deal is expected to close 60 days after Broadcom completes its acquisition of Brocade, which is scheduled to happen by July 30. Until then we will continue to operate as separate businesses. We don’t know exactly what it will mean for my team, but given that network automation was explicitly mentioned in the investor call, we should find a good home.

The legal nature of the company means that it Continue reading