On ISO standardization of blockchains

So ISO, the primary international standards organization, is seeking to standardize blockchain technologies. On the surface, this seems a reasonable idea, creating a common standard that everyone can interoperate with.

But it can be a silly idea in practice. It should not simply be assumed that this is a good thing to do.

The value of official standards

You don't need the official imprimatur of a government committee for something to be a "standard". The Internet itself is a prime example of that.

In the 1980s, the ISO and the IETF (Internet Engineering Task Force) pursued competing standards for creating a world-wide "internet". The IETF was an informal group of technologists that had essentially no official standing.

The ISO version of the Internet failed. Their process was to bring multiple stakeholders from business, government, and universities together in committees to debate competing interests. The result was something so horrible that it could never work in practice.

The IETF succeeded. It consisted of engineers just building things. Rather than officially "standardized", these things were "described", so that others knew enough to build their own version that interoperated. Once lots of different people built interoperating versions of something, then it became a Continue reading

Announcement: IPS code

After 20 years, IBM is killing off the BlackICE code I created in April 1998. So it's time that I rewrite it.

BlackICE was the first "inline" intrusion-detection system, a.k.a. an "intrusion prevention system" or IPS. ISS purchased my company in 2001 and replaced their RealSecure engine with it, and later renamed it Proventia. Then IBM purchased ISS in 2006. Now, they are formally canceling the project and moving customers onto Cisco's products, which are based on Snort.

So now is a good time to write a replacement. The reason is that BlackICE worked fundamentally differently from Snort, using protocol analysis rather than pattern matching. In this way, it worked more like Bro than Snort. The biggest benefit of protocol analysis is speed, making it many times faster than Snort. The second benefit is better detection ability, as I describe in this post on Heartbleed.
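
To make the distinction concrete, here is a rough sketch (not BlackICE, Snort, or Bro code) of the two approaches applied to the Heartbleed case. A pattern matcher hunts for a known byte signature, while a protocol analyzer parses the TLS heartbeat header and flags the protocol violation itself: a claimed payload length larger than the bytes actually present.

```python
# Minimal illustration of signature matching vs. protocol analysis on a
# malicious TLS heartbeat (Heartbleed). Not production detection logic.
import re
import struct

# Pattern-matching approach: look for a byte signature of the known exploit.
HEARTBLEED_SIGNATURE = re.compile(rb"\x18\x03[\x00-\x03]..\x01", re.DOTALL)

def signature_detect(packet: bytes) -> bool:
    return HEARTBLEED_SIGNATURE.search(packet) is not None

# Protocol-analysis approach: parse the record and heartbeat headers, then
# flag the violation itself (claimed payload length > bytes actually carried).
def protocol_detect(packet: bytes) -> bool:
    if len(packet) < 8 or packet[0] != 0x18:          # not a TLS heartbeat record
        return False
    record_len = struct.unpack("!H", packet[3:5])[0]   # TLS record length
    hb_type = packet[5]                                # 1 = heartbeat request
    claimed = struct.unpack("!H", packet[6:8])[0]      # claimed payload length
    actual = record_len - 3                            # bytes really present
    return hb_type == 1 and claimed > actual

# Malicious heartbeat: record carries 3 bytes but claims a 16384-byte payload.
evil = bytes.fromhex("1803020003014000")
print(signature_detect(evil), protocol_detect(evil))   # True True
```

The protocol-analysis version catches any heartbeat that over-claims its length, not just packets that happen to match a known exploit's byte pattern.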

So my plan is to create a new project. I'll be checking the starter bits into GitHub a couple of weeks from now. I need to figure out a new name for the project, so I don't have to rip off a name from William Gibson like I did last time :).

Some notes:

Securing Bitcoins with TREZOR

TREZOR is a hardware wallet for securely storing crypto assets such as Bitcoin, Ethereum, and Litecoin. Protection mechanisms like a mnemonic recovery seed, PIN, and encryption passphrase safeguard your assets (private keys) by requiring your physical interaction in order to make transactions. For those crypto newbies, I think it’s easiest to describe the TREZOR functionality […]
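
As a rough illustration of how the recovery seed and passphrase fit together (this is the standard BIP-39 derivation, not TREZOR's firmware code), the wallet seed is PBKDF2-HMAC-SHA512 over the mnemonic sentence, salted with "mnemonic" plus the passphrase:

```python
# Sketch of BIP-39 seed derivation using only the standard library.
# The demo mnemonic is from the published BIP-39 test vectors; never use a
# mnemonic that has appeared anywhere in public for real funds.
import hashlib
import unicodedata

def bip39_seed(mnemonic: str, passphrase: str = "") -> bytes:
    """Derive the 64-byte wallet seed from a mnemonic sentence and passphrase."""
    mnemonic_bytes = unicodedata.normalize("NFKD", mnemonic).encode("utf-8")
    salt = ("mnemonic" + unicodedata.normalize("NFKD", passphrase)).encode("utf-8")
    return hashlib.pbkdf2_hmac("sha512", mnemonic_bytes, salt, 2048, dklen=64)

demo_words = "legal winner thank year wave sausage worth useful legal winner thank yellow"
# A different passphrase yields a completely different seed (and different keys),
# which is why the passphrase is often described as a "25th word".
print(bip39_seed(demo_words).hex()[:16])
print(bip39_seed(demo_words, "TREZOR").hex()[:16])
```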


App Highlight: Hardenize

Hardenize is a comprehensive security tool that continuously monitors the security and configuration of your domain name, email, and website. Ivan Ristić, the author of Hardenize, gave a demo of his app at our Cloudflare London HQ.
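
This is not how Hardenize works internally, but as a minimal sketch of the kind of check such a service automates, here are a few lines of Python that fetch a site and report which common security headers it sends (the URL is a placeholder):

```python
# Tiny demo of a security-header check; real services inspect far more
# (DNS, TLS configuration, email authentication, CT logs, and so on).
from urllib.request import urlopen

SECURITY_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def check_headers(url: str) -> dict:
    """Return {header: value-or-None} for a handful of security headers."""
    with urlopen(url) as response:
        headers = response.headers
    return {name: headers.get(name) for name in SECURITY_HEADERS}

if __name__ == "__main__":
    for name, value in check_headers("https://example.com").items():
        print(f"{name}: {value or 'missing'}")
```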



Do you know how secure your site is? Run your own website through Hardenize to view a report.



Interested in sharing a demo of your app at a meetup? We can help coordinate. Drop a line to [email protected].

Broken packets: IP fragmentation is flawed

Unlike the public telephone network, the internet has a packet-switched design. But just how big can these packets be?

CC BY 2.0 image by ajmexico

This is an old question and the IPv4 RFCs answer it pretty clearly. The idea was to split the problem into two separate concerns:

  • What is the maximum packet size that can be handled by operating systems on both ends?

  • What is the maximum permitted datagram size that can be safely pushed through the physical connections between the hosts?

When a packet is too big for a physical link, an intermediate router might chop it into multiple smaller datagrams in order to make it fit. This process is called "forward" IP fragmentation, and the smaller datagrams are called IP fragments¹.
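
As a back-of-the-envelope sketch (illustrative numbers, not any particular router's code), here is how a datagram gets carved up when it hits a link with a smaller MTU: each fragment carries at most the MTU minus the 20-byte IP header of payload, rounded down to a multiple of 8, and every fragment except the last sets the "more fragments" flag.

```python
# Rough model of forward IP fragmentation. In the real IP header the offset
# field is expressed in 8-byte units; this sketch reports byte offsets.
IP_HEADER = 20  # bytes, assuming no IP options

def fragment(total_payload: int, mtu: int):
    """Return (offset_bytes, payload_bytes, more_fragments) for each fragment."""
    max_payload = (mtu - IP_HEADER) // 8 * 8  # round down to an 8-byte boundary
    fragments, offset = [], 0
    while offset < total_payload:
        size = min(max_payload, total_payload - offset)
        more = offset + size < total_payload
        fragments.append((offset, size, more))
        offset += size
    return fragments

# A 4000-byte payload hitting a 1500-byte MTU link becomes three fragments:
# (0, 1480, True), (1480, 1480, True), (2960, 1040, False)
print(fragment(4000, 1500))
```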

Image by Geoff Huston, reproduced with permission

The IPv4 specification defines the minimal requirements. From the RFC791:

Every internet destination must be able to receive a datagram
of 576 octets either in one piece or in fragments to
be reassembled. [...]

Every internet module must be able to forward a datagram of 68
octets without further fragmentation. [...]

The first value - Continue reading

Got my number!

After a week of waiting (why did this take so long? it wasn't a particularly pleasant week), I finally got my number.

Brand new JNCIE-DC #31 !!!

The main note about the lab – time management is the most important thing on the exam. Don't rush to the keyboard; read and understand all the tasks and their interdependencies. Have a plan for the order of tasks – not all tasks can be completed in the order in which they are written. Don't be afraid to skip a task if it is taking a long time.

I am quite pleased with the level of my preparation for the lab – there were no unexpected or incomprehensible tasks. My general feeling about the JNCIE-DC lab – it is an interesting, pretty complex, but fair exam. There are lots of tasks on various themes, and I think all themes from the blueprint are covered in the lab in some way.

As the proctor told me, the main difficulty of this exam is that it's something new, and people are afraid of the new and unexpected. I want to tell you – don't be afraid! If you're interested in learning the Juniper way of building Data Center networks, and also want to earn one more pretty Continue reading

Stuff The Internet Says On Scalability For August 18th, 2017

    Sorry about missing last week, but my birthday won out over working: 

     

    Ouch! @john_overholt: My actual life is now a science exhibit about the primitive conditions of the past.

    If you like this sort of Stuff then please support me on Patreon.

     

  • 1PB: SSD in 1U chassis; 90%: savings using EC2 Spot for containers; 16: forms of inertia; $2.1B: Alibaba’s profit; 22.6B: app downloads in Q2; 25%: Google generated internet traffic; 20 by 20 micrometers: quantum random number generators; 16: lectures on Convolutional Neural Networks for Visual Recognition; 25,000: digitized gramophone records; 280%: increase in IoT attacks; 6.5%: world's GDP goes to subsidizing fossil fuel; 832 TB: ZFS on Linux; $250,000: weekly take from breaking slot machines; 30: galactic message exchanges using artificial megastructures in 100,000 years;

  • Quotable Quotes:
    • @chris__martin: ALIENS: we bring you a gift of reliable computing technol--
      HUMANS: oh no we have that already but JS is easier to hire for
    • @rakyll: "You woman, you like intern." I interned on F-16's flight computer. Even my internship was 100x more legit than any job you will have.
    • @CodeWisdom Continue reading

Automating Documentation

Tedium is the enemy of productivity. The fastest way to ensure a task doesn't get done is to make it long, boring, and somewhat complicated. People who feel that a task is tedious or repetitive are the ones most likely to marginalize it. And I think I speak for the entire industry when I say that there is no task more tedious and boring than documentation. So how can we fix it?
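
One way to take the sting out of it, sketched here with NAPALM purely as an illustration, is to generate the documentation from the network itself rather than typing it in: pull facts from each device and render them into a table. Hostnames and credentials below are placeholders.

```python
# Hedged sketch: build a Markdown inventory table from live device facts.
from napalm import get_network_driver

DEVICES = [("ios", "core-sw01.example.net"), ("ios", "core-sw02.example.net")]

def document(devices, username, password):
    rows = ["| Hostname | Model | OS Version | Serial |",
            "| --- | --- | --- | --- |"]
    for platform, host in devices:
        driver = get_network_driver(platform)
        device = driver(hostname=host, username=username, password=password)
        device.open()
        facts = device.get_facts()   # vendor-neutral facts dictionary
        device.close()
        rows.append("| {hostname} | {model} | {os_version} | {serial_number} |"
                    .format(**facts))
    return "\n".join(rows)

print(document(DEVICES, "admin", "secret"))
```

Run something like this on a schedule and the documentation can never drift far from what is actually deployed.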

Tell Me What You Did

I’m not a huge fan of documentation. When I decide on a plan of action, I rarely write it down step-by-step unless I’m trying to train someone. Even then, it looks more like notes with keywords than a narrative to follow. It’s a habit born of years of firefighting in networks and calls to “do it faster”. The essential items of a task are refined and reduced until all that remains is the work and none of the ancillary items, like documentation.

Based on my previous life as a network engineer, I can honestly say that I’m not alone in this either. My old company made lots of money doing network discovery engagements. Sometimes these came because the Continue reading