The hardware makes the software possible but disappears quickly from view.
The post Response: Your Software Needs Hardware to Deliver appeared first on EtherealMind.
My “What Is Layer-2 and Why Do You Need It?” blog post generated numerous replies, including this one:
Pretend you are a device receiving a stream of bits. After you receive some inter-frame spacing bits, whatever comes next is the 2nd layer; whether that is Ethernet, native IP, CLNS/CLNP, whatever.
Not exactly. IP (or CLNS or CLNP) is always a layer-3 protocol regardless of where in the frame it happens to sit, and some layer-2 protocols have no header at all (apart from inter-frame spacing and a start-of-frame indicator).
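To make the distinction concrete, here is a minimal Python sketch (not from the original post; the frame bytes are made up for illustration). In Ethernet II, whatever follows the framing is the layer-2 header, and its EtherType field tells you which layer-3 protocol the payload carries. IP stays layer-3 no matter where in the frame it appears:

```python
import struct

# Hypothetical raw Ethernet II frame: dst MAC, src MAC, EtherType, payload start.
frame = (
    b"\xff\xff\xff\xff\xff\xff"  # destination MAC (broadcast)
    b"\x00\x11\x22\x33\x44\x55"  # source MAC
    b"\x08\x00"                  # EtherType 0x0800 = IPv4
    b"\x45\x00\x00\x14"          # first bytes of an IPv4 header (version/IHL, ...)
)

def classify(frame: bytes) -> str:
    """Return the layer-3 protocol named by an Ethernet II frame's EtherType."""
    (ethertype,) = struct.unpack("!H", frame[12:14])
    return {0x0800: "IPv4", 0x86DD: "IPv6", 0x8100: "802.1Q tag"}.get(
        ethertype, hex(ethertype)
    )

print(classify(frame))  # IPv4 -- a layer-3 protocol, even though it directly follows the framing
```

The point of the sketch: the bits after the inter-frame gap are the layer-2 header, and the layer-3 protocol is identified by that header's demultiplexing field, not by its position in the bit stream.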
By: ASERT Research Team
On March 31st, Arbor’s Security Engineering & Response Team (ASERT) published a detailed threat brief on the Neverquest malware for Arbor customers. Along with thousands of IOCs (indicators of compromise), the brief details Neverquest’s current inner workings and describes some of the reversing techniques ASERT uses to unravel and monitor this stealthy, quickly evolving malware. Applying this research at scale to malware and data acquired by our global ATLAS initiative allows us to develop targeted defenses and security context that enable customers to mitigate advanced threats and enhance their security posture over time [1].
This blog post provides excerpts from the Neverquest threat brief along with some new data that was not available at the time the brief was released to customers. In doing so, it also highlights the results of ASERT research activities that feed Arbor products.
Historical Threat Context and Prior Research
Originally, a malware family known as Ursniff was used to build newer malware called Gozi. After some success and a period of inactivity, Gozi was revitalized as Gozi Prinimalka, which has evolved into the modern Vawtrak/Neverquest (referred to as ‘Neverquest’ herein). Foundational threat analysis work has been performed for years on …
Today Puppet Labs announced that Cumulus Networks has joined its Puppet Supported Program. We’re very excited about this and, if you’re implementing a software-defined data center, you should be excited too.
Because Cumulus Linux is Linux, our customers can use the same tools they know and love for managing Linux servers to manage their networks. The joint integration work we’ve done makes it easier than ever for anyone who wants to automate their data center to extend their change management procedures across both servers and switches, unifying data center and network infrastructure under a single dashboard.
Beyond the streamlining of management consoles, this integration brings a host of business benefits to any organization. For example:
When I joined CloudFlare about 18 months ago, we had just started to build out our new Data Platform. At that point, the log processing and analytics pipeline built in the early days of the company had reached its limits. This was due to the rapidly increasing log volume from our Edge Platform where we’ve had to deal with traffic growth in excess of 400% annually.
Our log processing pipeline started out like most everybody else’s: compressed log files shipped to a central location for aggregation by a motley collection of Perl scripts and C++ programs with a single PostgreSQL instance to store the aggregated data. Since then, CloudFlare has grown to serve millions of requests per second for millions of sites. Apart from the hundreds of terabytes of log data that has to be aggregated every day, we also face some unique challenges in providing detailed analytics for each of the millions of sites on CloudFlare.
For the next iteration of our Customer Analytics application, we wanted to get something up and running quickly, try out Kafka, write the aggregation application in Go, and see what could be done to scale out our trusty go-to database, PostgreSQL, from a …
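The core of a per-site rollup like the one described above can be sketched in a few lines. This is an illustrative Python sketch, not CloudFlare's implementation (their aggregator is written in Go and consumes from Kafka); the log-line format shown here is an assumption made up for the example:

```python
from collections import Counter

# Hypothetical log lines: "site_id status bytes". In the real pipeline these
# would arrive from Kafka; here they are an in-memory list for illustration.
log_lines = [
    "example.com 200 512",
    "example.com 404 128",
    "blog.example.net 200 2048",
]

def aggregate(lines):
    """Count requests and sum response bytes per site -- the essence of a
    per-site analytics rollup before it is written to the database."""
    requests = Counter()
    traffic = Counter()
    for line in lines:
        site, _status, nbytes = line.split()
        requests[site] += 1
        traffic[site] += int(nbytes)
    return requests, traffic

requests, traffic = aggregate(log_lines)
print(requests["example.com"], traffic["example.com"])  # 2 640
```

At hundreds of terabytes per day the interesting problems are partitioning, fault tolerance, and backfill rather than the counting itself, which is why a queue like Kafka sits in front of the aggregators.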
More love for Linux containers (but with less 'Linux')
Could Zero Cool crack a software-defined perimeter?
I have used the “Solarized” colour scheme on my Mac for several years. This is:
…a sixteen color palette … designed for use with terminal and gui applications
If you spend a lot of time using the Terminal, this makes a huge difference. It gives me the right combination of colours to make sure everything is readable, and reduces eye-strain.
I’ve used it for so long that I’ve forgotten about it. It’s become “normal” for me.
Recently I’ve been forced to use PuTTY on Windows. I’d forgotten how terrible the default colour scheme is, particularly when you’re using Vim, or doing an “ls” on a RHEL system. Check this screenshot:
The default LS_COLORS on a RHEL system, combined with the PuTTY defaults, displays directories in dark blue on a black background. Hopeless. I can’t read those directory names.
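You can see where that dark blue comes from by pulling apart the LS_COLORS string itself. A minimal Python sketch, assuming a typical dircolors fragment (the string below is illustrative, not copied from RHEL): the `di` entry is the directory type, and `01;34` means bold plus ANSI foreground colour 34, i.e. blue — near-invisible on PuTTY's default black background.

```python
# Illustrative LS_COLORS fragment: "di" = directory, "ln" = symlink,
# "ex" = executable; values are ANSI SGR codes (34 = blue foreground).
ls_colors = "di=01;34:ln=01;36:ex=01;32"

def color_for(entry_type, spec):
    """Return the ANSI SGR codes a dircolors-style spec assigns to a file type."""
    table = dict(item.split("=", 1) for item in spec.split(":"))
    return table.get(entry_type)

print(color_for("di", ls_colors))  # 01;34 -> bold blue
```

Solarized works around this by retuning the terminal's sixteen-colour palette, so the same `34` code maps to a shade that stays readable.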
I downloaded the “Solarized Dark” registry file from here. Double-click that to merge the registry settings. You’ll then see a new PuTTY Saved session “Solarized Dark”:
Load that session. Save it as the Default Settings if you like. Add any other settings you need – e.g. username, SSH key. Add the hostname/IP, and connect. Now see how …
Yeah, it looks just like this.
Chris Wahl of WahlNetwork.com and co-author of Networking for VMware Administrators joins Ethan Banks for a discussion of when -- and when NOT -- to use VMkernel bindings when doing iSCSI plumbing between VMware hosts and storage arrays.
The post PQ Show 47 – VMKernel Bindings & iSCSI with Chris Wahl appeared first on Packet Pushers Podcast and was written by Ethan Banks.