Unix tips: Saving time by repeating history

Getting work done faster on the command line is one of the never-changing goals of Unix sysadmins, and one way to do this is to find easy ways to reuse commands that you have entered previously – particularly if those commands are complex or tricky to remember. Some of the ways we do this include putting the commands in scripts and turning them into aliases. Another way is to reissue commands that you have entered recently by pulling them from your command history and reusing them, with or without changes.

The easiest and most intuitive way to reissue commands is to use the up and down arrows on your keyboard to scroll through previously entered commands. How far back you can scroll depends on the size of your history buffer. Most people set their history buffers to hold somewhere between 100 and 1,000 commands, but some go way beyond that. Hitting the up arrow 732 times might try your patience, but fortunately there are easy ways to get what you need without wearing out your fingertip! To make this post a little easier to follow, I'm using a modest HISTSIZE setting. You can view your… Continue reading
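As a quick sketch of the kind of reuse the article covers (the specific `tar` and `grep` commands below are made-up placeholders, not from the post), here's how the bash history list can be seeded and inspected in a script, with the common interactive reuse shortcuts noted in comments:

```shell
#!/bin/bash
# History is normally recorded only in interactive shells; turn it on here
# so this example runs as a script.
set -o history
HISTSIZE=1000        # keep up to 1,000 commands in the history buffer

# Append entries to the history list without executing them.
history -s "tar -czf /tmp/etc-backup.tgz /etc"
history -s "grep -rn TODO src/"

history 2            # print the last two history entries, numbered

# Interactive reuse shortcuts (type these at a prompt, not in a script):
#   !!        rerun the previous command
#   !42       rerun history entry number 42
#   !grep     rerun the most recent command starting with "grep"
#   Ctrl-R    reverse-search the history as you type
```

The numbers printed by `history` are what the `!n` shortcut refers to, so listing first and then reissuing by number saves all that arrow-key scrolling.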

Amazon Gets Serious About GPU Compute On Clouds

In the public cloud business, scale is everything – hyper, in fact – and having too many different kinds of compute, storage, or networking makes support more complex and investment in infrastructure more costly. So when a big public cloud like Amazon Web Services invests in a non-standard technology, that means something. In the case of Nvidia’s Tesla accelerators, it means that GPU compute has gone mainstream.

It may not be obvious, but AWS tends to hang back on some of the Intel Xeon compute on its cloud infrastructure, at least compared to the largest supercomputer centers and hyperscalers like… Continue reading

Amazon Gets Serious About GPU Compute On Clouds was written by Timothy Prickett Morgan at The Next Platform.

You will be using mobile VR and AR in two years—even if you don’t believe it

Casual mobile virtual reality (VR) will eat the world when Google announces its Daydream VR platform with its six hardware partners in October. Within two years, millions of consumers will become accustomed to using augmented reality (AR) and VR casually, like they use GPS and voice-to-text now, because there will be a VR app for that – whatever that is. Extending VR into the mobile app ecosystem will produce VR use cases that haven't dawned on the average consumer.

+ Also on Network World: Google Daydream is a contrarian platform bet on mobile virtual reality +

Was Trump bitten by Twitter time-stamp bug that stung Alec Baldwin’s wife?

The answer is almost certainly no, but… If you've been following the political news today, one joyously mocked aspect of Donald Trump's latest Twitter rant early this morning has been that one of the tweets was apparently sent at 3:20 a.m. I say apparently – despite the clearly visible 3:20 a.m. time-stamp – because Twitter time-stamps have been known to go haywire in the past, sometimes causing problems, such as when the bug made it appear that Alec Baldwin's wife Hilaria had tweeted idle pleasantries during the June 2013 funeral of Sopranos star James Gandolfini. Hilaria had done no such thing, but erroneous reports to the contrary, sparked by the bug, caused her husband to blow a gasket.

Splunk intent on extending cybersecurity leadership

I attended the Splunk user conference earlier this week (.Conf2016) and came away pretty impressed. Since I started watching Splunk years ago, the company has climbed from a freemium log management and query tool for IT and security nerds to one of the leading security analytics and operations platforms. Not surprisingly, security now represents around 40 percent of Splunk's revenue. Given the state of the cybersecurity market, Splunk wants to work with existing customers and get new ones to join in to build on this financial and market success.

To that end, Splunk highlighted three enhancements for its enterprise security product:

1. An ecosystem and architecture for incident response. Splunk often acts as a security nexus for its customers, integrating disparate data into a common platform. It now wants to extend this position from analytics to incident response by building IR capabilities into its own software and extending this architecture to partners through APIs, workflows and automation. Splunk calls this adaptive response. For now, Splunk doesn't see itself as an IR automation and orchestration platform for complex enterprise environments (in fact, Phantom and ServiceNow were both exhibiting at the event), but it does want to use its Continue reading

White House asks: Do you need more data portability?

It's a question of who controls your data – all of it. Think of all the data that, say, Apple, Google, Facebook or even your health care provider has collected on you, and imagine you wanted to remove it or move it elsewhere. It wouldn't be easy.

The White House Office of Science and Technology Policy (OSTP) has issued a request for information asking how much data portability is too much or too little, and what the implications are.

+ More on Network World: The weirdest, wackiest and coolest sci/tech stories of 2016 (so far!) +

Fun in the Lab: IWAN, LiveAction, Prime, UDP Director

Okay… so just some major geeky fun in the lab. I had lots of fun doing it… so why not share it with you and let you in on some geeky fun? A 38-minute YouTube video with a PDF guide book. Little bit of this… little bit of that.

  • Lancope UDP Director
  • LiveAction
  • Spirent TestCenter
  • IWAN
  • Prime

PDF of slides

Breakdown of YouTube sections and corresponding approximate timestamps:

  • Overview – start to ~6:00
  • IWAN Policy & Status – 6:10 to 14:20
    • Check IWAN MC policy & status
    • At Store1, check IWAN status
    • Check traffic – EF & CS1
  • Monitoring Traffic Flows – 14:20 to 20:20
    • In LiveAction, see the traffic flows
    • In Prime's new IWAN PfR monitoring, look for traffic flows
  • Lancope UDP Director & Troubleshooting – 20:20 to 27:20
    • Troubleshoot in Lancope UDP Director
    • Find missing forwarding rules
    • Fix missing forwarding rules
    • Sniffer capture
  • Monitoring Traffic Flows – 27:20 to 28:20
    • In Prime, see the traffic flows
  • Impairment & Traffic Flows – 28:20 to 38:00
    • Cause delay on MPLS at Store 1
    • Verify LiveAction, Prime and CLI all see the same


Why automation doubles IT outsourcing cost savings

Outsourcing consultancy and research firm Information Services Group (ISG) this week unveiled a new research report that quantifies the cost savings and productivity gains from automating IT services.

The inaugural Automation Index shows that improvements in productivity fueled by automation can more than double the cost savings typically derived from outsourcing IT. Total cost reduction ranged from 26 percent to 66 percent, depending on the service tower, with 14 to 28 percentage points of those savings directly attributable to automation, according to ISG. (The typical cost savings from labor arbitrage and process improvements alone range from 20 percent to 30 percent.)

Firefox blocks websites with vulnerable encryption keys

To protect users from cryptographic attacks that can compromise secure web connections, the popular Firefox browser will block access to HTTPS servers that use weak Diffie-Hellman keys.

Diffie-Hellman is a key exchange protocol that is slowly replacing the widely used RSA key agreement for the TLS (Transport Layer Security) protocol. Unlike RSA, Diffie-Hellman can be used with TLS's ephemeral modes, which provide forward secrecy – a property that prevents the decryption of previously captured traffic if the key is cracked at a later time.

However, in May 2015 a team of researchers devised a downgrade attack that could compromise the encrypted connection between browsers and servers if those servers supported DHE_EXPORT, a version of Diffie-Hellman key exchange imposed on exported cryptographic systems by the U.S. National Security Agency in the 1990s, which limited the key size to 512 bits. In May 2015, around 7 percent of websites on the internet were vulnerable to the attack, which was dubbed Logjam.
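To see why key size matters, here is a toy sketch of the Diffie-Hellman exchange using plain bash arithmetic. The parameters are absurdly small for illustration only; real TLS groups use 2048-bit or larger primes, and the 512-bit export-grade groups targeted by Logjam are breakable with modest resources:

```shell
# Toy Diffie-Hellman with tiny public parameters -- illustration only.
p=23; g=5                  # public: prime modulus and generator
a=6; b=15                  # private: each side's secret exponent
A=$(( g**a % p ))          # Alice sends g^a mod p
B=$(( g**b % p ))          # Bob sends g^b mod p
s_alice=$(( B**a % p ))    # Alice computes (g^b)^a mod p
s_bob=$(( A**b % p ))      # Bob computes (g^a)^b mod p
echo "$s_alice $s_bob"     # both sides derive the same shared secret: prints "2 2"
```

With a 23-element group, an eavesdropper who sees A and B can recover a secret exponent by trying all 22 possibilities; Logjam scales the same basic idea to 512-bit groups by doing most of the cracking work as a one-time precomputation per group.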

Stuff The Internet Says On Scalability For September 30th, 2016

Hey, it's HighScalability time:

Everything is a network. Map showing the global genetic interaction network of a cell.

If you like this sort of Stuff then please support me on Patreon.

  • 18: Google can now drink and drive in Washington, DC; $10 billion: cost of a Vision Quest to Mars; 620 Gbps: DDoS attack on KrebsOnSecurity; 1 Tbps: DDoS attack on OVH; $200,000: cost of a typical cyber incident; 8 million: video training dataset labeled with 4,800 labels; 180: Amazon warehouses in the US; 10: bits of info per photon; 16: GPUs in new AI killer P2 instance type

  • Quotable Quotes:
    • @markmccaughrean: 1,000,000 people to Mars in 100 yrs. 10 people/launch? That's 3 a day, every day, for a century. 1% failure rate? One explosion every month
    • @jeremiahg: Any sufficiently advanced exploit is indistinguishable from a 400lb hacker.
    • BrianKrebs: I suggested to Mr. Wright perhaps a better comparison was that ne’er-do-wells now have a virtually limitless supply of Stormtrooper clones that can be conscripted into an attack at a moment’s notice.
    • Sonia: Academia's not-so-subtle disdain for applied research does more than damage a few promising careers; it Continue reading

LILEE Systems’ new fog computing platform is well suited to distributed enterprises  

This column is available in a weekly newsletter called IT Best Practices. Location, location, location! It turns out that mantra is not just for the real estate market. Location is a critical aspect of fog computing as well.

Cisco introduced the notion of fog computing about two and a half years ago. (See Cisco unveils 'fog computing' to bridge clouds and the Internet of Things.) This distributed computing architecture addresses the challenge of backhauling a lot of raw data generated in the field – say, from thousands or millions of IoT devices – to the cloud for analysis.

Tips to help select and manage your co-location vendor

The percentage of IT processed at in-house sites has remained steady at around 70 percent, but data points to a major shift to co-location and cloud for new workloads in the coming years.

Half of senior IT execs expect the majority of their IT workloads to reside off-premises in the future, according to Uptime Institute's sixth annual Data Center Industry Survey. Of those, 70 percent expect that shift to happen by 2020.

+ Also on Network World: 10 tips for a successful cloud plan +

It is hard to predict what percentage will go to public cloud, but a significant portion of those workloads will be shifting to co-location providers – companies that provide data center facilities and varying levels of operations management and support.