Was Trump bitten by Twitter time-stamp bug that stung Alec Baldwin’s wife?

The answer is almost certainly no, but … If you’ve been following the political news today, one joyously mocked aspect of Donald Trump’s latest Twitter rant early this morning has been that one of the tweets was apparently sent at 3:20 a.m. I say apparently – despite the clearly visible 3:20 a.m. time-stamp – because Twitter time-stamps have been known to go haywire in the past, sometimes causing problems, such as when a bug made it appear that Alec Baldwin’s wife Hilaria had tweeted idle pleasantries during the June 2013 funeral of Sopranos star James Gandolfini. Hilaria had done no such thing, but erroneous reports to the contrary, sparked by the bug, caused her husband to blow a gasket. To read this article in full or to leave a comment, please click here

Splunk intent on extending cybersecurity leadership

I attended the Splunk user conference earlier this week (.Conf2016) and came away pretty impressed. Since I started watching Splunk years ago, the company has climbed from a freemium log management and query tool for IT and security nerds to one of the leading security analytics and operations platforms. Not surprisingly, then, security now represents around 40 percent of Splunk’s revenue. Given the state of the cybersecurity market, Splunk wants to work with existing customers, and get new ones to join in, to build on this financial and market success. To that end, Splunk really highlighted three enhancements for its enterprise security product:

1. An ecosystem and architecture for incident response. Splunk often acts as a security nexus for its customers, integrating disparate data into a common platform. It now wants to extend this position from analytics to incident response by building IR capabilities into its own software and extending this architecture to partners through APIs, workflows and automation. Splunk calls this adaptive response. For now, Splunk doesn’t see itself as an IR automation and orchestration platform for complex enterprise environments (in fact, Phantom and ServiceNow were both exhibiting at the event), but it does want to use its … Continue reading

White House asks: Do you need more data portability?

It’s a question of who controls your data – all of it. Think of all the data that, say, Apple, Google, Facebook or even your health care provider has collected on you, and imagine you wanted to remove it or move it elsewhere. It wouldn’t be easy. The White House Office of Science and Technology Policy (OSTP) has issued a request for information asking how much data portability is too much or too little, and what the implications are. +More on Network World: The weirdest, wackiest and coolest sci/tech stories of 2016 (so far!)+ To read this article in full or to leave a comment, please click here

Fun in the Lab: IWAN, LiveAction, Prime, UDP Director

Okay… so just some major geeky fun in the lab. I had lots of fun doing it… so why not share it with you and let you in on some geeky fun? A thirty-eight-minute YouTube video with a PDF guide book. Little bit of this… little bit of that.

[Image: geeky_fun_overview]

  • Lancope UDP Director
  • LiveAction
  • Spirent TestCenter
  • IWAN
  • Prime

PDF of slides

Breakdown of YouTube sections and corresponding approximate timestamps:

  • Overview – start to ~6:00
  • IWAN Policy & Status – 6:10 to 14:20
    • Check IWAN MC Policy & Status
    • At Store1 check IWAN status
    • Check traffic – EF & CS1
  • Monitoring Traffic Flows – 14:20 to 20:20
    • In LiveAction see the traffic flows
    • In Prime’s new IWAN PfR monitoring look for traffic flows
  • Lancope UDP Director & Troubleshooting – 20:20 to 27:20
    • Troubleshoot in Lancope UDP Director
    • Find missing forwarding rules
    • Fix missing forwarding rules
    • Sniffer Capture
  • Monitoring Traffic Flows – 27:20 to 28:20
    • In Prime see the traffic flows
  • Impairment & Traffic Flows – 28:20 to 38:00
    • Cause delay on MPLS at Store 1
    • Verify LiveAction, Prime and CLI all see the same

Why automation doubles IT outsourcing cost savings

Outsourcing consultancy and research firm Information Services Group (ISG) this week unveiled a new research report to quantify the cost savings and productivity gains from automating IT services. The inaugural Automation Index shows improvements in productivity fueled by automation can more than double the cost savings typically derived from outsourcing IT. Total cost reduction ranged from 26 percent to 66 percent, depending on the service tower, with 14 to 28 percentage points of these savings directly attributable to automation, according to ISG. (The typical cost savings from labor arbitrage and process improvements alone range from 20 percent to 30 percent.) To read this article in full or to leave a comment, please click here
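
As a quick, hedged illustration of why the report can claim automation "more than doubles" typical outsourcing savings, here is a back-of-envelope sketch in Python. The ranges are the ones ISG quotes above; the comparison itself is my own illustration, not ISG's model.

```python
# Back-of-envelope comparison using the ISG figures quoted above (not ISG's model).
typical_without_automation = (0.20, 0.30)   # savings from labor arbitrage + process improvement alone
automation_points = (0.14, 0.28)            # percentage points ISG attributes directly to automation
total_reduction = (0.26, 0.66)              # total cost reduction, depending on service tower

# How much automation adds relative to the non-automated baseline:
low_uplift = automation_points[0] / typical_without_automation[1]   # 0.14 / 0.30 ≈ 0.47x
high_uplift = automation_points[1] / typical_without_automation[0]  # 0.28 / 0.20 = 1.40x

print(f"Automation adds roughly {low_uplift:.0%} to {high_uplift:.0%} on top of typical savings")
print(f"Total reported reduction: {total_reduction[0]:.0%} to {total_reduction[1]:.0%}")
```

At the favorable end, 28 extra points on a 20 percent baseline is well over a doubling, which is where the headline claim comes from.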

Firefox blocks websites with vulnerable encryption keys

To protect users from cryptographic attacks that can compromise secure web connections, the popular Firefox browser will block access to HTTPS servers that use weak Diffie-Hellman keys. Diffie-Hellman is a key exchange protocol that is slowly replacing the widely used RSA key agreement for the TLS (Transport Layer Security) protocol. Unlike RSA, Diffie-Hellman can be used with TLS's ephemeral modes, which provide forward secrecy -- a property that prevents the decryption of previously captured traffic if the key is cracked at a later time. However, in May 2015 a team of researchers devised a downgrade attack that could compromise the encrypted connection between browsers and servers if those servers supported DHE_EXPORT, a version of Diffie-Hellman key exchange imposed on exported cryptographic systems by the U.S. National Security Agency in the 1990s, which limited the key size to 512 bits. In May 2015, around 7 percent of websites on the internet were vulnerable to the attack, which was dubbed Logjam. To read this article in full or to leave a comment, please click here
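
As a rough sketch of the policy described above (not Firefox's actual NSS code), the check boils down to refusing any server whose ephemeral Diffie-Hellman prime is too short. The 1024-bit cutoff below is an assumption for illustration, chosen to sit comfortably above the 512-bit export-grade keys the article mentions.

```python
# Illustrative weak-DH check; not Firefox's real implementation.
MIN_DH_PRIME_BITS = 1024  # assumed cutoff; 512-bit DHE_EXPORT primes fall far below it

def dh_prime_is_acceptable(prime: int) -> bool:
    """Accept a server's ephemeral Diffie-Hellman prime only if it is long enough."""
    return prime.bit_length() >= MIN_DH_PRIME_BITS

# A 512-bit export-grade prime (stand-in value) would be refused:
export_grade_prime = (1 << 511) | 1
print(dh_prime_is_acceptable(export_grade_prime))   # False -> connection blocked
print(dh_prime_is_acceptable((1 << 2047) | 1))      # True  -> a 2048-bit prime is fine
```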

Stuff The Internet Says On Scalability For September 30th, 2016

Hey, it's HighScalability time:

Everything is a network. Map showing the global genetic interaction network of a cell.

If you like this sort of Stuff then please support me on Patreon.

  • 18: Google can now drink and drive in Washington DC; $10 billion: cost of a Vision Quest to Mars; 620 Gbps: DDoS attack on KrebsOnSecurity; 1 Tbps: DDoS attack on OVH; $200,000: cost of a typical cyber incident; 8 million: video training dataset labeled with 4800 labels; 180: Amazon warehouses in the US; 10: bits of info per photon; 16: GPUs in new AI killer P2 instance type

  • Quotable Quotes:
    • @markmccaughrean: 1,000,000 people to Mars in 100 yrs. 10 people/launch? That's 3 a day, every day, for a century. 1% failure rate? One explosion every month
    • @jeremiahg: Any sufficiently advanced exploit is indistinguishable from a 400lb hacker.
    • BrianKrebs: I suggested to Mr. Wright perhaps a better comparison was that ne’er-do-wells now have a virtually limitless supply of Stormtrooper clones that can be conscripted into an attack at a moment’s notice.
    • Sonia: Academia’s not-so-subtle distain for applied research does more than damage a few promising careers; it Continue reading

LILEE Systems’ new fog computing platform is well suited to distributed enterprises  

This column is available in a weekly newsletter called IT Best Practices. Click here to subscribe. Location, location, location! It turns out that mantra is not just for the real estate market. Location is a critical aspect of fog computing as well. Cisco introduced the notion of fog computing about two and a half years ago. (See Cisco unveils 'fog computing' to bridge clouds and the Internet of Things.) This distributed computing architecture addresses the challenge of backhauling a lot of raw data generated in the field – say, from thousands or millions of IoT devices – to the cloud for analysis. To read this article in full or to leave a comment, please click here
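
To make the backhaul point concrete, here is a minimal, hypothetical sketch (my own illustration, not LILEE's or Cisco's software) of the kind of edge-side reduction a fog node performs: raw readings stay local, and only a compact summary travels to the cloud.

```python
# Hypothetical fog-node sketch: summarize raw IoT readings locally, ship only the summary.
from statistics import mean

def summarize_window(readings: list[float]) -> dict:
    """Reduce a window of raw sensor samples to a small record for the cloud."""
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "min": min(readings),
        "max": max(readings),
    }

raw_window = [21.1, 21.3, 20.9, 35.0, 21.2]   # e.g., one minute of temperature samples
print(summarize_window(raw_window))            # one small record instead of every raw sample
# {'count': 5, 'mean': 23.9, 'min': 20.9, 'max': 35.0}
```

The design choice is the one the column describes: analysis that is sensitive to location and latency happens near the devices, and only the distilled result crosses the WAN.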

Tips to help select and manage your co-location vendor

The percentage of IT processed at in-house sites has remained steady at around 70 percent, but data points to a major shift to co-location and cloud for new workloads in the coming years. Half of senior IT execs expect the majority of their IT workloads to reside off-premise in the future, according to Uptime Institute’s sixth annual Data Center Industry Survey. Of those, 70 percent expect that shift to happen by 2020. + Also on Network World: 10 tips for a successful cloud plan + It is hard to predict what percentage will go to public cloud, but a significant portion of those workloads will be shifting to co-location providers—companies that provide data center facilities and varying levels of operations management and support. To read this article in full or to leave a comment, please click here

33% off Kinivo 5 Port HDMI Switch With Auto-Switching & Remote – Deal Alert

This highly rated switch from Kinivo takes 5 HDMI inputs from your various devices and outputs them to one HDMI connection. It's ideal for TVs that just don't have that many HDMI inputs. The 501BN will automatically switch to the currently active input source if there is only one active input. If there are multiple active inputs, you can simply select one using the IR remote or the selector button on the unit itself. It supports video up to 1080p and 3D as well. The item currently averages 4.5 out of 5 stars on Amazon from over 9,000 customers (read reviews), and its list price of $59.99 is currently discounted 33% to $39.99. To read this article in full or to leave a comment, please click here

The Overlay Problem: Getting In and Out

I've been researching overlay network strategies recently. There are plenty of competing implementations available, employing various encapsulations and control plane designs. But every design I've encountered seems ultimately hampered by the same issue: scalability at the edge.

Why Build an Overlay?

Imagine a scenario where we've got 2,000 physical servers split across 50 racks. Each server functions as a hypervisor housing on average 100 virtual machines, resulting in a total of approximately 200,000 virtual hosts (~4,000 per rack).

In an ideal world, we could allocate a /20 of IPv4 space to each rack. The top-of-rack (ToR) L3 switches in each rack would advertise this /20 northbound toward the network core, resulting in a clean, efficient routing table in the core. This is, of course, how IP was intended to function.
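
As a quick check on that ideal plan's arithmetic (using Python's ipaddress module; the prefix value below is just an example), a /20 per rack comfortably covers the ~4,000 virtual hosts each rack carries, and the core only ever sees one aggregate route per rack:

```python
# Sizing check for the ideal per-rack aggregation scheme described above.
import ipaddress

servers, racks, vms_per_server = 2000, 50, 100
vms_per_rack = servers // racks * vms_per_server       # 40 servers/rack * 100 VMs = 4,000

rack_prefix = ipaddress.ip_network("10.0.0.0/20")      # example prefix; any /20 works
usable = rack_prefix.num_addresses - 2                 # 4,096 addresses minus network/broadcast

print(f"{vms_per_rack} hosts per rack, {usable} usable addresses per /20")
print(f"Core routing table: {racks} aggregate routes (one /20 per rack)")
```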

Unfortunately, this approach isn't usually viable in the real world because we need to preserve the ability to move a virtual machine from one hypervisor to another (often residing in a different rack) without changing its assigned IP address. Establishing the L3 boundary at the ToR switch prevents us from doing this efficiently.
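
Here is the same problem expressed in those terms (illustrative prefixes, not a real deployment): once a VM keeps its address across a move, that address falls outside the aggregate its new rack's ToR advertises, so the core needs either a host route for it or an overlay to hide the move.

```python
# Why VM mobility breaks per-rack aggregation (illustrative prefixes).
import ipaddress

rack1 = ipaddress.ip_network("10.0.0.0/20")    # aggregate advertised by rack 1's ToR
rack2 = ipaddress.ip_network("10.0.16.0/20")   # aggregate advertised by rack 2's ToR

vm = ipaddress.ip_address("10.0.3.27")         # VM addressed out of rack 1's block

print(vm in rack1)   # True  -- reachable via rack 1's aggregate
print(vm in rack2)   # False -- after migrating to rack 2, the VM's address is not covered
                     #          by rack 2's /20, so the core needs a /32 or an overlay
```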

Continue reading · 22 comments