Can IoT platforms from Apple, Google and Samsung make home automation systems more secure?

In August 2017, a new botnet called WireX appeared and began causing damage by launching significant DDoS attacks. The botnet counted tens of thousands of nodes, most of which appeared to be hacked Android mobile devices.

There are a few important aspects of this story.

First, tracking the botnet down and mitigating its activities was part of a wide collaborative effort by several tech companies. Researchers from Akamai, Cloudflare, Flashpoint, Google, Oracle Dyn, RiskIQ, Team Cymru, and other organizations cooperated to combat this botnet. This is a great example of Collaborative Security in practice.

Second, researchers shared the data, analysed the signatures, and were able to track down a set of malware apps, while Google played an important role in removing them from the Play Store and from infected devices.

Google’s Verify Apps is a cloud-based service that proactively checks every application prior to installation to determine whether it is potentially harmful, and subsequently rechecks devices regularly to help ensure they’re safe. Verify Apps checks more than 6 billion installed application instances and scans around 400 million devices per day.

In the case of WireX, the apps had previously passed the checks. But thanks to the researchers’ findings, Google Continue reading

Out of the Section 230 Weeds: Internet Publisher-Providers

On Tuesday, the U.S. Congress continued to grapple with the potential implications of the Stop Enabling Sex Traffickers Act (SESTA). SESTA would carve out an exception to Section 230 of the 1996 Communications Decency Act, which is considered a bedrock upon which the modern Internet has flourished. If SESTA became law, websites that host ads for sex with children would not be immune from state prosecutions and private lawsuits [although under 230(e)(1), websites are already subject to federal criminal law statutes].

Section 230 of the Communications Decency Act (c)(1) states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 230(c)(2) protects actors who proactively block and screen for offensive material. These provisions have allowed the Internet to grow and develop without the threat of lawsuits smothering its potential. If the websites of 1990 had been liable for everything their users posted, the Internet would look very different today.

Since 1996, the Internet has dramatically changed in ways unanticipated by the Communications Decency Act. The Internet provides the platform to publish material that can reach enormous numbers of people around Continue reading

Browser hacking for 280 character tweets

Twitter has raised the limit to 280 characters for a select number of people. However, they left open a hole that allows anybody to post longer tweets with a little bit of hacking. Only basic hacking skills are needed, and I thought I'd write them up in a blog post.


Specifically, the skills you will exercise are:

  • basic command-line shell
  • basic HTTP requests
  • basic browser DOM editing

The short instructions

The basic instructions were found in tweets like the following:

These instructions are clear to the average hacker, but of course, a bit difficult for those learning hacking, hence this post.

The command-line

The basics of most hacking start with knowledge of the command-line. This is the "Terminal" app under macOS or cmd.exe under Windows. Almost always when you see hacking dramatized in the movies, they are using the command-line.
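
Once at a terminal, the next skill on the list is crafting HTTP requests by hand. As a minimal sketch of the "basic HTTP requests" step, here is how such a POST could be assembled in Python; the endpoint URL and the `weighted_mode` parameter are hypothetical placeholders for illustration, not Twitter's actual API:

```python
# Build (without sending) a hand-crafted HTTP POST, the kind of request
# the browser would normally make for you. Endpoint and parameter names
# below are placeholders, not real Twitter API values.
from urllib.parse import urlencode
from urllib.request import Request

form = {
    "status": "A tweet longer than the usual 140-character limit...",
    "weighted_mode": "true",  # hypothetical flag standing in for the longer-limit switch
}
body = urlencode(form).encode("utf-8")

req = Request(
    "https://example.com/statuses/update.json",  # placeholder endpoint
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    method="POST",
)

print(req.method)          # POST
print(req.get_full_url())
print(req.data.decode())
```

From here, passing the request to `urllib.request.urlopen(req)` (with valid credentials) would actually send it; inspecting `req.data` first is a good habit when replaying requests by hand.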

When disasters strike, edge computing must kick in

Edge computing and fog networks must be programmed to kick in when the internet fails during disasters, a scientific research team says. That way, emergency managers can draw on impacted civilians’ location data, social networking images and tweets and use them to gain situational awareness of scenes.

Routers, mobile phones and other devices should continue to collect social sensor data during these events, but instead of first attempting to send it through to traditional cloud-based depositories operated by the social network — which are unavailable due to the outage — the geo-distributed devices should divert the data to local edge computing, fog nodes and other hardened resources. Emergency officials can then access it.

To read this article in full or to leave a comment, please click here
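
The divert-to-edge behavior described here amounts to a simple fallback: try the cloud depository first, and hand the data to a local node when the upload fails. Everything in this sketch (the uploader functions, the record format) is a stand-in for illustration, not a real disaster-response API:

```python
# Toy sketch of cloud-first, edge-fallback data submission.
def upload_to_cloud(record):
    # Stand-in for the social network's cloud depository; simulate the outage.
    raise ConnectionError("backbone down")

edge_store = []  # stand-in for a hardened local edge/fog node

def upload_to_edge(record):
    edge_store.append(record)
    return "edge"

def submit(record):
    """Prefer the cloud depository; divert to the edge node on failure."""
    try:
        return upload_to_cloud(record)
    except ConnectionError:
        return upload_to_edge(record)

destination = submit({"lat": 29.76, "lon": -95.37, "text": "street flooded"})
print(destination)      # "edge"
print(len(edge_store))  # 1
```

In a real deployment the fallback would also need to queue and forward the diverted data once connectivity returns, but the control flow is the same.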

Faster Builds with Ansible Container 0.9.2

Ansible Container 0.9.2

The focus for the latest release of Ansible Container is on making builds faster through the availability of pre-baked Conductor images. The release landed this week thanks to the dedication of Joshua ‘jag’ Ginsberg, Ansible’s Chief Architect, who managed to put the finishing touches on the release while at AnsibleFest San Francisco.

The Ansible Container project is dedicated to helping Ansible users re-use existing Ansible roles and playbooks to build containers, and deploy applications to OpenShift. The Conductor container is at the center of building, orchestrating, and deploying containers. It’s the engine that makes it all work, and it brings with it a copy of Ansible, a Python runtime, docker packages, and other dependencies.

The first step, before any serious work gets done by the command line tool, is standing up a Conductor container. And up until now, that meant building the image from scratch, and waiting through all the package downloading and installing. This happens at the start of a project, and repeats anytime you find yourself needing to rebuild from scratch.

With this release, the team has made available a set of pre-baked images based on several distributions that are popular within the community. These images are currently Continue reading

IDG Contributor Network: 4G LTE internet is a network-saver

4G LTE Internet is an under-utilized asset for your company’s network… and your sanity.

As someone who’s owned a business telecom, Internet, and cloud brokerage for 14 years [shameless plug], I’ve had my share of drama surrounding circuits taking too long to install. Whether it’s fiber taking a year to get built out, or a T1 taking 6 weeks to install (when our customer’s business was relocating in 4), being at the mercy of an ISP’s unexplainable, bureaucratic timeline has been the most stressful part of my job.

To read this article in full or to leave a comment, please click here

Aligning Your Team around Microservices When There’s No Precise Definition

This is a guest post by Roger Jin, Software Architect at ButterCMS and co-author of Microservices for Startups.

For a profession that stresses the importance of naming things well, we've done ourselves a disservice with microservices. The problem is that there is nothing inherently "micro" about microservices. Some can be small, but size is relative and there's no standard unit of measure across organizations. A "small" service at one company might be one million lines of code, while at another it might be far smaller.

Some argue that microservices aren’t a new thing at all and rather a rebranding of Service Oriented Architectures, while others advocate for viewing microservices as an implementation of SOA similar to how Scrum is an implementation of Agile.

How do you align your team when no precise definitions of microservices exist? The most important thing when talking about microservices on a team is to ensure that you are grounded in a common starting point.

But ambiguous definitions don’t help with this. It would be like trying to put Agile into practice without context for what you are trying to achieve, or an understanding of precise methodologies like Scrum.

Finding common ground 

Real-time visibility and control of campus networks

Many of the examples on this blog describe network-visibility-driven control of data center networks. However, campus networks face many similar challenges, and the availability of industry-standard sFlow telemetry and RESTful control APIs in campus switches makes it possible to apply feedback control.

HPE Aruba has an extensive selection of campus switches that combine programmatic control via a REST API with hardware sFlow support:
  • Aruba 2530 
  • Aruba 2540 
  • Aruba 2620
  • Aruba 2930F
  • Aruba 2930M
  • Aruba 3810
  • Aruba 5400R
  • Aruba 8400
This article presents an example of implementing quota controls using HPE Aruba switches.

Typically, a small number of hosts are responsible for the majority of traffic on the network: identifying those hosts, and applying controls to their traffic to prevent them from unfairly dominating, ensures fair access for all users.
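
The quota idea can be sketched in a few lines: given per-host byte counts (as might be derived from sFlow samples), flag the hosts exceeding a quota so that a control (such as an ACL or rate limit pushed through the switch's REST API) can be applied to them. The addresses, counts, and quota value below are purely illustrative:

```python
# Flag hosts whose per-interval traffic exceeds a quota.
QUOTA_BYTES = 10_000_000  # illustrative per-interval quota

# Per-host byte counts, as might be aggregated from sFlow samples.
host_bytes = {
    "10.0.0.5": 95_000_000,   # heavy P2P user dominating the link
    "10.0.0.12": 2_000_000,
    "10.0.0.31": 450_000,
}

over_quota = [host for host, count in host_bytes.items() if count > QUOTA_BYTES]
print(over_quota)  # ['10.0.0.5']
```

A controller would run this check on each measurement interval, apply a control to the flagged hosts, and remove it once their usage falls back under the quota.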

Peer-to-peer protocols (P2P) pose some unique challenges:
  • P2P protocols make use of very large numbers of connections in order to quickly transfer data. The large number of connections allows a P2P user to obtain a disproportionate amount of network bandwidth; even a small number of P2P users (less than 0.5% of users) can consume over 90% of the network bandwidth.
  • P2P protocols (and users) are very good Continue reading

The Docker Modernize Traditional Apps (MTA) Program Adds Microsoft Azure Stack

In April of this year, Docker announced the Modernize Traditional Apps (MTA) POC program with partners Avanade, Booz Allen, Cisco, HPE and Microsoft. The MTA program is designed to help IT teams flip the 80% maintenance to 20% innovation ratio on its head. The combination of Docker Enterprise Edition (EE), services and infrastructure into a turnkey program delivers portability, security and efficiency for the existing app portfolio, driving down total costs and making room for innovation like cloud strategies and new app development. The program starts by packaging existing apps into isolated containers, providing the opportunity to migrate them to new on-prem or cloud environments, without any recoding.

Docker customers have already been taking advantage of the program to jumpstart their migration to Azure, and they are experiencing dramatically reduced deployment and scaling times — from weeks to minutes — and cutting their total costs by 50% or more.

The general availability of Microsoft Azure Stack provides IT with the ability to manage their datacenters in the same way they manage Azure. The consistency in hybrid cloud infrastructure deployment, combined with consistency in application packaging, deployment and management, only further enhances operational efficiency. Docker is pleased Continue reading

History Of Networking – Tony Li – BGP

Tony Li has had a distinguished career working as a networking software architect at some of the largest networking vendors in the world. In this episode of Network Collective, Tony joins us to discuss his involvement in the creation and implementation of BGP, the routing protocol that enables the Internet.

Links, FYI:

BGP Napkin

The image above is a capture of the original BGP design, sketched on two napkins by Kirk Lougheed of Cisco and Yakov Rekhter of IBM in 1989.

RFC 4271 – BGP


Tony Li – Guest
Russ White – Host
Donald Sharp – Host
Eyvonne Sharp – Host

Outro Music:
Danger Storm Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0 License
http://creativecommons.org/licenses/by/3.0/

The post History Of Networking – Tony Li – BGP appeared first on Network Collective.

Nvidia accelerates the path to AI for IoT, hyperscale data centers

It’s safe to say the Internet of Things (IoT) era has arrived, as we live in a world where things are being connected at a pace never seen before. Cars, video cameras, parking meters, building facilities and anything else one can think of are being connected to the internet, generating massive quantities of data.

The question is how does one interpret the data and understand what it means? Clearly, trying to process this much data manually doesn’t work, which is why most of the web-scale companies have embraced artificial intelligence (AI) as a way to create new services that can leverage the data. This includes speech recognition, natural language processing, real-time translation, predictive services and contextual recommendations. Every major cloud provider and many large enterprises have AI initiatives underway.

To read this article in full or to leave a comment, please click here

Plans for First Exascale Supercomputer in U.S. Released

This morning a presentation surfaced from the Department of Energy’s Office of Science showing the roadmap to exascale, with a 2021 machine at Argonne National Lab.

This is the Aurora machine, which had an uncertain future this year when its budgetary and other details were thrown into question. We understood the deal was being restructured, and indeed it has been. The system was originally slated to appear in 2018 with 180-petaflop potential. Now it is 1,000 petaflops, an exascale-capable machine, and will be delivered in 2021—right on target with the projected revised plans for exascale released earlier this

Plans for First Exascale Supercomputer in U.S. Released was written by Nicole Hemsoth at The Next Platform.