Wi-Fi backers warn about unlicensed LTE while Ericsson claims speed boost

The Wi-Fi Alliance warned that LTE on unlicensed frequencies could interfere with Wi-Fi and said it plans to collaborate with the 3GPP cellular standards group to help prevent that. Mobile operators are starting to explore the use of the unlicensed 5GHz band for LTE even though many Wi-Fi networks rely on those frequencies. On Tuesday, Ericsson announced it’s testing unlicensed LTE with Qualcomm and that SK Telecom, T-Mobile USA and Verizon are interested in the technology. Most countries set aside large portions of the 5GHz band for use without a license, and Wi-Fi has become a major user of that spectrum. Mobile operators are allowed to use the band even though they have their own licensed frequencies, but LTE wasn’t developed to coexist with other networks in that kind of environment.

Apple’s $710 billion market cap sets new record

At the close of trading on Tuesday, shares of Apple rested at $122.02 a share. Not only did the closing price represent an all-time stock high for the company, it also gave Apple a market cap of $710.74 billion. As a result, Apple is now the first U.S. company to close out the trading day with a market cap over $700 billion. Highlighting the financial behemoth that is Apple, here's how Apple's own market cap stacks up against some other notable tech heavyweights: Amazon has a market cap of $172 billion, Google has a market cap of $358 billion, while Microsoft has a market cap of $346 billion.In a broad sense, Tim Cook clearly knows what he's doing. Taking a closer look at Apple's stock price, however, one can't help but mention Apple's capital return program. When Apple began issuing dividends and engaging in stock buybacks, the company's share price saw an immediate boost. For starters, Apple as a dividend stock instantly became more appealing to large funds. More specifically, a number of large mutual funds are governed by rules which only allow them to invest in dividend paying stocks. Second, Apple's stock buyback program helps Continue reading

Using Docker with Vagrant

As part of my ongoing effort to create tools to assist others in learning some of the new technologies out there, I spent a bit of time today working through the use of Docker with Vagrant. Neither of these technologies should be new to my readers; I’ve already provided quick introductory posts to both (see here and here). However, using these two together may provide a real benefit for users who are new to either technology, so I’d like to take a bit and show you how to use Docker with Vagrant.

Background

Vagrant first started shipping with a Docker provider as part of the core product in version 1.6 (recall that Vagrant uses the concept of providers to support multiple backend virtualization solutions). Therefore, if you’ve installed any recent version of Vagrant, you already have the Docker provider as part of your Vagrant installation.

However, while you may have the Docker provider as part of Vagrant, you still need Docker itself (just like if you have the VMware provider for Vagrant, you still need the appropriate VMware product—VMware Fusion on the Mac or VMware Workstation on Windows/Linux) in order to provide the functionality Vagrant will consume. Continue reading
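As a quick illustration of the provider in action, here is a minimal sketch of driving a container through Vagrant's Docker provider from the shell (the nginx image, port mapping, and Vagrantfile contents are illustrative assumptions, not taken from the original post):

# write a minimal Vagrantfile that uses the Docker provider
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.image = "nginx"        # illustrative image
    d.ports = ["8080:80"]    # map host port 8080 to container port 80
  end
end
EOF

# bring the container up, explicitly selecting the Docker provider
vagrant up --provider=docker

# confirm Vagrant sees the running container
vagrant status

This sketch assumes Docker is already installed and reachable locally, in line with the point above about the provider still needing Docker itself.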

Native TFTP and FTP Server in OSX

As a System Engineer, I do occasionally have to do real field work. When that happens, having access to a TFTP and FTP server is sometimes required. Although the lack of a UI makes them counterintuitive to use, these tools are available in OSX. This post includes the commands required to enable, confirm, and disable both TFTP and FTP in the native Mac environment.

TFTP Server

# load the TFTP daemon (typically starts automatically)
sudo launchctl load -F /System/Library/LaunchDaemons/tftp.plist

# confirm that TFTP is listening (netstat)
netstat -atp UDP | grep tftp
--output--
udp6       0      0  *.tftp                 *.*   # IPv6 listening
udp4       0      0  *.tftp                 *.*   # IPv4 listening

# unload the TFTP daemon
sudo launchctl unload -F /System/Library/LaunchDaemons/tftp.plist

# confirm that TFTP is no longer listening (netstat)
netstat -atp UDP | grep tftp
--no output--

TFTP Caveats

  • Default Directory is /private/tftpboot
  • Copying a file from a device to the TFTP server requires that the destination file be created in advance (Hint: sudo touch /private/tftpboot/<filename>; see the sketch after this list)
  • File permissions typically need to be modified (Hint: sudo chmod 766 /private/tftpboot/*)
  • I just use my TFTP directory for transient file transfers
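Putting those caveats together, a minimal sketch of receiving a file from a device might look like this (the filename backup-config and the ~/configs destination are illustrative, not from the original post):

# pre-create the destination file so the device is allowed to write to it
sudo touch /private/tftpboot/backup-config

# loosen permissions so the tftp daemon can write the incoming copy
sudo chmod 766 /private/tftpboot/backup-config

# after the device finishes the transfer, move the file somewhere permanent and clean up
cp /private/tftpboot/backup-config ~/configs/
sudo rm /private/tftpboot/backup-config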

FTP Server

# load the FTP daemon (typically starts automatically)
sudo launchctl load -w /System/Library/LaunchDaemons/ftp.plist

# confirm that FTP is listening (netstat)
netstat  Continue reading

How to set up a new user on your Amazon AWS server

I recently set up a free Amazon AWS server. As I experimented with it, I installed a GUI desktop. Then I encountered some issues that I eventually resolved by creating a new user with its own password and then using that user for the rest of my activities.

For my own reference, and in the hope others will find it useful, here is the procedure I followed:

  1. Create a new userid, with password
  2. Add the new user to the *sudoers* file
  3. Install the AWS server’s public key for the new user
  4. Log in as the new user

I posted the details in my blog post, below.
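For quick reference, here is a hedged sketch of what those four steps might look like on an Ubuntu instance (the username newuser and the key file name are placeholders, not taken from the original post):

# 1. create the new user; adduser prompts for a password interactively
sudo adduser newuser

# 2. give the new user sudo rights by adding a sudoers entry
echo 'newuser ALL=(ALL:ALL) ALL' | sudo tee /etc/sudoers.d/newuser
sudo chmod 440 /etc/sudoers.d/newuser

# 3. reuse the default ubuntu user's authorized key so the new user can log in with the same key pair
sudo mkdir -p /home/newuser/.ssh
sudo cp /home/ubuntu/.ssh/authorized_keys /home/newuser/.ssh/
sudo chown -R newuser:newuser /home/newuser/.ssh
sudo chmod 700 /home/newuser/.ssh
sudo chmod 600 /home/newuser/.ssh/authorized_keys

# 4. log in as the new user from your workstation using the existing AWS key pair
ssh -i my-aws-key.pem newuser@<server-public-ip>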

Why do we need a password?

The default ubuntu userid does not have a password. The Amazon AWS documentation on managing users recommends creating new users with passwords disabled. So, why set up a new user with a password?

After installing a GUI desktop, you need to use a password to authenticate operations performed by GUI software such as Ubuntu Software Center. I did not see any problems caused by configuring a user password. I found it was best to work in a “normal” Linux user account that has a password.

Create a Continue reading

Apple’s massive solar farm could power its entire California operations

Apple is investing in a vast solar plant in Northern California that will generate as much electricity as the company uses to power all its operations in the state. Apple will invest $850 million in the plant through a partnership with First Solar, CEO Tim Cook said Tuesday. It will cover 1,300 acres—equal to about 1,000 football fields—in Monterey County, about an hour south of Apple’s Silicon Valley headquarters. The plant will generate enough energy that it could power Apple’s entire operations in California, including its data center, retail stores and offices. That’s also enough energy to power 15,000 California homes, Cook said. It doesn’t mean Apple’s stores and offices will consume power directly from the plant. But the investment allows Apple to lock in a low, fixed rate for renewable energy, and probably also obtain renewable energy certificates to offset its carbon footprint.

FCIP – The Beginning

FCIP is notably a part of the CCIE Data Center lab exam blueprint. It is also a sticking point for a lot of candidates who have not done a whole lot on the storage networking side. Luckily FCIP has many correlations to the modern-day Ethernet networking that we all know and love, as it’s really just another tunneling technology! After some thought, I have decided to break this down into two blog posts. This one will cover FCIP basics, and another will cover some of the more advanced FCIP options that you might have to use during the CCIE lab exam.

FCIP is used for extending a Fibre Channel (FC) network over an IP backbone. It encapsulates FC in IP so that SCSI and non-SCSI FC frames can be sent over an IP network. Normally most organizations are not going to do this simply for the sake of extending their FC network (why extend a lossless network over a lossy medium?), but rather for backup or replication jobs that need to occur between storage systems that are across some geographical distance. A typical deployment scenario is shown below:

[Diagram: a typical FCIP deployment, with two SANs connected across an IP network]

Here we have two SANs separated by an IP network. Now, the Continue reading

What Intel’s $300 million diversity pledge really means

As controversy flares over workforce diversity in tech, Intel’s Rosalind Hudnell is working on an ambitious plan to spark change that could forever alter hiring practices at IT companies. She realizes, though, that change has to start from within the company, and that it won’t come overnight. Hudnell, Intel’s chief diversity officer, is responsible for implementing the company’s much-publicized US$300 million initiative to bring more women and under-represented minorities into its workforce by 2020. The challenges are many. The effort comes as an intense debate rages over what’s perceived as the technology industry’s sexist culture. Microsoft’s CEO Satya Nadella, for example, apologized after igniting a firestorm when he said in a public interview that not asking for pay raises is “good karma” for women.

The meaning of Cloud

The term “Cloud” refers to a software development and delivery methodology that consists of decomposing applications into multiple services (a.k.a. “micro-services”) such that each service can be made resilient and scaled horizontally by running multiple instances of each service. “Cloud” also implies a set of methodologies for application delivery (how to host the application) and application management (how to ensure that each component is behaving appropriately). The name, “Cloud”, seems to capture only the delivery piece of the equation, which is the proverbial tip of the iceberg.
An example helps break this jargon down into something a bit more concrete: a simple web application that lets a user add entries to a database table (e.g. “customers”) and perform some simple queries over that table. This is what is known as a CRUD application, from the initials of Create, Read, Update, Delete.
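To make the four CRUD operations concrete, here is a minimal sketch of how they might map onto HTTP calls against a hypothetical customers service (the URL, port, and field names are illustrative assumptions):

# Create: add a new customer record
curl -X POST -H "Content-Type: application/json" \
     -d '{"name": "Alice", "email": "alice@example.com"}' \
     http://localhost:8080/customers

# Read: list all customers, or fetch a single one by id
curl http://localhost:8080/customers
curl http://localhost:8080/customers/1

# Update: modify an existing record
curl -X PUT -H "Content-Type: application/json" \
     -d '{"email": "alice@newdomain.example"}' \
     http://localhost:8080/customers/1

# Delete: remove the record
curl -X DELETE http://localhost:8080/customers/1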
The “classic” version of this application would be a VisualBasic/Access (in the pre-Web days), .NET/SQLServer or Ruby on Rails/MySQL application. The software component is responsible for generating a set of forms/web pages for the user to input their data, executing some validation, and accessing the database. In most application development frameworks (e.g. RoR), this example can be made to Continue reading

Microsoft fixes IE memory problems

Internet Explorer is getting major repairs, as Microsoft has issued 41 patches to fix memory vulnerabilities in its browser. The Internet Explorer patches are part of the company's routine monthly release of security and bug fixes for its software products, called "Patch Tuesday." Microsoft Office and both the desktop and server editions of Windows are also getting fixes in this batch. Overall, Microsoft issued patches to cover 56 different vulnerabilities, which are bundled into nine separate security bulletins. Three of the bulletins are marked as critical, meaning they fix vulnerabilities that could be exploited by malicious attackers without user intervention. System administrators should tend to critical vulnerabilities as quickly as possible. These bulletins cover Internet Explorer and both the server and desktop editions of Windows.

File storage service Rapidshare to shutter in wake of legal woes

After years of legal trouble, the once-popular online file storage and sharing company Rapidshare is closing up shop. In a message posted to its website Tuesday, Rapidshare said it will stop active service on March 31. "We strongly recommend all customers to secure their data. After March 31st, 2015 all accounts will no longer be accessible and will be deleted automatically," the message said. It did not say why it is shutting down. However, legal troubles related to copyright infringement have plagued the company for years.

Google hands out free Drive space for running quick security checklist

Google today said it would give users of its Google Drive cloud storage service an additional 2GB if they ran a three-step security checkup. The offer was in honor of "Safer Internet Day," a project begun in 1999 and co-funded by the European Union. "As our way of saying thanks for completing the checkup by February 17, we'll give you a permanent 2 gigabyte bump in your Google Drive storage plan," wrote Alex Vogenthaler, group product manager of Google Drive, in a blog post Tuesday. Users of Google Apps for Work and Google Apps for Education are not eligible for the extra 2GB.

FCC commish knocks Net neutrality plan, warns of stealthy regulations

The chairman of the U.S. Federal Communications Commission has undersold the amount of intrusive new regulations his net neutrality proposal will bring to the Internet and to broadband providers, a Republican commissioner said Tuesday. The net neutrality proposal from FCC Chairman Tom Wheeler would bring “adverse consequences to entire Internet economy,” Commissioner Ajit Pai said during a press conference. “The imposition of these heavy-handed ... regulations is going to present onerous burdens on everybody, across the entire landscape.” The proposal would allow the FCC to define just and reasonable prices for broadband service and to impose in the future common-carrier telecom regulations, like requiring providers to share their networks with competitors, the commissioner said.

Rolling out Change

We all know that “Change is Hard.” But often we, as engineers, focus on the technical aspects of that change. How do I minimise customer impact while upgrading those routers? How can I migrate customer data safely to the new system? But we can forget about the wider implications of what we’re doing. If we do, we may struggle to get our changes implemented, or see poor take-up of new systems.

Why Can’t I Make That Change?

I was talking to an engineer who had planned a huge configuration management implementation. Everything had been manually configured in the past, but this was hitting scale issues. So he had worked for months on a fully automated process. It was going to be amazing. It would configure everything, across all systems and applications. Standards enforced, app deployments done in a repeatable way, etc. It was going to be a thing of beauty. No-one would ever need to log in to a server again. Total automation.

It was all tested, and was just waiting for approval to put it into production. But for some reason, no-one was willing to give the go-ahead to roll it out. Weeks were dragging by, and things were going Continue reading

Relevance of SDN in Cloud Networking

SDN (Software Defined Networking) is finally becoming clearer. It is not “Still Don’t Know” nor is it a specific overlay controller. Simply put, it is an open and programmable way to build networks for customers looking at utilizing hybrid combinations of public and private cloud access.

We are witnessing a shift from multi-tier oversubscribed legacy enterprise networks to two-tier leaf-spine or single-tier Spline™ cloud networks with east-west traffic patterns scaling across thousands of servers. Arista was the first to introduce this new architectural “leaf-spine” approach for cloud-based networks, and five years later others are still attempting to mimic it. Let’s review some practical examples.

Facebook: Take an important and familiar social networking application, Facebook. Their public information shows that they deploy a memcache architecture, which allowed them to reduce the user access time to half a millisecond by using fewer network tiers, resulting in lower application latency. As we log into Facebook, the single login request triggers thousands of look-ups on databases and memcache servers. Legacy enterprise multi-tiered networks would result in delayed look-ups and would negatively impact the user experience and interest in a significant way.

Amazon: Shopping couldn’t be easier than online on your favorite site. Have you Continue reading

Arista says it can route VXLAN too, just like Cisco

Cisco’s claim that its Nexus 9000 leaf switches have a VXLAN routing advantage over those based on Broadcom Trident II silicon is meeting some resistance. In announcing support for the BGP EVPN control plane for VXLAN on its Nexus 9000 switches, Cisco said its Nexus 9300 leaf switches, equipped with Cisco’s custom ALE ASIC, can route VXLAN overlay traffic, which the company touts as a benefit over Broadcom Trident II-based platforms from competitors.

Scaling Overlay Networks: Distributed Data Plane

“Thou Shalt Have No Chokepoints” is one of those simple scalability rules that are pretty hard to implement in real-life products. In the Distributed Data Plane part of the Scaling Overlay Networks webinar I listed data plane components that can be easily distributed (layer-2 and layer-3 switching), some that are harder to implement but still doable (firewalling), and a few that are close to mission-impossible (NAT and load balancing).

How an outsourcing contract can boost IT service provider performance

IT outsourcing customers are increasingly looking for their service providers not just to cut technology costs or improve process efficiency, but to deliver business results. But getting that kind of business value from IT suppliers has proven to be a challenge. The secret to getting technology providers on board with delivering innovation may actually be the terms of the IT outsourcing deals. “Most IT services buyers seek compliance, not improved supplier performance” from their contracts, says Brad Peterson, partner in the Chicago office of law firm Mayer Brown. “That’s all that’s necessary for most IT services categories. However, IT buyers can create substantially more value by using incentives to deliver innovation, analytics, data security, mobility, cloud and other fast-changing IT services categories.”