This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
While cloud computing has proven to be beneficial for many organizations, IT departments have been slow to trust the cloud for business-critical Microsoft SQL Server workloads. One of their primary concerns is the availability of their SQL Server, because traditional shared-storage, high-availability clustering configurations are not practical or affordable in the cloud.
Amazon Web Services and Microsoft Azure both offer service level agreements that guarantee 99.95% uptime (fewer than 4.38 hours of downtime per year) for IaaS servers. Both SLAs require deployment across two or more AWS Availability Zones or Azure Fault Domains, respectively. Availability Zones and Fault Domains let you run instances in locations that are physically independent of one another, with separate compute, network, storage and power for full redundancy. AWS has two or three Availability Zones per region, and Azure offers up to three Fault Domains per “Availability Set.”
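That 4.38-hour figure falls straight out of the SLA percentage. A quick back-of-the-envelope sketch (generic Python, not tied to either provider’s tooling):

```python
# Convert an uptime SLA percentage into the maximum downtime it allows per year.
HOURS_PER_YEAR = 365 * 24  # 8,760

def max_downtime_hours(sla_percent):
    return HOURS_PER_YEAR * (1 - sla_percent / 100)

print(max_downtime_hours(99.95))  # ~4.38 hours per year
print(max_downtime_hours(99.9))   # ~8.76 hours per year
```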
Innovation is the cornerstone of sustained business success, and given how much innovation relies on technology these days, IT has to play a vital role in making it happen. Even so, Brocade’s 2015 Global CIO Study found that more than half of CIO respondents spent around 1,000 hours a year reacting to unexpected problems such as data loss, network downtime and application access issues. With that much time spent fighting fires, how is the average CIO supposed to find the time to innovate?
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
SSL/TLS encryption is widely used to secure communications to internal and external servers, but can blind security mechanisms by preventing inspection of network traffic, increasing risk. In fact, Gartner predicts that in 2017 more than half of network attacks targeting enterprises will use encrypted traffic to bypass controls.
With attackers preying on the security gaps created by encrypted traffic, let’s examine the five most common network traffic inspection errors made today:
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
Secure Shell (SSH) is a tool for secure computer system management, file transfers and automation in computer and telecommunications systems. The Secure Shell protocol ships standard with every Unix, Linux and Mac system and is also widely used on Windows (Microsoft has announced plans to make it a standard component of Windows). It is also included on practically every router and mobile network base station. In many ways, the connected world as we know it runs on Secure Shell. Its keys are ubiquitously used for automating access over a network, and modern systems could not be cost-effectively managed without it.
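To make the automation point concrete, here is a minimal sketch of key-based SSH access from a script, using the third-party paramiko library purely as an illustration; the article does not name any particular client, and the host, account and key path below are hypothetical.

```python
import paramiko

client = paramiko.SSHClient()
# For brevity only; production automation should pin known host keys instead.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("server.example.com",                          # hypothetical host
               username="deploy",                             # hypothetical account
               key_filename="/home/deploy/.ssh/id_ed25519")   # key-based, no password prompt

stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode())
client.close()
```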
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
Companies are securing more users who are accessing more applications from more places through more devices than ever before, and all this diversity is stretching the current landscape of identity and access management (IAM) into places it was never designed to reach. At the same time, security has never been more critical—or harder to ensure, given today’s outdated and overly complex legacy identity systems. I call this the “n-squared problem”: we’re trying to make too many hard-coded connections to too many sources, each with its own protocols and requirements.
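To see why point-to-point integrations scale so badly, here is a back-of-the-envelope sketch; the counts are hypothetical, and the federation-hub comparison is an illustration rather than anything prescribed above.

```python
# Illustrative only: contrast hard-coded, point-to-point identity integrations
# with routing everything through a single broker. Numbers are made up.
apps = 20              # hypothetical application count
identity_sources = 20  # hypothetical directories, databases, SaaS identity stores

point_to_point = apps * identity_sources  # every app wired to every source: 400
via_hub = apps + identity_sources         # each system wired once to a broker: 40

print(point_to_point, via_hub)
```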
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
Virtualization is a mature technology, but if you don’t have a virtualization wizard on staff, managing the environment can be a challenge. Benefits such as flexibility, scalability and cost savings can quickly give way to security risks, resource waste and infrastructure performance degradation, so it is important to understand common virtual environment problems and how to solve them.
The issues tend to fall into three main areas: virtual machine (VM) sprawl, capacity planning and change management. Here’s a deeper look at the problems and what you can do to address them:
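As a small illustration of the first of those areas, here is a minimal sketch of one way to flag VM sprawl candidates: machines whose average CPU utilization over the sampling window is near zero. The inventory data and threshold are hypothetical.

```python
from statistics import mean

vm_cpu_samples = {
    "web-01":  [35, 42, 38, 40],
    "test-07": [0.3, 0.1, 0.2, 0.4],   # likely forgotten after a test cycle
    "db-02":   [60, 55, 58, 61],
}

SPRAWL_THRESHOLD = 2.0  # percent average CPU, a made-up cutoff

candidates = [name for name, samples in vm_cpu_samples.items()
              if mean(samples) < SPRAWL_THRESHOLD]
print("Review for reclamation:", candidates)  # ['test-07']
```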
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
Artificial intelligence (AI) – when computers behave like humans – is no longer science fiction. Machines are getting smarter and companies across the globe are beginning to realize how they can leverage AI to improve consumer engagement and customer experience.
Gartner research indicates that in a few years 89% of businesses will compete mainly on customer experience. Within five years, consumers will manage 85% of their relationships with an enterprise without interacting with a human, moving toward the “DIY” customer service concept.
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
The cost and scalability benefits of cloud computing are appealing, but cloud applications are complex. This is because they typically have multiple tiers and components that utilize numerous technologies; as a result, applications can end up scattered across a variety of execution environments. To ensure successful cloud application deployment and management, the key is to use application-defined automation tools.
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
In major metropolitan areas and smaller cities alike, governments are adopting software-defined networking (SDN) and network function virtualization (NFV) to deliver the agility and flexibility needed to support adoption of “smart” technologies that enhance the livability, workability and sustainability of their towns.
Today, billions of devices and sensors are being deployed that can automatically collect data on everything from traffic and weather to energy usage, water consumption, carbon dioxide levels and more. Once collected, the data has to be aggregated and transported to stakeholders, where it is stored, organized and analyzed to understand what’s happening and what’s likely to happen in the future.
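As a minimal illustration of that aggregation step, the sketch below rolls up raw readings per metric before they are handed off for storage and analysis; the sensors and values are hypothetical.

```python
from collections import defaultdict
from statistics import mean

readings = [
    {"sensor": "traffic-cam-12", "metric": "vehicles_per_min", "value": 42},
    {"sensor": "air-03",         "metric": "co2_ppm",          "value": 415},
    {"sensor": "air-07",         "metric": "co2_ppm",          "value": 428},
    {"sensor": "meter-9",        "metric": "kwh",              "value": 3.2},
]

by_metric = defaultdict(list)
for r in readings:
    by_metric[r["metric"]].append(r["value"])

summary = {metric: mean(values) for metric, values in by_metric.items()}
print(summary)  # averages per metric, ready to ship to storage and analytics
```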
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
NoSQL database technology, once used only by the likes of Google, Amazon and Facebook, is now being adopted across many industries for crucial business applications, replacing relational database deployments to gain flexibility and scalability. Here are 10 enterprise use cases best addressed by NoSQL:
* Personalization. A personalized experience requires data, and lots of it – demographic, contextual, behavioral and more. The more data available, the more personalized the experience. However, relational databases are overwhelmed by the volume of data required for personalization. In contrast, a distributed NoSQL database can scale elastically to meet the most demanding workloads and build and update visitor profiles on the fly, delivering the low latency required for real-time engagement with your customers.
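As one concrete illustration of an on-the-fly profile update in a document database, here is a minimal sketch using MongoDB’s pymongo driver; the article does not endorse any specific NoSQL product, and the connection details and fields are hypothetical.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")       # hypothetical endpoint
profiles = client.personalization.visitor_profiles

# Upsert: create the profile on first sight, enrich it on every later visit.
profiles.update_one(
    {"_id": "visitor-8675309"},
    {"$set": {"last_seen_page": "/pricing", "segment": "evaluating"},
     "$inc": {"visits": 1}},
    upsert=True,
)
```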
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
If you’re looking to add solid-state drives (SSDs) to your storage environment, you want to avoid under-provisioning to ensure performance and scalability, but you also need to avoid over-provisioning to meet cost goals and prevent unnecessary spending. Workload profiling can help you strike that critical balance.
A recent survey of 115 Global 500 companies, conducted by GatePoint Research and sponsored by Load DynamiX, showed that 65% of storage architects say they are doing some sort of pre-deployment testing before making their investment decision. Alarmingly, only 36% understand their application workload I/O profiles and performance requirements. The rest don’t know what workload profiling is or how it can be used to accurately evaluate vendors against the actual applications that will run on their particular storage infrastructure.
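To make “understanding your I/O profile” concrete, here is a minimal sketch that derives the read/write mix and average transfer size from a trace of I/O operations; the trace records are hypothetical.

```python
ops = [
    {"op": "read",  "bytes": 8192},
    {"op": "read",  "bytes": 8192},
    {"op": "write", "bytes": 65536},
    {"op": "read",  "bytes": 4096},
]

reads = [o for o in ops if o["op"] == "read"]

read_pct = 100 * len(reads) / len(ops)
avg_io_kb = sum(o["bytes"] for o in ops) / len(ops) / 1024

print(f"{read_pct:.0f}% reads, {avg_io_kb:.1f} KB average I/O size")  # 75% reads, 21.0 KB
```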
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
As organizations turn to containers to improve application delivery and agility, the security ramifications of the containers and their contents are coming under increased scrutiny.
Container providers Docker, Red Hat and others are moving aggressively to reassure the marketplace about container security. In August, Docker delivered Docker Content Trust as part of the Docker 1.8 release. It uses digital signatures to verify the publisher and integrity of the images running in Docker users’ software infrastructures. The idea is to protect Docker users from malicious backdoors included in shared application images and other potential security threats.
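As a small illustration, content trust is opt-in via an environment variable on the Docker CLI; with it set, unsigned images are refused. The sketch below assumes the docker binary is installed, and the image tag is only an example.

```python
import os
import subprocess

# DOCKER_CONTENT_TRUST=1 makes the Docker CLI verify image signatures on pull.
env = {**os.environ, "DOCKER_CONTENT_TRUST": "1"}
subprocess.run(["docker", "pull", "alpine:latest"], env=env, check=True)
```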
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
Many organizations are turning to NoSQL for its ability to support Big Data’s volume, variety and velocity, but how do you know which one to choose?
A NoSQL database can be a good fit for many projects, but to keep down development and maintenance costs you need to evaluate each project’s requirements to make sure specialized criteria are addressed. Keep in mind that it is not just a question of being able to develop the specified application; it also means being able to easily manage and support applications with the potential for dramatic growth in scope and size in production for many years. One of my customers doubled the size of their business 12 times in less than 4 years.
If you’re thinking about migrating a highly sensitive application to the cloud, consider using HIPAA requirements as a way to vet potential providers.
Federal law requires organizations dealing with private health information to adhere to strict security guidelines defined by the Health Insurance Portability and Accountability Act (HIPAA). Because those requirements amount to an excellent risk-management framework, non-healthcare companies can also use a HIPAA-compliant approach to protect sensitive information such as credit card numbers and private customer data.
HIPAA compliance requires businesses to “maintain reasonable and appropriate administrative, technical, and physical safeguards for protecting e-PHI (Electronic Personal Health Information),” but this could apply to any dataset. At a high level, here’s what you get with HIPAA compliance:
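As one concrete example of a “technical safeguard” in that sense, the sketch below encrypts a sensitive record at rest using the third-party cryptography package; it is an illustration only, not a compliance recipe, and key management is deliberately out of scope.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, store and rotate keys via a KMS
cipher = Fernet(key)

record = b'{"patient_id": "12345", "diagnosis": "..."}'   # hypothetical e-PHI-like record
token = cipher.encrypt(record)       # ciphertext that is safe to persist
print(cipher.decrypt(token))         # original record recovered with the key
```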
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
From Target to Ashley Madison, we’ve witnessed how interconnections with third-party vendors can turn an elastic environment -- where devices, services and apps are routinely engaging and disengaging -- into a precarious space filled with backdoors for a hacker to infiltrate an enterprise’s network. Here are the top five threats related to working with third parties:
Threat #1 - Shared Credentials. This is one of the most dangerous authentication practices we encounter in large organizations. Imagine a unique service, not used very frequently, that requires some form of credential-based authentication. Over time, the users of this service change, and for convenience a single credential is often shared among them. The service is now accessed from multiple locations, on different devices and for different purposes. It takes just one clumsy user falling victim to {fill in the credential harvesting technique of your choice} to compromise the service and every user of it that follows (a per-user alternative is sketched below).
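A minimal sketch of that per-user alternative: each user of the service gets a personal, short-lived token, so access can be revoked and audited individually. The names, lifetime and in-memory store are all hypothetical.

```python
import secrets
import time

TOKEN_TTL = 8 * 3600  # seconds; a made-up session lifetime

issued = {}  # token -> (user, expiry); a real system would use a durable store

def issue_token(user):
    token = secrets.token_urlsafe(32)
    issued[token] = (user, time.time() + TOKEN_TTL)
    return token

def authenticate(token):
    user, expiry = issued.get(token, (None, 0))
    return user if time.time() < expiry else None

alice_token = issue_token("alice@vendor-a.example")
print(authenticate(alice_token))  # 'alice@vendor-a.example' -- traceable to one person
```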
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
When the Amazon Web Services platform failed recently, some of the internet’s biggest sites -- including Netflix and Tinder -- suffered extended outages. The culprit? AWS’s NoSQL database DynamoDB, where elevated error rates cascaded into errors and latency across more than 20 AWS services.
These and other sites wouldn’t have had a problem if they had used hybrid hosting, the best way to architect modern apps. Hybrid hosting lets businesses set up their databases on dedicated servers, put their front-end web apps in the cloud, then tie everything together with a single click.
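As a rough sketch of the wiring that description implies, a cloud-hosted front end can read its database endpoint from configuration and connect to a database running on dedicated hardware. The host names, environment variables and the psycopg2 driver below are illustrative choices, not anything prescribed by the article.

```python
import os
import psycopg2

conn = psycopg2.connect(
    host=os.environ.get("DB_HOST", "db.dedicated.example.net"),  # dedicated server, hypothetical
    port=int(os.environ.get("DB_PORT", "5432")),
    dbname=os.environ.get("DB_NAME", "app"),
    user=os.environ.get("DB_USER", "app"),
    password=os.environ.get("DB_PASSWORD", ""),
)
with conn.cursor() as cur:
    cur.execute("SELECT 1")   # trivial round trip from cloud front end to dedicated database
    print(cur.fetchone())
```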
Stipends are a way for businesses to reimburse employees for a portion of their wireless costs and, if implemented properly, address these common issues: cost, eligibility, control and taxes. Here’s how:
* Costs. When businesses talk about costs, they generally are referring to either time or money. And companies opting to use expense reports for stipends will find the task occupies a good bit of both. It’s time-consuming for accounting departments to sort through individual expense reports and issue payments only after an employee’s usage has been verified. It’s no surprise, then, that an Aberdeen Group study suggests each expense report costs $18 to process. Compounding those costs, companies opting for this method will issue hundreds or even thousands of payments each month, so the benefits that attend stipends can be quickly outweighed.
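The arithmetic behind that concern is straightforward; the headcount below is hypothetical, while the per-report cost comes from the study cited above.

```python
COST_PER_EXPENSE_REPORT = 18   # dollars, per the Aberdeen Group figure above
employees_on_stipends = 1000   # hypothetical headcount

monthly = employees_on_stipends * COST_PER_EXPENSE_REPORT
print(monthly, monthly * 12)   # $18,000 per month, $216,000 per year just to process reports
```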
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
Over the past half decade, the big data flame has spread like wildfire throughout the enterprise, and the IT department has not been immune. The promise of data-driven initiatives capable of transforming IT from a support function to a profit center has sparked enormous interest.
After all, datacenter scale, complexity and dynamism have rapidly outstripped the ability of siloed, infrastructure-focused IT operations management to keep pace. IT big-data analytics has emerged as the new IT operations-management approach of choice, promising to make IT smarter and leaner. Nearly all next-generation operational intelligence products incorporate data analytics to some degree. However, as many enterprises are learning the hard way, big data doesn’t always result in success.
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
Vulnerability risk management has re-introduced itself as a top challenge – and priority – for even the most savvy IT organizations. Despite the best detection technologies, organizations continue to get compromised on a daily basis. Vulnerability scanning provides visibility into potential land mines across the network, but often just results in data tracked in spreadsheets and independent remediation teams scrambling in different directions.
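As a minimal sketch of turning raw scan output into a ranked worklist rather than a spreadsheet, the snippet below weights each finding by severity and by how critical the affected asset is; the findings, CVE identifiers and weights are hypothetical.

```python
findings = [
    {"host": "hr-db-01",   "cve": "CVE-2024-0001", "cvss": 9.8, "asset_weight": 1.0},
    {"host": "kiosk-17",   "cve": "CVE-2024-0002", "cvss": 9.8, "asset_weight": 0.2},
    {"host": "web-edge-3", "cve": "CVE-2024-0003", "cvss": 6.5, "asset_weight": 0.9},
]

# Rank by a simple risk score: severity scaled by asset criticality.
for f in sorted(findings, key=lambda f: f["cvss"] * f["asset_weight"], reverse=True):
    print(f'{f["host"]:<12} {f["cve"]}  risk={f["cvss"] * f["asset_weight"]:.1f}')
```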
Firewalls are an essential part of network security, yet Gartner says 95% of all firewall breaches are caused by misconfiguration. In my work I come across many firewall configuration mistakes, most of which are easily avoidable. Here are five simple steps that can help you optimize your settings:
* Set specific policy configurations with minimum privilege. Firewalls are often installed with broad filtering policies, allowing traffic from any source to any destination. That’s typically because the network operations team doesn’t know exactly what is needed at the outset, so it starts with a broad rule and plans to work backwards later. In reality, due to time pressures or simply because it isn’t treated as a priority, the team never gets around to tightening the firewall policies, leaving your network in this perpetually exposed state (the sketch below contrasts a broad policy with a least-privilege one).
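Here is a minimal illustration of the difference, with rules modeled as plain data; the addresses, ports and matching logic are illustrative only, not any particular firewall’s syntax.

```python
from ipaddress import ip_address, ip_network

broad_policy = [
    {"src": "0.0.0.0/0", "dst": "0.0.0.0/0", "port": None, "action": "allow"},  # the risky default
]

least_privilege_policy = [
    {"src": "10.20.0.0/24", "dst": "10.30.5.10/32", "port": 1433, "action": "allow"},  # app tier -> SQL only
    {"src": "0.0.0.0/0",    "dst": "0.0.0.0/0",     "port": None, "action": "deny"},   # explicit default deny
]

def evaluate(policy, src, dst, port):
    for rule in policy:
        if (ip_address(src) in ip_network(rule["src"])
                and ip_address(dst) in ip_network(rule["dst"])
                and rule["port"] in (None, port)):
            return rule["action"]
    return "deny"

print(evaluate(broad_policy, "203.0.113.7", "10.30.5.10", 3389))            # allow -- far too permissive
print(evaluate(least_privilege_policy, "203.0.113.7", "10.30.5.10", 3389))  # deny
print(evaluate(least_privilege_policy, "10.20.0.44", "10.30.5.10", 1433))   # allow -- only what is needed
```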