Most financial services firms, including banks and insurance companies, are engaged in big data projects to increase the pace of innovation and uncover game-changing business outcomes. The pressing challenge now is how to drive more continuous value and unearth opportunities more rapidly.
No matter where you might be in your big data journey, the following three-step approach to integrating big data into an analytics strategy can lead to success:
To drive continuous and transformational improvements through big data-driven analytics projects, business units – IT, marketing, risk, compliance or finance, for example – should agree on and outline a mutually beneficial business objective, for instance driving a better customer experience or improving customer value management. While developing the common objective, financial services firms should also determine the aligned and desired outcomes, such as decreasing fraud and offering more personalized services to customers in real time.
We just installed an MX in the lab for a customer type-approval test (TAT) and none of the cards came online.
The output of “show chassis hardware” showed that there were FPCs installed, but not the MICs that were in them:
[email protected]> show chassis hardware
Hardware inventory:
Item             Version  Part number  Serial number     Description
Chassis                                JN1249BDBAFA      MX960
Midplane         REV 04   750-047849   ACRD2400          Enhanced MX960 Backplane
FPM Board        REV 03   710-014974   CADE9287          Front Panel Display
PDM              Rev 03   740-013110   QCS181650BM       Power Distribution Module
PEM 0            Rev 11   740-027760   QCS1806N0MP       PS 4.1kW; 200-240V AC in
PEM 1            Rev 11   740-027760   QCS1806N0SK       PS 4.1kW; 200-240V AC in
PEM 2            Rev 11   740-027760   QCS1806N07S       PS 4.1kW; 200-240V AC in
PEM 3            Rev 11   740-027760   QCS1812N02D       PS 4.1kW; 200-240V AC in
Routing Engine 0 REV 01   740-051822   9013061577        RE-S-1800x4
Routing Engine 1 REV 01   740-051822   9013056762        RE-S-1800x4
CB 0             REV 01   750-055976   CACX9090          Enhanced MX SCB 2
CB 1             REV 01   750-055976   CACZ4497          Enhanced MX SCB 2
CB 2             REV 01   750-055976   CADA1721          Enhanced MX SCB 2
FPC 0            REV 05   750-044444   CAAM5562          MPCE Type 2 3D P
  CPU
FPC 1            REV 35   750-028467   CAAP9738          MPC 3D 16x
Continue reading
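For comparison, when a chassis does detect its MICs, each MPC entry is followed by CPU, MIC and PIC sub-lines, roughly like this (illustrative placeholder values, not output from this box):

FPC 0            REV 05   750-044444   CAAM5562          MPCE Type 2 3D P
  CPU            REV xx   711-xxxxxx   <serial>          MPC PMB
  MIC 0          REV xx   750-xxxxxx   <serial>          3D 20x 1GE(LAN) SFP
    PIC 0                 BUILTIN      BUILTIN           10x 1GE(LAN) SFP
    PIC 1                 BUILTIN      BUILTIN           10x 1GE(LAN) SFP

A “show chassis fpc pic-status” is the usual follow-up to see whether the PICs ever come online.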
Amazon Web Services was (AFAIK) one of the first public cloud platforms to introduce availability zones – islands of infrastructure isolated enough from each other to stop the propagation of a failure or outage across their boundaries.
Not surprisingly, multiple availability zones shouldn’t rely on a central controller (as Amazon found out a few years back), and only a few SDN controller vendors are flexible enough to meet this requirement. For more details, watch the free Availability Zones video on my web site (part of the Scaling Overlay Virtual Networking webinar).
During my latest Kubernetes lab rebuild, I noticed some significant differences in how some functions of the Kubernetes cluster work. I’ve done my best to go back and update the previous blog posts with notes and pointers to make this clear. However – going forward, please consider my GitHub Salt repo as the ‘source of truth’ for the Kubernetes configuration files and the current build process. I’ll be updating it regularly as I continue to optimize the Salt config and add onto the Kubernetes configuration. Here are a couple of big hitters I want to call out as differences between my initial build and this one…
cAdvisor is now part of kubelet
That’s right! We no longer need to have a separate manifest and container for cAdvisor. We can see that any host running the kubelet process is exposing port 4194…
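Since cAdvisor now rides along inside the kubelet, a quick sanity check is to query that port directly from anywhere that can reach a node (the node IP below is hypothetical; 4194 was the kubelet’s default cadvisor-port at the time):

# Host-level machine info from the embedded cAdvisor REST API
curl http://10.20.30.61:4194/api/v1.3/machine

You can also point a browser at http://10.20.30.61:4194/containers/ for the familiar cAdvisor web UI.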
kube-proxy and kubelet no longer use etcd
In my first build, both the kubelet and kube-proxy services relied on talking directly to etcd to interact with the cluster. The associated configs looked like…
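As a rough sketch of the old pattern (hypothetical paths and addresses, not my actual unit files), both daemons carried an etcd flag:

# kubelet.service (old style, sketch)
ExecStart=/usr/bin/kubelet \
  --address=0.0.0.0 \
  --etcd_servers=http://127.0.0.1:4001

# kube-proxy.service (old style, sketch)
ExecStart=/usr/bin/kube-proxy \
  --etcd_servers=http://127.0.0.1:4001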
The newest systemd service configuration looks like this…
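In broad strokes (again a sketch with a hypothetical master address, not the exact files from the repo), the daemons now point at the API server instead:

# kubelet.service (new style, sketch)
ExecStart=/usr/bin/kubelet \
  --address=0.0.0.0 \
  --api_servers=http://k8s-master:8080

# kube-proxy.service (new style, sketch)
ExecStart=/usr/bin/kube-proxy \
  --master=http://k8s-master:8080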
So what’s happened here is that the cluster communication has moved to… Continue reading
So what happens when you submit a PR, but then you want to change it? After my proposed changes from my last post were reviewed, it was decided that I should take a different approach. The changes I needed to make weren’t substantial, and were in the same spirit as the initial PR, so I decided that updating made more sense than starting over. All you have to do is update the code and push another commit to the branch. Let’s assume we’ve made the changes we want to our code again. Let’s verify that Git sees these updates…
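That check is an ordinary git status, which lists the modified files as not yet staged:

git status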
Yep – Looks good so far. Now we need to add these files to the commit just like last time…
git add .
So now that the files are ready to be committed, let’s go ahead and make a commit…
git commit -m "Updated the ENV variables and the way they are set"
Perfect – So now let’s check and see if our remote (GitHub) is still defined…
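Listing the remotes is a one-liner (‘origin’ here matches the remote name used in the push below):

git remote -v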
All looking good – So now all we need to do is push the commit…
git push -u origin fluentd-elasticsearch-kibanafix
Let’s go check out our PR… Continue reading