If you think about a system like Procella that combines transactional and analytic workloads on top of a cloud-native architecture, then add extensions to SQL for streaming, dataflow-based materialized views (see e.g. Naiad, Noria, Multiverses, and also check out what Materialize are doing here), and the ability to use SQL interfaces to query over semi-structured and unstructured data, a picture begins to emerge of a unifying large-scale data platform with a SQL query engine on top — a one-stop shop that addresses all of the data needs of an organisation. Except there's one glaring omission from that list: handling all of the machine learning use cases.
Machine learning inside a relational database has been done before, most notably in the form of MADlib, which was integrated into Greenplum during my time at Pivotal. The Apache MADlib project is still going strong, and the recent (July 2019) 1.16 release even includes some support for deep learning.
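In-database analytics in the MADlib style pushes model computation down to where the data lives, rather than exporting the data to a separate ML system. As a toy illustration of the idea only (using Python's stdlib sqlite3, not MADlib's actual API), a simple linear regression can be fitted entirely with SQL aggregates via its closed-form solution:

```python
import sqlite3

# Toy illustration of in-database ML: fit y = a*x + b inside the
# database using only SQL aggregates. Data is y = 2x + 1, no noise.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE points (x REAL, y REAL)")
conn.executemany("INSERT INTO points VALUES (?, ?)",
                 [(x, 2.0 * x + 1.0) for x in range(10)])

# Closed-form simple linear regression, pushed down to the database:
#   slope = (n*Sum(xy) - Sum(x)*Sum(y)) / (n*Sum(x^2) - Sum(x)^2)
#   intercept = mean(y) - slope * mean(x)
slope, intercept = conn.execute("""
    SELECT (COUNT(*) * SUM(x * y) - SUM(x) * SUM(y)) /
           (COUNT(*) * SUM(x * x) - SUM(x) * SUM(x))            AS slope,
           AVG(y) - (COUNT(*) * SUM(x * y) - SUM(x) * SUM(y)) /
                    (COUNT(*) * SUM(x * x) - SUM(x) * SUM(x)) * AVG(x)
                                                                AS intercept
    FROM points
""").fetchone()

print(f"slope={slope:.2f} intercept={intercept:.2f}")  # slope=2.00 intercept=1.00
```

MADlib itself exposes far richer primitives (e.g. trained models stored as tables), but the principle is the same: the model is a query result, and the data never leaves the database.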
To make that vision of a one-stop shop for all of an organisation’s data …
In a previous tutorial we successfully installed ClearOS on a QEMU VM in gateway mode, and at the end of that tutorial we installed several apps from the ClearOS Marketplace. These apps enhance the gateway's functionality, but we have not tested them yet. This tutorial therefore goes further: we are going to test some of the services offered by the ClearOS apps. To do so, we will connect the ClearOS QEMU appliance to a GNS3 topology.
Our ClearOS QEMU instance is configured with two guest network cards (Picture 1). The first guest interface, ens3, is assigned the LAN role and is configured with the IP address 192.168.1.254/24. This is the address the ClearOS web server listens on, on port 81; all ClearOS management will be done in a web browser via the URL https://192.168.1.254:81.
Picture 1 - Network Interfaces Configuration During ClearOS Installation
The second guest interface, ens4, is assigned the External role, and its IP address is obtained from a DHCP server. The DHCP server runs on a SOHO router with the IP address 172.17.100.1/16 (Picture 2).
Picture 2 - Network Topology
GNS3 itself connects the second guest interface ens4 of the ClearOS gateway …
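Before wiring the appliance into GNS3, the addressing plan described above can be sanity-checked with a few lines of Python (values taken from the tutorial: LAN ens3 at 192.168.1.254/24, the SOHO router at 172.17.100.1/16 on the external side):

```python
import ipaddress

# Addresses from the tutorial's topology.
lan_if = ipaddress.ip_interface("192.168.1.254/24")       # ens3, LAN role
soho_router = ipaddress.ip_interface("172.17.100.1/16")   # upstream DHCP/router

# The two sides of the gateway must not overlap, or routing breaks.
assert not lan_if.network.overlaps(soho_router.network)

# Any LAN client in 192.168.1.0/24 should use ens3's address as its gateway.
client = ipaddress.ip_address("192.168.1.10")
print(client in lan_if.network)   # True
print(lan_if.network)             # 192.168.1.0/24
print(soho_router.network)        # 172.17.0.0/16
```

This kind of check is trivial here, but it becomes worthwhile once the GNS3 topology grows more subnets than you can hold in your head.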
There’s little question that software-defined wide-area networks (SD-WANs) have taken off, as companies look for increased network resiliency and control. But there’s still significant confusion about SD-WAN, including some benefits that are more myth than reality. In this post, we’ll explore three common misconceptions that surround SD-WAN, starting with what is probably the most important one.
Misconception #1: SD-WAN will replace services such as MPLS
SD-WAN doesn’t necessarily replace any existing network service, be it MPLS, broadband Internet, or anything else. In fact, it requires some kind of underlying network service to work at all. …
IBM has rolled out a new generation of mainframes – the z15 – that not only bolsters the speed and power of the Big Iron but promises to integrate hybrid cloud, data privacy and security controls for modern workloads. On the hardware side, the z15 systems ramp up performance and efficiency: IBM claims 14 percent more performance per core, 25 percent more system capacity, 25 percent more memory, and 20 percent more I/O connectivity than the previous iteration, the z14.
IBM also says the system can save customers 50 percent of costs over operating x86-based servers and use 40 percent less power than a comparable x86 server farm. And the z15 has the capacity to handle scalable environments, such as supporting 2.4 million Docker containers on a single system. …
Only one more week until AnsibleFest 2019 comes to Atlanta! We talked with Track Lead Sean Cavanaugh to learn more about the Technical Deep Dives track and the sessions within it.
Who is this track best for?
You've written playbooks. You've automated deployments. But you want to go deeper: learn new ways to use Ansible you haven't thought of before, extend Ansible with new functionality, dig deep into new use cases. Then Technical Deep Dives is for you. This track is best suited for someone with existing Ansible knowledge and experience who already knows the nomenclature. It is best for engineers who want to take their automation journey to the next level. The track includes multiple talks from Ansible Automation developers, and it is your chance to ask them direct questions or provide feedback.
What topics will this track cover?
This track is about automation proficiency. Talks range from development and testing of modules and content to building and operationalizing automation to scale for your enterprise. Think about best practices, but then use those takeaways to leverage automation for your entire organization.
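To give a flavour of what "development and testing of modules" involves: at its simplest, an Ansible module is just a program that receives its task arguments and prints a JSON result. A minimal sketch of a WANT_JSON-style module follows (the module's name and behaviour here are entirely hypothetical, not taken from any AnsibleFest talk — Ansible writes the task arguments to a JSON file, passes its path as the first argument, and expects a JSON result dict on stdout):

```python
#!/usr/bin/env python3
"""Hypothetical minimal WANT_JSON-style Ansible module sketch."""
import json
import sys


def main(args_path):
    # Ansible hands the module a path to a JSON file of task arguments.
    with open(args_path) as f:
        args = json.load(f)
    name = args.get("name", "world")
    # A real module would change system state here and set changed=True
    # only when it actually modified something.
    result = {"changed": False, "msg": f"Hello, {name}!"}
    print(json.dumps(result))
    return result


if __name__ == "__main__" and len(sys.argv) > 1:
    main(sys.argv[1])
```

Real Python modules normally build on `AnsibleModule` from `ansible.module_utils.basic` for argument validation and check-mode support; the talks in this track cover those patterns, plus how to test them.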
A quiet revolution has been happening in storage, and for once it has not been led by the decades-old companies catering exclusively to on-prem deployments. …
We sat down recently with InterSystems, our partner and customer, to talk about how they deliver an enterprise database at scale to their customers. InterSystems' software powers mission-critical applications at hospitals, banks, government agencies and other organizations.
We spoke with Joe Carroll, Product Specialist, and Todd Winey, Director of Partner Programs at InterSystems about how containerization and Docker are helping transform their business.
Here’s what they told us. You can also catch the highlights in this two-minute video:
On InterSystems and Enterprise Databases…
Joe Carroll: InterSystems is a 41-year-old database and data platform company. We’ve been in data storage for a very long time, and our customers tend to be traditional enterprises: healthcare, finance, shipping and logistics, as well as government agencies. Anywhere there’s mission-critical data, we tend to be around. Our customers have really important systems that impact people’s lives, and the mission-critical nature of that data characterizes who our customers are and who we are.
On Digital Transformation in Established Industries…
Todd Winey: Many of those organizations and industries have traditionally been seen as laggards in terms of technology adoption, so the speed with which they’re moving …
The Partnership for Advanced Computing in Europe (PRACE) has selected five new small and medium-sized enterprises (SMEs) to receive access to supercomputers and HPC expertise under its SHAPE initiative. …
I’m excited to announce that Joseph Lorenzo Hall will join us as our Senior Vice President for a Strong Internet. He will start in October and be based in our Reston, VA, office.
Many of you may know Joe from his work at the Center for Democracy and Technology, where he has been Chief Technologist for about six years. He has a unique ability to bring together policy and technical issues, particularly but not only with respect to security. He’s the Vice-Chair of the Board of the California Voter Foundation and a Board member of the Verified Voting Foundation. He received his PhD in Information Systems from UC Berkeley in 2008. A former astrophysicist, he has been working on a monograph about sand clocks, which you may know by the term “hourglass”. I am not kidding even a little when I say you should ask about it, because you will be fascinated. He brings additional strength to our already great group of people who work to make the Internet stronger.
Microsoft has introduced a new virtual WAN as a competitive differentiator and is getting enough traction that AWS and Google may follow. At present, Microsoft is the only company to offer a virtual WAN of this kind. This made me curious to discover the highs and lows of the technology, so I sat down with Sorell Slaymaker, Principal Consulting Analyst at TechVision Research, to discuss it. The following is a summary of our discussion.
But before we proceed, let’s gain some understanding of cloud connectivity. Cloud connectivity has evolved over time. When the cloud was introduced about a decade ago, an enterprise would typically connect directly to what's known as a cloud service provider (CSP). Over the last 10 years, however, many providers like Equinix have started to offer carrier-neutral colocations, creating the opportunity to meet a variety of cloud companies in a single carrier-neutral colocation. On the other hand, cloud connectivity has certain limitations as well. …