Key Points:
Reach out to your Customer Success Manager to gain more information on how they can accelerate your business.
Hi there. My name is Jake Jones and I’m a Customer Success Manager at Cloudflare covering the Middle East and Africa. When I look at what success means to me, it’s becoming a trusted advisor for my customers by taking a genuine interest in their priorities and helping them reach desired goals. I’ve learnt that successful partnerships are a byproduct of successful relationship building. Every Continue reading
Over the past ten years, the world has seen the rise of major social media websites such as MySpace, Twitter, and Facebook, each of which has created new ways for people to interact and connect. Facebook is the largest of these: today it has more than a billion users, and as more children grow up and join the platform, that number will only increase. Facebook is used for both personal and business communication, and it has brought many advantages in terms of sharing ideas, increasing connectivity, and learning online.
But as time passes, some studies have linked online social networking with mental health disorders, including low self-esteem, anxiety, depression, and many others. Because social media is a product of the 21st century, many questions about its impact on users' mental health have not yet been answered. And because these online services reach such a large share of the general population, any confirmed connection between these disorders and these social media platforms could become a serious problem.
The modern application is dynamic and highly adaptive to changes in demand. It lives across multiple clusters and clouds, and it is highly distributed, with hundreds of microservices serving the requirements of rapid feature releases, high resiliency, and on-demand scalability. In such a world, we simply cannot afford to continue to rely solely on the network architectures of the last decade.
Modern applications need a Modern Network, one that simplifies operations, reduces IT overhead, and prioritizes user needs, so that organizations can empower users with fast, reliable, and secure application access wherever and whenever they do business, regardless of the underlying infrastructure or connectivity. This requires adopting the public cloud, or even multiple public clouds, as an extension of on-premises infrastructure. What enterprises need is a common, multi-dimensional framework that provides availability, resiliency, and security for modern applications, with the ability to abstract connectivity, identity, and policy via declarative intents. These dimensions of control are paramount for modern applications: improving the visibility and control of assets that are ephemeral in nature and not directly under the Continue reading
Digital transformation has changed the way applications are deployed and consumed. The end-user to application journey has become increasingly complex and is a key focus for the Modern Network. End-users are more distributed, and applications run on heterogeneous infrastructure, often delivered from on-prem data centers, IaaS, SaaS, and public cloud locations. On average, enterprises use hundreds of applications. The number of end-user and IoT devices has also increased exponentially, ranging from infusion pumps in hospitals to point-of-sale systems in retail. These devices access applications from manufacturing floors, carpeted offices, and homes, or while users are on the move. As more devices and applications are enabled, the network increases in both complexity and value to the enterprise.
What has become increasingly clear is the need for advanced self-healing solutions that compensate for this complexity by helping IT teams shift to a proactive mode of operating a network. Several tools exist that provide domain or service-specific insights, but it is left to the IT teams to make sense of the volumes of data generated by these fragmented solutions to detect issues and perform root cause analysis. The dynamic nature of the network, device density, and the volume of data and Continue reading
Enterprises are growing increasingly dependent on modern distributed applications to innovate and respond quickly to new market challenges. As applications grow in significance, the end-user experience of the application has become a key differentiator for most businesses. Understanding the application performance that end-users actually experience, optimizing the infrastructure, and quickly identifying the source of any issues has become critical.
The Modern Network framework puts the end-user experience at the forefront. It helps our customers provide the public cloud experience on-premises, with an on-demand network that enforces secure connectivity and service objectives across on-premises and cloud environments. As applications become more distributed, the increased application resiliency and efficiency often comes at the cost of increased contention for shared resources. The dynamic nature of the network, device density, and the volume of data and transactions generated make this even more challenging. Managing network complexity and simplifying network operations in such environments requires a well-architected network with support for modern cloud concepts such as availability zones that provide fault tolerance. Similarly, effective network-level fault isolation requires the ability to create self-contained fault domains that facilitate network resiliency, disaster recovery and avoidance, and end-to-end root-cause analysis throughout the Continue reading
Achieving 100 Gbps intrusion prevention on a single server, Zhao et al., OSDI’20
Papers-we-love is hosting a mini-event this Wednesday (18th) where I’ll be leading a panel discussion including one of the authors of today’s paper choice: Justine Sherry. Please do join us if you can.
We always want more! This stems from a combination of the Jevons paradox and the interconnectedness of systems – doing more in one area often leads to a need for more elsewhere too. At the end of the day, there are three basic ways we can increase capacity:
Options 1 and 2 are of course the ‘scale out’ options, whereas option 3 is ‘scale up’. With more nodes and more coordination comes more complexity, both in design and operation. So while scale out has seen the majority of attention in the cloud era, it’s good to remind ourselves periodically just what we really can do on a single Continue reading
Juniper’s official documentation on ZTP explains how to configure the ISC DHCP Server to automatically upgrade and configure a Juniper device on first boot. However, the proposed configuration could be a bit more elegant. This note explains how.
TL;DR: Do not redefine option 43. Instead, specify the vendor option space to use to encode parameters with vendor-option-space.
When booting for the first time, a Juniper device requests its IP address through a DHCP discover message, then requests additional parameters for autoconfiguration through a DHCP request message:
Dynamic Host Configuration Protocol (Request)
    Message type: Boot Request (1)
    Hardware type: Ethernet (0x01)
    Hardware address length: 6
    Hops: 0
    Transaction ID: 0x44e3a7c9
    Seconds elapsed: 0
    Bootp flags: 0x8000, Broadcast flag (Broadcast)
    Client IP address: 0.0.0.0
    Your (client) IP address: 0.0.0.0
    Next server IP address: 0.0.0.0
    Relay agent IP address: 0.0.0.0
    Client MAC address: 02:00:00:00:00:01 (02:00:00:00:00:01)
    Client hardware address padding: 00000000000000000000
    Server host name not given
    Boot file name not given
    Magic cookie: DHCP
    Option: (54) DHCP Server Identifier (10.0.2.2)
    Option: (55) Parameter Request List
        Length: 14
        Parameter Request List Item: (3) Router
        Parameter Request List Item: (51) IP Continue reading
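As a rough illustration of the approach, here is a minimal ISC DHCP configuration sketch using vendor-option-space rather than redefining option 43. The sub-option names and codes follow Juniper's documented ZTP vendor options; the subnet, addresses, and file names are illustrative placeholders, not values from the original post:

```
# Declare a vendor option space for Juniper ZTP parameters
option space juniper;
option juniper.image-file-name   code 0 = text;
option juniper.config-file-name  code 1 = text;
option juniper.image-file-type   code 2 = text;
option juniper.transfer-mode     code 3 = text;

subnet 10.0.2.0 netmask 255.255.255.0 {
  range 10.0.2.10 10.0.2.100;
  option tftp-server-name "10.0.2.2";

  # Ask dhcpd to encode the options below into option 43 for us,
  # instead of hand-crafting the option 43 payload
  vendor-option-space juniper;
  option juniper.transfer-mode    "tftp";
  option juniper.config-file-name "juniper.conf";
}
```

With vendor-option-space, dhcpd serializes the juniper.* sub-options into the option 43 payload itself, which is less error-prone than encoding the TLV bytes by hand.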
From “AI is wrestling with a replication crisis” (HT: Drew Conry-Murray):
Last month Nature published a damning response written by 31 scientists to a study from Google Health that had appeared in the journal earlier this year. Google was describing successful trials of an AI that looked for signs of breast cancer in medical images. But according to its critics, the Google team provided so little information about its code and how it was tested that the study amounted to nothing more than a promotion of proprietary tech (emphasis mine).
No surprise there, we’ve seen it before (not to mention the “look how awesome we are, but we can’t tell you the details” Jupiter Rising article).