Another free and open monospaced font for code development, this time from Microsoft. A key differentiator is the inclusion of ligatures for programming symbols (see below). Ligature support is rare among text editors and very rare for TTF-encoded fonts; it's more common to see OTF ligatures supported. There is also no italics support yet. Creating fonts […]
For the last twenty years, network engineers have created networks from composable logical constructs, which result in a network of some structure. We call these constructs “OSPF” and “MPLS”, but they all inter-work to some degree to give us a desired outcome. Network vendors have contributed to this composability and network engineers have come to expect it by default. It gives us real power from both a design and an implementation perspective, but it’s also opinionated. For instance, spanning-tree has node-level opinions on how a node should participate in a spanning-tree and thus how the tree forms, but the resulting tree might not be the one you desire without some tweaks to the tie-breaker conditions for root-bridge election.
Moving to the automated world primarily means carrying your existing understanding forward, adding a sprinkle of APIs to gain access to those features programmatically and then running a workflow, task or business process engine to compose a graph of those features to build your desired networks in a deterministic way.
This is where things get interesting in my opinion. Take Cisco’s ACI platform. It’s closed and proprietary in the sense of you can’t change the way it works internally. You’re lumped with a Continue reading
Privacy statements are both a point of contact to inform users about their data and a way to show governments that an organization is committed to following regulations. On September 17, the Internet Society’s Online Trust Alliance (OTA) released “Are Organizations Ready for New Privacy Regulations?” The report, using data collected from the 2018 Online Trust Audit, analyzes the privacy statements of 1,200 organizations using 29 variables and then maps them to overarching principles from three privacy laws around the world: the General Data Protection Regulation (GDPR) in the European Union, the California Consumer Privacy Act (CCPA) in the United States, and the Personal Information Protection and Electronic Documents Act (PIPEDA) in Canada.
In many cases, organizations lack key concepts covering data sharing in their statements. Just 1% of organizations in our Audit disclose the types of third parties they share data with. This is a common requirement across privacy legislation, and it is not as onerous as having to list all of the organizations; simply listing broad categories like “payment vendors” would suffice.
Data retention is another area where many organizations are lacking. Just 2% had language about how long and why they would retain data. Many organizations have Continue reading
About the only thing harder than building a data center is dismantling one, because the potential for disruption of business is much greater when shutting down a data center than constructing one. The recent decommissioning of the Titan supercomputer at the Oak Ridge National Laboratory (ORNL) reveals just how complicated the process can be. More than 40 people were involved with the project, including staff from ORNL, supercomputer manufacturer Cray, and external subcontractors. Electricians were required to safely shut down the 9-megawatt-capacity system, and Cray staff was on hand to disassemble and recycle Titan’s electronics and its metal components and cabinets. A separate crew handled the cooling system. In the end, 350 tons of equipment and 10,800 pounds of refrigerant were removed from the site. To read this article in full, please click here
After identifying some of the challenges every network solution must address (part 1, part 2, part 3) we tried to tackle an interesting question: “how do you implement this whole spaghetti mess in a somewhat-reliable and structured way?”
The Roman Empire had an answer more than 2000 years ago: divide-and-conquer (aka “eating the elephant one bite at a time”). These days we call it layering and abstractions.
In the Need for Network Layers video I listed all the challenges we have to address, and then described how you could group them in meaningful modules (called networking layers).
We’ve been covering papers from VLDB 2019 for the last three weeks, and next week it will be time to mix things up again. There were so many interesting papers at the conference this year though that I haven’t been able to cover nearly as many as I would like. So today’s post is a short summary of things that caught my eye that I haven’t covered so far. A few of these might make it onto The Morning Paper in weeks to come, you never know!
We hear a lot from Google and Microsoft about their cloud platforms, but not quite so much from the other key industry players. So it’s great to see some papers from Alibaba and Tencent here. AliGraph covers Alibaba’s distributed graph engine supporting the development of new GNN applications. Their dataset has about 7B edges… Meanwhile, AnalyticDB Continue reading
On the heels of our recent update on image tag details, the Docker Hub team is excited to share the availability of personal access tokens (PATs) as an alternative way to authenticate into Docker Hub.
Already available as part of Docker Trusted Registry, personal access tokens can now be used as a substitute for your password in Docker Hub, especially for integrating your Hub account with other tools. You’ll be able to leverage these tokens for authenticating your Hub account from the Docker CLI – either from Docker Desktop or Docker Engine:
docker login --username <username>
When you’re prompted for a password, enter your token instead.
The advantage of using tokens is the ability to create and manage multiple tokens at once so you can generate different tokens for each integration – and revoke them independently at any time.
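For scripted use (a CI job, for example), the token can also be supplied on stdin instead of at the interactive prompt, using the CLI's `--password-stdin` flag. A minimal sketch, assuming your token is stored in an environment variable named DOCKER_HUB_TOKEN and that your Docker ID is "mydockeruser" (both placeholders):

```shell
# Log in to Docker Hub non-interactively by piping the access token
# to the CLI; the token stands in for your account password.
echo "$DOCKER_HUB_TOKEN" | docker login --username mydockeruser --password-stdin
```

Piping the token via stdin keeps it out of your shell history and process list, which is preferable to passing it as a command-line argument.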
Create and Manage Personal Access Tokens in Docker Hub
Personal access tokens are created and managed in your Account Settings.
From here, you can:
Create new access tokens
Modify existing tokens
Delete access tokens
Creating an access token in Docker Hub.
Note that the actual token is only shown once, at the time Continue reading
A recent Amazon outage resulted in a small number of customers losing production data stored in their accounts. This, of course, led to the typical anti-cloud comments that follow such events. The reality is that this data loss had nothing to do with the cloud and everything to do with those customers not understanding the storage they were using and not backing it up. Over Labor Day weekend there was a power outage in one of the availability zones in the AWS US-East-1 region. Backup generators came on, but quickly failed for unknown reasons. Customers’ Elastic Block Store (EBS) data is replicated among multiple servers, but the outage affected multiple servers. While the bulk of the data stored in EBS was fine or was easily recovered after the outage, 0.5 percent of the data could not be recovered. Customers among that 0.5 percent who did not have a backup of their EBS data actually lost data. To read this article in full, please click here
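The lesson is that EBS replication is not a backup; taking your own snapshots is. A minimal sketch using the AWS CLI, where the volume ID and description are placeholder values you would substitute with your own:

```shell
# Create a point-in-time snapshot of an EBS volume. Snapshots are stored
# in S3 (regionally replicated), so they survive the loss of a single
# availability zone, unlike the live volume itself.
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "Nightly backup of production data volume"
```

In practice you would schedule this (for example, via Amazon Data Lifecycle Manager or a cron job) rather than run it by hand.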
Once you have received all of the education you need to get a mental health research job, the next thing to do is look for the job you are best suited for. Depending on your education and your interests, mental health research jobs range from actual researcher to data analyst to facility manager to research moderator, who monitors the way the research is being conducted. Oftentimes, it takes a great deal of networking and effort on your part to land the mental health research job you desire. Here are a few ways to network to help you find your dream job in this very special field.
3 Ways to Get a Mental Health Research Job
Take an Internship
Sometimes you can work as an unpaid intern in a mental health research facility while still getting your formal education. The benefit of this is the experience you’ll receive, as well as possible class credit and professional references.
In other cases, once you have your degree and take all the necessary tests, you may be able to secure a paid internship or an entry-level job in mental health Continue reading
It has been two decades since Juniper Networks, then the big upstart rival to Cisco Systems and others as the dot-com boom was rising towards its crescendo several years hence, took FreeBSD Unix and turned it into a network operating system that spanned both routers and switches. …
Public clouds bring a lot of advantages to enterprises, such as more flexibility and scalability for many of their workloads, a way to avoid expensive capital costs by using someone else’s infrastructure and having someone else manage it all, and the ability to pay only for the resources they use. …
Your IPv4 addresses are a financial asset because the market for v4 address space is rising. The question is, for how long? Guest Lee Howard joins the IPv6 Buzz podcast crew to discuss the financial implications of selling IPv4 addresses. They also discuss the performance and operational benefits of moving to IPv6.