I managed to get an SSH client working using an SSH pubkey protected by a TPM.
This is not needed, since TPM operations only need the well-known SRK PIN, not the owner PIN, to do useful stuff. I only document it here in case you want to do it. Microsoft recommends against it.
Set OSManagedAuthLevel to 4
Change HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\TPM\OSManagedAuthLevel from 2 to 4.
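For example, from an elevated command prompt the same change can be made with reg.exe (my shorthand, not from the original post):
reg add HKLM\SOFTWARE\Policies\Microsoft\TPM /v OSManagedAuthLevel /t REG_DWORD /d 4 /f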
Reboot.
Clear TPM
Run tpm.msc and choose “Clear TPM”. The machine will reboot and ask you to press F12 or something for physical proof of presence to clear it.
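On Windows 8 and later there is also a PowerShell cmdlet that attempts the same reset (my addition, not mentioned in the original post; depending on the platform you may still be prompted for physical presence at the next boot):
Clear-Tpm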
Set owner password from within tpm.msc
Create key
tpmvscmgr.exe create /name "myhostnamehere VSC" /pin prompt /adminkey random /generate
PIN must be at least 8 characters.
Create CSR
Create a new text file req.inf:
[NewRequest]
Subject = "CN=myhostnamehere"
Keylength = 2048
Exportable = FALSE
UserProtected = TRUE
MachineKeySet = FALSE
ProviderName = "Microsoft Base Smart Card Crypto Provider"
ProviderType = 1
RequestType = PKCS10
KeyUsage = 0x80
certreq -new -f req.inf myhostname.csr
If you get any errors, just reboot and try again with the command that failed.
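As an optional sanity check (my addition, not in the original post), certutil can list the smart cards visible to Windows, which should now include the virtual smart card:
certutil -scinfo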
Get the CSR signed by any Continue reading
Stumbled upon this via HighScalability:
Every time I feel like I'm "out of touch" with the hip new thing, I take a weekend to look into it. I tend to discover that the core principles are the same [...]; or you can tell they didn't learn from the previous solution and this new one misses the mark, but it'll be three years before anyone notices (because those with experience probably aren't touching it yet, and those without experience will discover the shortcomings in time.)
Yep, that explains the whole centralized control plane ruckus ;) Read also a similar musing by Ethan Banks.
Its revenues have to keep doubling, though, to make the numbers work.
A Spanning Tree Protocol (STP)-free network inside the data centre is a main focus for network vendors, and many technologies have been introduced in the recent past to resolve STP issues in the data centre and ensure optimal link utilization. The advent of switching modules inside blade enclosures, coupled with the requirement for optimal link utilization starting right from the blade server, has made today's data centre network more complex.
In this blog, we will discuss how the traditional model of network switch placement (End of Row) can be combined with blade chassis, and the different options available for end-to-end connectivity and high availability.
Network switches are placed at the End of Row, and in order to remove STP, Multi-Chassis Link Aggregation (MC-LAG) is deployed. Please see one of my earlier blogs for an understanding of MC-LAG.
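As a rough illustration only (the original post does not name a platform; Cisco NX-OS vPC is just one MC-LAG implementation, and the interface names and the 192.0.2.2 peer address are example values), the End of Row switch pair might be configured roughly like this, with a port-channel facing the server or blade switch so that both uplinks can stay active:
feature vpc
feature lacp
vpc domain 10
  peer-keepalive destination 192.0.2.2
! port-channel 1 is the vPC peer link between the two End of Row switches
interface port-channel 1
  switchport mode trunk
  vpc peer-link
! port-channel 20 faces the server NICs / blade switch; use the same vpc number on both peers
interface port-channel 20
  switchport mode trunk
  vpc 20
interface Ethernet1/10
  switchport mode trunk
  channel-group 20 mode active
With the server-facing port-channel spanning both switches, the server's active/active NIC team sees a single LACP partner and no links need to be blocked by STP.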
Option 1: Rack-mounted servers as compute machines; the servers have multiple NICs installed (in a pass-through module), and the virtual machines hosted inside the servers require active/active NIC teaming.
Option 2: The blade chassis has multiple blade servers, and each blade server has more than one NIC (connected to the blade chassis switches through internal fabric links). Virtual machines hosted inside the blade servers require active/active NIC teaming.
Option 3: Blade Chassis Continue reading
Today's show looks at OpenStack networking in a high-stakes production environment: Paddy Power Betfair, a publicly-traded betting and gaming exchange. The post Show 310: High-Stakes OpenStack Networking appeared first on Packet Pushers.
The post Worth Reading: Building the LinkedIn Knowledge Graph appeared first on 'net work.