Dug up another shot from that ticket:

That.... That's a fucking mess.
This is one where you send in an intern or other worthless grunt on a Saturday to unfuck that mess and plug it in fresh the next day. We had some bad ones at the college I worked at, but none quite that bad. I hated that shit and it got dumped on me under the umbrella of being part of my refreshes. Saddest part of that mess is that it could have been worse. Most of the punch panel is not even in use.

Unfortunately that's not how things work for us. We have to support the customer's VOIP network no matter how bad the underlying network is. Our tech on site ended up recabling quite a bit of that rack for free, just in the course of replacing perfectly working hardware.
The way we actually wound up cleaning that disaster was P2V'ing all our shit to VMware on blades when those were the new hotness.
Can you ELI5 this entire statement to me?

P2V -> Physical to Virtual. VMware offers a tool to convert real hardware to VMware images.
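In slightly more detail: a P2V tool block-copies the physical machine's disk into a virtual disk format and fixes up drivers so the copy boots as a VM. Here's a rough sketch of just the copy-and-convert part done by hand with open-source tooling, not VMware's actual Converter (which automates all of this); the disk path and output names are made up for illustration:

    import subprocess

    # Hypothetical paths -- adjust for the machine being converted.
    SOURCE_DISK = "/dev/sda"               # physical disk to virtualize
    RAW_IMAGE = "/mnt/backup/disk.raw"     # intermediate block-level copy
    VMDK_IMAGE = "/mnt/backup/disk.vmdk"   # virtual disk VMware can attach

    # Step 1: block-level copy of the physical disk. In practice you'd boot
    # from live media first so the filesystem is quiescent during the copy.
    subprocess.run(
        ["dd", f"if={SOURCE_DISK}", f"of={RAW_IMAGE}", "bs=4M", "status=progress"],
        check=True,
    )

    # Step 2: convert the raw image into VMDK format.
    subprocess.run(
        ["qemu-img", "convert", "-f", "raw", "-O", "vmdk", RAW_IMAGE, VMDK_IMAGE],
        check=True,
    )

The real tools also swap in virtual storage/NIC drivers inside the guest; a bare disk copy like this can fail to boot (or bluescreen, on older Windows) without that step.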
Those chassis only cost like a Brinks truck full of money.

Depends which is more important: rack space (and required cooling, network infrastructure, etc.) and automated management (with a REST API), or buying the cheapest servers known to man and letting the interns handle the configuration, outages and hardware calls.
How do you guys handle the cooling? We did some testing with submersed servers/racks (loooool) and hot aisle/cold aisle.
For our HPC purposes those systems tend to not really work out well, though. The density with modern CPUs (which can generate up to 350 W of heat per socket) is too high, so you are going to have a challenge when you want to cool down multiple 42U racks stacked together (and they need to be close together because of InfiniBand cabling/latencies). Current racks generate around 40 to 60 kW in heat, and those are not the typical "idle most of the time and ramp up a bit for some job" systems, but are going full throttle 24/7.
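Back-of-the-envelope math on where those rack numbers come from, assuming a 42U rack packed with 1U dual-socket nodes (the node count and overhead factor here are illustrative, not from the post above):

    # Rough heat budget for one fully packed HPC rack.
    nodes_per_rack = 42      # 1U dual-socket nodes filling the whole rack
    sockets_per_node = 2
    watts_per_socket = 350   # high-end server CPU at sustained full load

    cpu_heat_kw = nodes_per_rack * sockets_per_node * watts_per_socket / 1000
    print(f"CPU heat alone: {cpu_heat_kw:.1f} kW")            # 29.4 kW

    # RAM, fans, VRM losses, NICs etc. add very roughly another 50%.
    total_heat_kw = cpu_heat_kw * 1.5
    print(f"With per-node overhead: {total_heat_kw:.1f} kW")  # 44.1 kW

That lands right in the quoted 40 to 60 kW band, and since HPC nodes run at full load around the clock, the cooling has to be sized for that worst case rather than an average.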
Depends on the server room and building. Hot aisle/cold aisle, KyotoCooling and Vertiv-Knürr water cooled racks/doors.
In several European countries they are using the waste heat from datacenters to heat buildings and such.
Not every building can easily be converted to KyotoCooling. The Knürr racks depend on a cold water supply - which is great if you have a lake nearby or are in a cold climate (and can use the heat for the building itself). BMW, for example, moved their datacenter to Iceland.
The French had the most fucked up idea of using HPC workloads to make a heater:
We actually do that with water pumping, but I think that's for the building, not the server racks...
I honestly envy them for that.

Grass greener on the other side and all that. Cutting costs is still a thing in big business, but in an ass-backwards, braindead kind of way. Example: maintenance costs come from a different account, so they buy support packages for 7+ year old hardware. The support contracts for a year were TWICE as expensive as buying new, modern hardware with 3 years of support included. There are workstations here equipped with a Quadro 6000. The newest NVIDIA drivers don't work with them anymore. Even the "legacy" 390 drivers don't work; the 340 branch is the "newest" one that works. Did you know you should replace RAID controller batteries every 3 to 4 years or they start to bulge and leak? We have systems where they were replaced twice.
Customer bought our software, which uses an SQL database. Software uses logins which are stored inside the database.
Customer forgot the login for his only admin account and locked himself out of the entire database now.
Resetting the password via some SQL commands is trivial to us, a two minute job. Setting up the remote session will take longer than the actual task itself. But as usual in this business, it's not how long the task takes, it's the knowledge of the task.
How much would you think is a fair price to charge here?
An hour of billable work in the middle of the day? Probably $300.
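For anyone wondering what such a two-minute reset looks like: something along these lines, sketched here against SQLite with a made-up users table and a deliberately simplistic hash (the real product's schema and hashing scheme would differ, and should be salted bcrypt/argon2 rather than bare SHA-256):

    import hashlib
    import sqlite3

    def sha256_hex(password: str) -> str:
        # Illustrative only; a real product should use salted bcrypt/argon2.
        return hashlib.sha256(password.encode()).hexdigest()

    # Stand-in for the customer's database: one users table, one locked-out admin.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT PRIMARY KEY, password_hash TEXT)")
    conn.execute("INSERT INTO users VALUES ('admin', ?)", (sha256_hex("forgotten"),))

    # The actual two-minute job: overwrite the stored hash with a known one.
    cur = conn.execute(
        "UPDATE users SET password_hash = ? WHERE username = ?",
        (sha256_hex("temporary-password-123"), "admin"),
    )
    assert cur.rowcount == 1, "no such account"
    conn.commit()

The UPDATE itself is the whole job; what the customer is paying for is knowing which table to touch and which hash format the product expects.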
Being on the exact other side of the equation, I'll tell you it doesn't make any more sense on our end either. We sell new support contracts on 7+ year old hardware all the time, even 25 year old hardware we still sell contracts on. Frequently we don't even charge very much for these old contracts, despite the parts and logistics and all sorts of other problems with supporting old hardware. We sell them on the hope that once we get the foot in the door, we can sell them on all new hardware/platforms/cloud/whatever to consolidate all that old hardware.