IT/Software career thread: Invert binary trees for dollars.

Mist

Eeyore Enthusiast
<Gold Donor>
30,396
22,176
That.... That's a fucking mess.
Dug up another shot from that ticket:
1598375279554.png
 

Phazael

Confirmed Beta Shitlord, Fat Bastard
<Aristocrat╭ರ_•́>
14,107
30,198
This is one where you send in an intern or other worthless grunt on a Saturday to unfuck that mess and plug it in fresh the next day. We had some bad ones at the college I worked at, but none quite that bad. I hated that shit, and it got dumped on me under the umbrella of being part of my refreshes. Saddest part of that mess is that it could have been worse. Most of the punch panel is not even in use.
 

alavaz

Trakanon Raider
2,001
713
I've worked places with worse. There were TVs on the walls of the data center showing Nagios, and every time I had to patch something in, it required spreading the spaghetti apart so I could find a free port. When I'd do that, several nodes would always go red up on the TVs. A lot of the cables had the little clips busted off the ends too, so they'd come all the way out and I'd have to just set them precariously back in, though if it was actually something I cared about I'd crimp on a new end.

The way we actually wound up cleaning that disaster was P2Ving all our shit to VMware on blades when those were the new hotness. This was in like '05, and I remember all the old dudes being pissed because VMware was just a toy for testing, not for production lol.

We're actually redoing our datacenter networking at my current job. Taking out all the patch panels for "leaf and spine" switches, which I guess is the new name for top-of-rack / end-of-row.
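For anyone who hasn't run into the term: in a leaf-and-spine fabric every leaf (top-of-rack) switch uplinks to every spine, so the link count and oversubscription fall straight out of a couple of numbers. Quick sketch with made-up port counts, just to illustrate the idea:

# Leaf-and-spine sizing sketch. All numbers below are illustrative assumptions.
LEAVES = 8                  # top-of-rack switches
SPINES = 4
SERVER_PORTS_PER_LEAF = 48  # 25Gb downlinks to servers
DOWNLINK_GBPS = 25
UPLINK_GBPS = 100           # per leaf-to-spine link

fabric_links = LEAVES * SPINES                        # every leaf connects to every spine
downlink_bw = SERVER_PORTS_PER_LEAF * DOWNLINK_GBPS   # per leaf
uplink_bw = SPINES * UPLINK_GBPS                      # per leaf
print(f"{fabric_links} leaf-spine links, {downlink_bw / uplink_bw:.0f}:1 oversubscription per leaf")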
 
  • 1Like
Reactions: 1 user

Mist

Eeyore Enthusiast
<Gold Donor>
30,396
22,176
This is one where you send in an intern or other worthless grunt on a Saturday to unfuck that mess and plug it in fresh the next day. We had some bad ones at the college I worked at, but none quite that bad. I hated that shit, and it got dumped on me under the umbrella of being part of my refreshes. Saddest part of that mess is that it could have been worse. Most of the punch panel is not even in use.
Unfortunately that's not how things work for us. We have to support the customer's VoIP network no matter how bad the underlying network is. Our tech on site ended up recabling quite a bit of that rack for free, just in the course of replacing perfectly working hardware.

We end up giving away so much free work, both on-site and remote, and our CEO wonders why we don't have enough money to staff field and NOC resources, even though he's the one who constantly pushed the "just give customers whatever they want" mentality for years. Our margins used to be so high that we could afford to give away tons of free work, but that's just not how things are anymore.
 

Neranja

<Bronze Donator>
2,605
4,143
Can you ELI5 this entire statement to me?
P2V -> Physical to Virtual. VMware offers a tool to convert real hardware into VMware images.
Blades: Were once the hotness for saving rack space and getting centralized management for server systems. Many manufacturers built their own ecosystem. They evolved into software-defined infrastructure with newer platforms like HPE Synergy.

blades.jpg
 

alavaz

Trakanon Raider
2,001
713
^ that.

To add a little more detail: you would usually trunk and port-channel all of the networking, so a blade chassis would drastically reduce the amount of physical cabling (4x to 8x 10Gb connections per chassis was pretty standard at the time), and that was how we were able to get rid of the spaghetti from like 600 physical servers.
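Rough back-of-the-envelope on that consolidation, with assumed numbers (16 blades per chassis, 3 copper runs per standalone box, and the 8 uplinks per chassis mentioned above):

# Cable-count sketch: 600 standalone servers vs. the same servers in blade chassis.
# All inputs are illustrative assumptions, not figures from this thread.
SERVERS = 600
CABLES_PER_STANDALONE = 3    # e.g. 2x data NICs + 1x out-of-band management
BLADES_PER_CHASSIS = 16      # typical for that era of chassis
UPLINKS_PER_CHASSIS = 8      # trunked/port-channeled 10Gb uplinks

standalone_cables = SERVERS * CABLES_PER_STANDALONE
chassis_count = -(-SERVERS // BLADES_PER_CHASSIS)    # ceiling division
blade_cables = chassis_count * UPLINKS_PER_CHASSIS

print(f"standalone: {standalone_cables} cables")                                                    # 1800
print(f"blades: {chassis_count} chassis x {UPLINKS_PER_CHASSIS} uplinks = {blade_cables} cables")   # 304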
 

Neranja

<Bronze Donator>
2,605
4,143
Those chassis only cost like a Brinks truck full of money.
Depends on which is more important: rack space (and the required cooling, network infrastructure, etc.) and automated management (with a REST API), or buying the cheapest servers known to man and letting the interns handle the configuration, outages and hardware calls.

For our HPC purposes those systems tend not to work out well, though. The density with modern CPUs (which can generate up to 350 W of heat per socket) is too high, so you are going to have a challenge cooling multiple 42U racks stacked together (and those racks need to be close together because of InfiniBand cabling/latencies). Current racks generate around 40 to 60 kW in heat, and those are not the typical "idle most of the time and ramp up a bit for some job" systems; they are going full throttle 24/7.
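To show where numbers like that come from, a quick estimate with assumed figures (2 sockets at 350 W each plus roughly 150 W of memory/NICs/fans per node, 4-node 2U chassis, rack not even completely full):

# Rough rack heat estimate for a dense HPC rack. All inputs are assumptions.
SOCKETS_PER_NODE = 2
WATTS_PER_SOCKET = 350      # modern HPC CPUs, per the figure above
OTHER_WATTS_PER_NODE = 150  # memory, NICs, fans, drives (assumed)
NODES_PER_RACK = 64         # e.g. 16x 2U four-node chassis, rest of the 42U for switches

node_watts = SOCKETS_PER_NODE * WATTS_PER_SOCKET + OTHER_WATTS_PER_NODE  # 850 W
rack_kw = NODES_PER_RACK * node_watts / 1000
print(f"~{rack_kw:.0f} kW per rack")  # ~54 kW, right in that 40-60 kW band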
 

Ao-

¯\_(ツ)_/¯
<WoW Guild Officer>
7,879
507
Depends on which is more important: rack space (and the required cooling, network infrastructure, etc.) and automated management (with a REST API), or buying the cheapest servers known to man and letting the interns handle the configuration, outages and hardware calls.

For our HPC purposes those systems tend not to work out well, though. The density with modern CPUs (which can generate up to 350 W of heat per socket) is too high, so you are going to have a challenge cooling multiple 42U racks stacked together (and those racks need to be close together because of InfiniBand cabling/latencies). Current racks generate around 40 to 60 kW in heat, and those are not the typical "idle most of the time and ramp up a bit for some job" systems; they are going full throttle 24/7.
How do you guys handle the cooling? We did some testing with submersed servers/racks (loooool) and hot aisle/cold aisle.
 

Neranja

<Bronze Donator>
2,605
4,143
How do you guys handle the cooling? We did some testing with submersed servers/racks (loooool) and hot aisle/cold aisle.
Depends on the server room and building. Hot aisle/cold aisle, KyotoCooling, and Vertiv-Knürr water-cooled racks/doors.

Not every building can easily be converted to KyotoCooling. The Knürr racks depend on a cold water supply, which is great if you have a lake nearby or are in a cold climate (and can use the heat for the building itself). BMW, for example, moved their data center to Iceland.
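For a sense of what "depends on a cold water supply" means in practice, the heat balance is just Q = m_dot * c_p * delta_T. Sketch with assumed numbers (a 50 kW rack and a 10 K rise across the rack's heat exchanger; actual Knürr specs will differ):

# How much chilled water does it take to carry away ~50 kW of rack heat?
# Q = m_dot * c_p * dT  =>  m_dot = Q / (c_p * dT). Inputs are assumptions.
Q_WATTS = 50_000   # heat load of one dense rack
CP_WATER = 4186    # J/(kg*K), specific heat of water
DELTA_T_K = 10     # temperature rise across the rack heat exchanger (assumed)

m_dot = Q_WATTS / (CP_WATER * DELTA_T_K)  # kg/s
liters_per_minute = m_dot * 60            # 1 kg of water is roughly 1 liter
print(f"{m_dot:.2f} kg/s, roughly {liters_per_minute:.0f} l/min of chilled water per rack")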

The French had the most fucked-up idea, using HPC workloads to make a heater:
 

Mist

Eeyore Enthusiast
<Gold Donor>
30,396
22,176
Depends on the server room and building. Hot aisle/cold aisle, KyotoCooling, and Vertiv-Knürr water-cooled racks/doors.

Not every building can easily be converted to KyotoCooling. The Knürr racks depend on a cold water supply, which is great if you have a lake nearby or are in a cold climate (and can use the heat for the building itself). BMW, for example, moved their data center to Iceland.

The French had the most fucked-up idea, using HPC workloads to make a heater:
In several European countries they are using the waste heat from datacenters to heat buildings and such.
 

Ao-

¯\_(ツ)_/¯
<WoW Guild Officer>
7,879
507
Depends on the server room and building. Hot aisle/cold aisle, KyotoCooling, and Vertiv-Knürr water-cooled racks/doors.

Not every building can easily be converted to KyotoCooling. The Knürr racks depend on a cold water supply, which is great if you have a lake nearby or are in a cold climate (and can use the heat for the building itself). BMW, for example, moved their data center to Iceland.

The French had the most fucked-up idea, using HPC workloads to make a heater:
We actually do that with water pumping, but I think that's for the building, not the server racks...
 
  • 1Solidarity
Reactions: 1 user

Phazael

Confirmed Beta Shitlord, Fat Bastard
<Aristocrat╭ರ_•́>
14,107
30,198
On the topic of price, even the home stuff is coming down in cost, provided you are fine without cutting-edge shit. I don't get to play with the big boys, but I have set up a few small-business SANs as one-offs, and the costs are quite a bit lower than you might think, often with built-in management tools and integration with the major switch manufacturers out there. But I think most smaller outfits are better off with a top-end NAS rather than a low-end SAN, bang-for-buck wise.

But in general, from my grunt perspective, if you can afford to pay big up front for a powerful SAN you should, because you will get bit in the ass by the labor of maintaining and adapting it to changing needs if you try to go cheap up front. But not all businesses can afford to do that, so for every Neranja and Mist out there who gets to play with primordial storage pools that eclipse the value of Hollywood mansions, there are a hundred derps like me stringing old shit together and praying it does not die before our contract is up and we are far enough away to avoid getting caught in the meltdown caused by cutting corners and MacGyvering shit together. I honestly envy them for that.
 

Neranja

<Bronze Donator>
2,605
4,143
I honestly envy them for that.
Grass greener on the other side and all that. Cutting costs is still a thing in big business, but in an ass-backwards, braindead kind of way. Example: Maintenance costs come from a different account, so they buy support packages for 7+ year old hardware. The support contracts for a year were TWICE as expensive as buying new, modern hardware with 3 year support included. There are workstations here equipped with a Quadro 6000. The newest Nvidia drivers don't work with them anymore. Even the "legacy" 390 drivers don't work; the 340 series is the "newest" one that does. Did you know you should replace RAID controller batteries every 3 to 4 years or they start to bulge and leak? We have systems where they were replaced twice.

On the bright side, we actually get to dump our old HP G7 hardware because it isn't RHEL 7 certified and RHEL 6 is going out of support. Luckily, those old controllers aren't supported in RHEL 7 anymore.

The problem with corporate culture is all the processes, different departments and compliance always hanging over your head. Want to buy a new filer? Yeah, every department wants to have a stake in it, and then the negotiations start: "Why do you want to buy a NetApp? An Isilon would be cheaper. Have you seen the new 3PAR NAS offering from HPE? What do you mean you need dual-protocol CIFS and NFS? On the same data? Why do you need this 'cluster storage' and what is this 'Lustre'? Why are you buying Spectrum Scale, I thought we were using GPFS?"

On the bright side, we also get to have some fresh hardware that is supposedly maturing at the customer, like the Dell PowerEdge C6525. Recently replaced lots of NICs because of a fuckup. Do you know what a pain in the ass it is to change MAC addresses in the database? There's a process for that, but the one doing the grunt work is a poor dude sitting in India who gives zero fucks if you have LACP trunks. Then you open a ticket, and they tell you "if you want to change MAC addresses ..."

Funniest thing lately was the iDRAC update from Dell that installs the update via a CentOS image ... which got installed on the system disk, wiping the OS.
 

Quineloe

Ahn'Qiraj Raider
6,978
4,463
Customer bought our software, which uses an SQL database. Software uses logins which are stored inside the database.

Customer forgot the login for his only admin account and locked himself out of the entire database now.

Resetting the password via some SQL commands is trivial for us, a two-minute job. Setting up the remote session will take longer than the actual task itself. But as usual in this business, it's not how long the task takes, it's the knowledge of the task.
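For anyone curious what that two-minute job looks like: something along these lines, sketched against a made-up schema and hash scheme (the real product's table layout and hashing are obviously different):

# Hypothetical reset of an application admin password stored in the product's own
# SQL user table. Table, columns and hashing are invented for illustration only.
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE app_users (username TEXT PRIMARY KEY, password_hash TEXT)")
conn.execute("INSERT INTO app_users VALUES (?, ?)",
             ("admin", hashlib.sha256(b"long-forgotten-password").hexdigest()))

# The actual fix: overwrite the stored hash with one for a known temporary password.
new_hash = hashlib.sha256(b"TempPassw0rd!").hexdigest()
conn.execute("UPDATE app_users SET password_hash = ? WHERE username = ?", (new_hash, "admin"))
conn.commit()

print(conn.execute("SELECT username, password_hash FROM app_users").fetchone())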

How much would you think is a fair price to charge here?
 

TJT

Mr. Poopybutthole
<Gold Donor>
40,932
102,735
Customer bought our software, which uses an SQL database. Software uses logins which are stored inside the database.

Customer forgot the login for his only admin account and locked himself out of the entire database now.

Resetting the password via some SQL commands is trivial for us, a two-minute job. Setting up the remote session will take longer than the actual task itself. But as usual in this business, it's not how long the task takes, it's the knowledge of the task.

How much would you think is a fair price to charge here?

Are you a SaaS product? If so, this would be included in the support they're paying for. If not, $500 eh?
 

Mist

Eeyore Enthusiast
<Gold Donor>
30,396
22,176
Customer bought our software, which uses an SQL database. Software uses logins which are stored inside the database.

Customer forgot the login for his only admin account and locked himself out of the entire database now.

Resetting the password via some SQL commands is trivial for us, a two-minute job. Setting up the remote session will take longer than the actual task itself. But as usual in this business, it's not how long the task takes, it's the knowledge of the task.

How much would you think is a fair price to charge here?
An hour of billable work in the middle of the day? Probably $300.
 

Mist

Eeyore Enthusiast
<Gold Donor>
30,396
22,176
Grass greener on the other side and all that. Cutting costs is still a thing in big business, but in an ass-backwards, braindead kind of way. Example: Maintenance costs come from a different account, so they buy support packages for 7+ year old hardware. The support contracts for a year were TWICE as expensive as buying new, modern hardware with 3 year support included.
Being on the exact other side of the equation, I'll tell you it doesn't make any more sense on our end either. We sell new support contracts on 7+ year old hardware all the time; even 25 year old hardware we still sell contracts on. Frequently we don't even charge very much for these old contracts, despite the parts and logistics and all sorts of other problems that come with supporting old hardware. We sell them in the hope that once we get a foot in the door, we can sell them on all new hardware/platforms/cloud/whatever to consolidate all that old hardware.

It rarely works out, though. Once the customer realizes that we can adequately support the old hardware with refurbished parts that we have pre-staged in crash kits all over the country, why would they bother to invest in new hardware when they've found some sucker who will support all their old shit?

The main thing driving customers to finally ditch their old shit is security compliance: those 10+ year old systems can't be patched up to anything remotely compliant, so they end up having to dump them when they get audited internally or by a customer.
 
  • 1Solidarity
Reactions: 1 user