IT/Software career thread: Invert binary trees for dollars.

Neranja

<Bronze Donator>
3,069
5,010
The higher ups think this will be solved if we just 'train it on the codebase' but it isn't that simple
This is one of the fundamental problems of AI: You cannot train AI on what are essentially niche problems, as current AI technology is technically just averaging data and building probability vectors from it.

Niche problems are niche for a reason, and you can't pad the training data with Stack Overflow answers. Also, your codebase will never modify the underlying model.
 
  • 2Like
Reactions: 1 users

TJT

Mr. Poopybutthole
<Gold Donor>
46,442
126,848
As I said before: AI is fantastic when the problem you need to solve exists in a vacuum. The simpler the codebase and the fewer external dependencies, the better. Once you start adding the dozen external dependencies and loosely connected systems that define modern software development, AI starts sucking hard.

It needs to be given small-sized problems in general.
 
  • 2Like
Reactions: 1 users

TJT

Mr. Poopybutthole
<Gold Donor>
46,442
126,848
This year's is going to be way more complicated. Gaming it with code-generation bloat won't be what gets you to "top performance" this year, I am certain of that. The real marker is going to be how much of it makes it into feature branches that are merged into prod codebases. Which is also tracked. The engineering team already uses Cursor like 30% more than Copilot. I am happy it's my team building all of this shit, so I am certain to max it out though!

I will add that the logical evolution of this concept is to measure it against sprint-ticket time-to-resolution. This will be complete bullshit, but it is coming.
 

Noodleface

A Mod Real Quick
39,391
17,841
We started a scrum-at-scale initiative and I HATE it. Every chance I get, I tell management how much time it's wasting.

Last week I overheard someone complaining to someone else that I (by name) didn't have X story in the sprint, but this other guy was helping me and he did have Y story for that effort, and they were gonna talk to me about it. When they came by I told them I didn't care and to stop micromanaging me. I hate going against the current, but I don't need other engineers trying to gauge my work when they aren't even involved.
 
  • 1Like
Reactions: 1 user

Haus

I am Big Balls!
<Gold Donor>
18,496
76,436
This year's is going to be way more complicated. Gaming it with code-generation bloat won't be what gets you to "top performance" this year, I am certain of that. The real marker is going to be how much of it makes it into feature branches that are merged into prod codebases. Which is also tracked. The engineering team already uses Cursor like 30% more than Copilot. I am happy it's my team building all of this shit, so I am certain to max it out though!

So essentially, you're automating Elon's repetitive "Send me a list of 5 things you got done this week" emails... but with AI.
 

Haus

I am Big Balls!
<Gold Donor>
18,496
76,436
We started a scrum-at-scale initiative and I HATE it. Every chance I get, I tell management how much time it's wasting.

Last week I overheard someone complaining to someone else that I (by name) didn't have X story in the sprint, but this other guy was helping me and he did have Y story for that effort, and they were gonna talk to me about it. When they came by I told them I didn't care and to stop micromanaging me. I hate going against the current, but I don't need other engineers trying to gauge my work when they aren't even involved.
Make sure to start calling the people running the meetings "Scrumbags"
 
  • 3Worf
Reactions: 2 users

Sheriff Cad

scientia potentia est
<Nazi Janitors>
30,754
72,402
As I said before: AI is fantastic when the problem you need to solve exists in a vacuum. The simpler the codebase and the fewer external dependencies, the better. Once you start adding the dozen external dependencies and loosely connected systems that define modern software development, AI starts sucking hard.

It needs to be given small-sized problems in general.
I realize it's quite different, but this is how I use AI in my practice: I use it to search records for things. Grok is actually really good at that; I can upload a bunch of PDFs and say "give me the 3 lines around each instance of this word or any synonym, as well as the page and line number," and it contextualizes thousands of pages of documents quickly. It can summarize depos (I still edit the summary, but in general, depending on how you prompt it, it does a good job).

But I generally give it bite-size problems, answer this question, draft this summary, etc. It does really well at that. Sprawling problems, as you say, not so much. The answers tend to get more random the more sprawling it is.
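For what it's worth, that kind of request is basically a keyword-in-context search. Here's a rough, hand-rolled sketch of the same thing in Python (pypdf assumed; the search terms and filename are made up, and real scanned PDFs extract far messier text than this). The LLM version mostly saves you from writing the regex and handles synonyms for you:

```python
# Sketch of the "3 lines around each hit, plus page and line number" search,
# done by hand. Assumes pypdf; terms and filename are placeholders.
import re
from pypdf import PdfReader

TERMS = re.compile(r"\b(?:indemnify|indemnification|hold harmless)\b", re.IGNORECASE)
CONTEXT = 3  # lines of context on each side of a hit

def keyword_in_context(pdf_path: str):
    reader = PdfReader(pdf_path)
    for page_no, page in enumerate(reader.pages, start=1):
        lines = (page.extract_text() or "").splitlines()
        for line_no, line in enumerate(lines, start=1):
            if TERMS.search(line):
                lo = max(0, line_no - 1 - CONTEXT)
                hi = min(len(lines), line_no + CONTEXT)
                yield page_no, line_no, "\n".join(lines[lo:hi])

for page_no, line_no, snippet in keyword_in_context("deposition.pdf"):
    print(f"--- page {page_no}, line {line_no} ---\n{snippet}\n")
```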
 
  • 3Like
Reactions: 2 users

Kirun

Buzzfeed Editor
21,191
18,231
Yes, karma.

When other industries have/had been getting fucked over the past 10+ years, a large swathe of the "tech bros" on these forums were laughing their asses off at all the "LRN2CODE" memes circling the internet. It's hilariously ironic watching all the "AI" stuff unfold now that the shoe is on the other foot.

Insert Niemoller quote, I guess...
 

Neranja

<Bronze Donator>
3,069
5,010
When other industries have/had been getting fucked over the past 10+ years, a large swathe of the "tech bros" on these forums were laughing their asses off at all the "LRN2CODE" memes circling the internet. It's hilariously ironic watching all the "AI" stuff unfold now that the shoe is on the other foot.
I still remember this shit, but it wasn't the "tech bros" ... it was the journalists gloating over the fact that blue-collar people lost their jobs, and that they should just "learn to code". There's a famous Wired article from 2017 basically arguing that "coding" is the next blue-collar job, because it's a stable 40 h/week job. Which tells you everything about what those journalists actually know about any IT job.

The "tech bros" you are mentioning mostly can't code either, and are basically glorified salesmen. One of the foremost "tech bros" in our era is Sam Altman, who never successfully created a business: His claim to fame is a failed startup called Loopt, which was quickly bought out for $43 million by a bank ... and then quickly shuttered once they realized the tech is worthless and they were lied to about user numbers.

The ugly truth: Anyone that really, REALLY studied computer science actually realizes that "coding", as in writing down the actual program, is only a fraction of the job--and it is often the least interesting part--when you have an actual problem to solve under time and/or budget constraints. Doesn't mean it's not fun when you are tinkering with things in your free time.

There are rumors that Oracle had some highly paid database gurus who had their own assigned "interns" to write all the boilerplate code for them, so the guru only had to specify the architecture and then fill in the "interesting" parts. Which is exactly what AI is currently good for.

Oh, and in the end, when journalists were starting to get laid off, they suddenly claimed that "learn to code" was hate speech because it hurt their feelings.
 
  • 2Like
Reactions: 1 users

Noodleface

A Mod Real Quick
39,391
17,841
I realize it's quite different, but this is how I use AI in my practice: I use it to search records for things. Grok is actually really good at that; I can upload a bunch of PDFs and say "give me the 3 lines around each instance of this word or any synonym, as well as the page and line number," and it contextualizes thousands of pages of documents quickly. It can summarize depos (I still edit the summary, but in general, depending on how you prompt it, it does a good job).

But I generally give it bite-size problems, answer this question, draft this summary, etc. It does really well at that. Sprawling problems, as you say, not so much. The answers tend to get more random the more sprawling it is.
On the flip side, let me provide an anecdote on why I don't trust it to do the first part of your point. This is model-specific, I realize, as we can't use Grok.

I asked AI to search a spec for me on something very simple: can this code execute in this high-privilege mode (think root access for firmware)? It not only told me it couldn't, but it quoted the spec and provided me with which section of the spec it was in. Except when I went back... the section didn't exist, and worse, the quote didn't exist at all. In fact, Googling the quote showed zero results. It literally made up a bunch of shit because it wanted to give me a positive result rather than just give me the facts.

Even worse, when I gave this data back to the prompt, it agreed it had made it up and apologized.

Now I realize this is a very specific problem I found, but it changed my view on using AI to search documents. If I have to start with zero trust in what AI gives me and end up having to confirm everything anyway, is it really helping me?
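The only workflow I'm comfortable with now is treating every quote as unverified until a dumb literal check finds it in the source. Something like this sketch (pypdf assumed; the quote and filename below are just placeholders):

```python
# Dumb literal cross-check: does the quoted text actually appear in the spec?
# Assumes pypdf; the quote and filename are placeholders.
from pypdf import PdfReader

def normalize(s: str) -> str:
    # Collapse whitespace and lowercase so minor formatting differences don't matter.
    return " ".join(s.split()).lower()

def pages_containing(pdf_path: str, quote: str) -> list[int]:
    needle = normalize(quote)
    reader = PdfReader(pdf_path)
    return [
        page_no
        for page_no, page in enumerate(reader.pages, start=1)
        if needle in normalize(page.extract_text() or "")
    ]

hits = pages_containing("spec.pdf", "execution in this mode shall be restricted to")
print("Quote found on pages:", hits if hits else "nowhere (likely fabricated)")
```

If that comes back empty, the "citation" was invented.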

On the topic of how long we spend coding vs everything else we do, execs definitely do not understand. They also don't understand that the higher up you get in engineering levels, the less you end up writing. Hell, I'm lucky if I do a commit every couple of weeks. I think the execs are being sold snake oil.
 
  • 1Like
Reactions: 1 user

TJT

Mr. Poopybutthole
<Gold Donor>
46,442
126,848
Now I realize this is a very specific problem I found, but it changed my view on using AI to search documents. If I have to start with zero trust in what AI gives me and end up having to confirm everything anyway, is it really helping me?
Just FYI, Cursor indexes documents directly, so this doesn't happen.
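Not claiming this is literally how Cursor implements it, but the idea behind indexing is simple: the tool can only hand the model chunks that actually exist in your files, so anything quoted from a retrieved chunk traces back to real text. A toy sketch of index-then-retrieve (the file name and scoring are illustrative, not any product's actual pipeline):

```python
# Toy index-then-retrieve sketch. Chunks are taken verbatim from the file,
# so a retrieved "quote" is guaranteed to exist in the source.
def chunk(text: str, size: int = 400, overlap: int = 50) -> list[str]:
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text), 1), step)]

def retrieve(chunks: list[str], query: str, k: int = 3) -> list[str]:
    # Score chunks by word overlap with the query; real tools use embeddings.
    q = set(query.lower().split())
    return sorted(chunks, key=lambda c: len(q & set(c.lower().split())), reverse=True)[:k]

spec = open("spec.txt").read()  # placeholder: the spec exported as plain text
for hit in retrieve(chunk(spec), "can this code execute in high privilege mode"):
    print("---")
    print(hit)
```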

 

Sheriff Cad

scientia potentia est
<Nazi Janitors>
30,754
72,402
On the flip side, let me provide an anecdote on why I don't trust it to do the first part of your point. This is model-specific, I realize, as we can't use Grok.

I asked AI to search a spec for me on something very simple: can this code execute in this high-privilege mode (think root access for firmware)? It not only told me it couldn't, but it quoted the spec and provided me with which section of the spec it was in. Except when I went back... the section didn't exist, and worse, the quote didn't exist at all. In fact, Googling the quote showed zero results. It literally made up a bunch of shit because it wanted to give me a positive result rather than just give me the facts.

Even worse, when I gave this data back to the prompt, it agreed it had made it up and apologized.
Yeah, you always have to sanity-check what it tells you, but especially on big questions it can give you really good starting points. At least right now, the people just cut-and-pasting what it puts out into their documents/codebase make it really obvious what they're doing from the errors.

But if AI can give you a jump start on what would otherwise just be a slog of sitting and reviewing those docs manually, yeah, that's a plus. But that doesn't mean you're not checking what it says.
 

Aldarion

Egg Nazi
11,367
31,187
It literally made up a bunch of shit because it wanted to give me a positive result rather than just give me the facts.
This right here.

I don't mean just the hallucinations themselves. That's been done to death.

I mean this insane, pathetic desire to please. Every LLM I've worked with has the same bias.

Every idea I give it is a "great idea!" Every question I ask that begins "Can I..." is answered with "Absolutely! Here's how."

Reality is full of "no" and "it doesn't work like that" and "that's a terrible idea". AI doesn't do a good job of reflecting reality in this respect, in my experience. I think this will be at the root of some of the more hilarious AI problems as it gets integrated into more and more of everything.
 
  • 1Like
Reactions: 1 user

Khane

Got something right about marriage
21,518
15,419
Yes, karma.

When other industries have/had been getting fucked over the past 10+ years, a large swathe of the "tech bros" on these forums were laughing their asses off at all the "LRN2CODE" memes circling the internet. It's hilariously ironic watching all the "AI" stuff unfold now that the shoe is on the other foot.

Insert Niemoller quote, I guess...

If you say so there bud.
 

Kirun

Buzzfeed Editor
21,191
18,231
The ugly truth: Anyone that really, REALLY studied computer science actually realizes that "coding", as in writing down the actual program, is only a fraction of the job--and it is often the least interesting part--when you have an actual problem to solve under time and/or budget constraints. Doesn't mean it's not fun when you are tinkering with things in your free time.
"Coding is only a tiny, unimportant part of REAL computer science."

Interesting! Because for years, we were told that coding was the job. That was the bootcamp promise. The justification for telling displaced workers to just retrain in six months and stop complaining. Now, conveniently, coding is just grunt work, while the true value lies in "architecture," "vision," and "thinking really hard". Skills that, coincidentally, can't be easily tested or automated yet.

But this latest moral panic over "coding", AI, and "outsourcing" isn't about coding being misunderstood. It's about the mask slipping. Tech spent years insisting its skills were both easy enough for anyone to pick up and so complex they deserved elite pay and deference. AI is exposing that contradiction. And now the same people who once told others to adapt are scrambling to explain why their jobs are special, nuanced, and irreplaceable.

Funny how "magical tech science" only becomes sacred the moment it's no longer exclusive.
 

TJT

Mr. Poopybutthole
<Gold Donor>
46,442
126,848
Yes, karma.

When other industries have/had been getting fucked over the past 10+ years, a large swathe of the "tech bros" on these forums were laughing their asses off at all the "LRN2CODE" memes circling the internet. It's hilariously ironic watching all the "AI" stuff unfold now that the shoe is on the other foot.

Insert Niemoller quote, I guess...

This isn't really accurate. The LRN2CODE stuff was legitimately funny, but not because it was haranguing blue-collar workers. It was all being done by journalists, as stated. It was funny to me because it universally seemed to be targeted at the coding bootcamp crowd in execution.

Whether or not you realize it, the bootcamp people who came into the industry by and large came in to do frontend engineering. Which is the gayest and most replaceable form of coding. I don't even think anyone on the forum is a professional frontend engineer. Not one. These jobs were always bottom-rung, and frontend is by far the most susceptible to full-on AI automation workflows.

Such as your UX designer having an AI workflow of designing the UI then having the agent actually code all of it. This is an active use case right now. Cutting out the frontend web developer.
 
  • 1Like
Reactions: 1 user

TJT

Mr. Poopybutthole
<Gold Donor>
46,442
126,848
"Coding is only a tiny, unimportant part of REAL computer science."

Interesting! Because for years, we were told that coding was the job. That was the bootcamp promise. The justification for telling displaced workers to just retrain in six months and stop complaining. Now, conveniently, coding is just grunt work, while the true value lies in "architecture," "vision," and "thinking really hard". Skills that, coincidentally, can't be easily tested or automated yet.

But this latest moral panic over "coding", AI, and "outsourcing" isn't about coding being misunderstood. It's about the mask slipping. Tech spent years insisting its skills were both easy enough for anyone to pick up and so complex they deserved elite pay and deference. AI is exposing that contradiction. And now the same people who once told others to adapt are scrambling to explain why their jobs are special, nuanced, and irreplaceable.

Funny how "magical tech science" only becomes sacred the moment it's no longer exclusive.
IDK why you're so full of hate bro.

The more senior you are, the less coding you do. That's what happens. At the junior level you are coding all the time, on a narrower set of problems than exists in your organization. You gradually move into having to deal with architecture decisions, which have more impact than coding directly.

Outsourcing and AI replacing the entry level does what to your pipeline of developing graybeards who are highly knowledgeable on said topics?