IT/Software career thread: Invert binary trees for dollars.

ShakyJake

<Donor>
8,302
20,773
That's literally the problem that agile is trying to fix. It sounds like you're just doing waterfall with post-its (which is absurdly common imo).
Yeah, it feels that way. But we had a company come in and train us on SAFe and have been observing us for the past couple PIs. They haven't told us we're doing it wrong.
 

Control

Silver Baronet of the Realm
4,676
13,027
Yeah, it feels that way. But we had a company come in and train us on SAFe and have been observing us for the past couple PIs. They haven't told us we're doing it wrong.
Well, there's a lot of subjectivity in how you carry out a high level process. I'm sure there are a hundred ways to do things "correctly" while still making it a hellish process.
Maybe you're doing that flavor of "agile" correctly and it's just a shitty experience. There are plenty of ways to boost productivity (over some time span) that employees don't like.
Also possible that your trainers are shitty and don't actually know their source material (and quite likely that, even if they're solid, they've never used their process in actual practice themselves. Otherwise, why are they agile trainers instead of developers?).
Also possible that the process was "adjusted" by management.

If you think your input could have any impact on the process (or if you just want to be right for spite's sake), go grab and read all of the original source material and maybe some other agile flavors too. At least then you can be confident in who's fucking things up.

Although, I'll still stand behind the idea that if you're extensively planning every detail of your project and have hard deadlines imposed from the start, then you're just doing waterfall with extra steps (and labels that make management feel like they're being useful). I mean, it's literally saying, "we need exactly this feature list, implemented exactly this way, and completed by exactly this date... BUT we're going to use AGILE! <and then everyone clapped>"
 

TJT

Mr. Poopybutthole
<Gold Donor>
45,322
120,338
If you inject tons of shit during a sprint you are breaking agile. It happens all the time but it's the truth.
 
  • 1Like
Reactions: 1 user

Neranja

<Bronze Donator>
2,884
4,634
Also possible that the process was "adjusted" by management.
In most cases this is the reason. Middle management is deeply uncomfortable with letting the "herd of cats run around the ball of yarn" and develop things in a truly agile way, because in the end they have to justify the very reason they're there. You know: managing the project. And setting deadlines. I'm pretty sure the consultants who were hired had specific goals and limits set by management.

The other reason is that the stakeholder liaison (or sometimes the "product owner") is in way over his head and was only chosen because he didn't have anything more important to do. How hard can it be, bossing people around?
That is also not a fun experience: now you have built something, and weeks down the line people pop up requesting changes, sometimes architectural and scope-changing ones, and more often than not they even vent their frustration that they weren't asked about their important features. But you can fix it quickly, because you are "agile", right?

And this is why "agile" is not a silver bullet. On a fundamental level, it relies on people communicating and trusting each other. Corporate culture (especially in IT) is overrun by people who can't communicate properly (autism spectrum), or, on the other end, people who talk too much and only talk. What Germans call "Dampfplauderer", even though the Austrian original had a different meaning. Agile and corporate culture are basically like holy water and vampires.
 
  • 1Like
  • 1Solidarity
Reactions: 1 users

Phazael

Confirmed Beta Shitlord, Fat Bastard
<Gold Donor>
15,495
34,419
Agile (and similar workflows) can be very effective when used how it is designed to be used, namely development environments where you cannot predict scheduling and need a lot of smaller groups to work independently and quickly. It is absolutely the death of middle management, so they do all they can to fuck with it. In my practical experience, the way it goes in the real world is like this:

Select someone with no real skills aside from maybe PowerPoint to fill all of the PO and Scrum Master roles. This will invariably wind up being a mix of useless middle managers and younger women who are either bullies or flirts. These people will have excessive meetings and obsess over story point values and quickly turn everything into Waterfall with extra steps, because it's in their interests to do this or it's all they know. Goal posts will shift, and story points will largely become an arbitrary metric that people use to make it look like they are doing more work than they are (classic Poo move, but Karens do this shit a lot). The 80/20 shit starts happening almost instantly, and you end up with the ass draggers doing one or two things where the points are massively inflated, while the decent workers end up with a pile of small story point items that actually amount to much more work. A lot of shit will not be properly tracked, and the bad actors on each team (again, Poos and Karens mostly) will cherry pick easy items and not lift a finger on anything else.

Eventually the middle managers will want to look busy and start micromanaging the fuck out of everyone while claiming as much credit as they can. Then the final fuck over will come when they start constantly re-arranging teams to obfuscate useless fucks who are politically popular in your department and scope creep goes largely unchecked. The whole thing will crash and burn, then they will restart the whole thing claiming to want to "get it right this time" while putting the same people who fucked it all up in charge again. Repeat process again until the incompetent leaders get Peter Principled out and maybe someone who actually understands Agile gets put somewhere they can make an impact. If you happen to be in one of the teams or groups that is successful, every other group will do all they can to poach your talent and hitch their wagon to leech off your efforts.
 
  • 3Like
Reactions: 2 users

Haus

I am Big Balls!
<Gold Donor>
16,935
68,834
How I feel relaying information to our R&D team for requested new features right now...
 
  • 1Truth!
  • 1Worf
Reactions: 1 users

Deathwing

<Bronze Donator>
17,429
8,456
Until AI can figure out why a java based XML parser from 20 years ago is throwing null pointer exceptions, but only when the XML goes over 2GB, I feel safe.







I can't figure it out either.
 
  • 3Worf
Reactions: 2 users

stupidmonkey

Not Smrt
<Gold Donor>
2,404
6,491
Until AI can figure out why a java based XML parser from 20 years ago is throwing null pointer exceptions, but only when the XML goes over 2GB, I feel safe.







I can't figure it out either.
A 32-bit int limit of some sort: trying to access an index that doesn't exist. Sometimes the counters wrap around, go negative, and boom.
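For illustration only (not the actual parser code), the wraparound looks like this:

```java
// Illustration only: a 32-bit counter wrapping past Integer.MAX_VALUE
// and turning into a negative value, as can happen when an old parser
// is fed more than 2 GB of input.
public class OverflowDemo {
    public static void main(String[] args) {
        int bytesRead = Integer.MAX_VALUE; // 2,147,483,647 = the 2 GB boundary
        bytesRead += 1;                    // wraps to -2,147,483,648
        System.out.println(bytesRead);     // prints -2147483648

        // The fix: do the arithmetic in long so it can't wrap there.
        long safe = (long) Integer.MAX_VALUE + 1;
        System.out.println(safe);          // prints 2147483648
    }
}
```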
 
  • 1Trump
  • 1WTF
Reactions: 1 users

Deathwing

<Bronze Donator>
17,429
8,456
A 32-bit int limit of some sort: trying to access an index that doesn't exist. Sometimes the counters wrap around, go negative, and boom.
Yeah, I've considered that and am saving it for last, because fixing it requires upgrading not just Xerces but a whole lot of dependencies, including the OS.

Would like to prove it too, but the inner exception isn't being printed and my Java is quite rusty.
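One way to surface it, sketched without assuming anything about the surrounding code: walk the `getCause()` chain manually and print every layer, since some old wrappers only print the outermost message.

```java
// Sketch: flatten a Throwable's cause chain into one line so the inner
// exception is visible even when the outer layer swallows its stack trace.
public class CauseChain {
    static String describe(Throwable t) {
        StringBuilder sb = new StringBuilder();
        // getCause() returns null at the end of the chain, so this terminates.
        for (Throwable c = t; c != null; c = c.getCause()) {
            if (sb.length() > 0) sb.append(" <- ");
            sb.append(c.getClass().getSimpleName())
              .append(": ").append(c.getMessage());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Hypothetical example of the suspected failure shape.
        Throwable inner = new ArrayIndexOutOfBoundsException("-2147483648");
        Throwable outer = new RuntimeException("parse failed", inner);
        System.out.println(describe(outer));
        // prints: RuntimeException: parse failed <- ArrayIndexOutOfBoundsException: -2147483648
    }
}
```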
 

Khane

Got something right about marriage
21,186
15,067
I've had issues like that in the past with C# parsers, and it was almost always a byte array overflow (related to Int32, as mentioned above).
 

ShakyJake

<Donor>
8,302
20,773
Until AI can figure out why a java based XML parser from 20 years ago is throwing null pointer exceptions, but only when the XML goes over 2GB, I feel safe.







I can't figure it out either.

ChatGPT 5 says,
Here are the most common technical culprits when an old Java XML parser crashes specifically once the input grows beyond ~2 GB.


Likely root causes


  1. 32-bit int overflow on sizes/offsets.
    Counters for “bytes read,” “character count,” “file position,” or “buffer length” stored in int will wrap past Integer.MAX_VALUE (2,147,483,647), producing negative values and cascading logic errors that end in NPEs.
  2. Casting long → int.
    Using (int) file.length() or (int) inputStream.skip(n) on >2 GB inputs truncates, often leading to undersized buffers or bogus indexes and later null dereferences.
  3. Attempt to allocate giant single arrays/strings.
    Code that tries to read the whole document into one byte[]/char[] or build a single StringBuilder can hit:
    • OutOfMemoryError (sometimes swallowed and mis-translated), or
    • internal states left uninitialized after a failed allocation, leading to a later NPE.
  4. Memory-mapped I/O limits.
    FileChannel.map/MappedByteBuffer sizes are int-bounded in many legacy implementations; mapping >2 GB sections, or using position(int) on a logical long offset, causes state corruption.
  5. DOM building on huge docs.
    Tree builders (DOM) scale superlinearly in memory. Low-memory fallbacks or pruning paths might set nodes/tables to null when limits are exceeded, then dereference them.
  6. SAX handler bugs triggered by record counts.
    Old code often uses int element/attribute counters; when they overflow, downstream tables (e.g., intern pools, name caches) lookups fail (get() returns null) and are dereferenced.
  7. UTF encoding size mismatch.
    Byte counts used as character counts (e.g., treating UTF-8 bytes as chars) work until large inputs magnify the error. Oversized/undersized char[] can cause partial initialization and NPEs in follow-up stages.
  8. Entity expansion tables keyed by int positions.
    Systems that store entity start/end offsets as int can misresolve references once offsets overflow, yielding missing nodes and null symbol entries.
  9. InputStream.available() misuse.
    Using available() to pre-size buffers or to decide EOF is unreliable; on very large files this yields wrong allocations, then null state paths.
  10. mark/reset with small read-ahead limits.
    A tiny mark limit compared to file size causes the underlying stream to drop buffers; parsers that assume reset() succeeded may proceed with null buffers.
  11. Old Xerces/JAXP bugs (era-specific).
    Early 2000s builds had fixes related to very large documents, entity handling, and symbol tables that manifest at extreme sizes.
  12. Custom filters/wrappers leaking null.
    If the “parser” sits behind a homegrown Reader/InputStream wrapper, overflow or failed realloc can set internal buffers to null.
  13. Logging/instrumentation side effects.
    Some code only logs every N bytes; when N overflows or modulo math breaks, the “log sink” object may be null at first use in large runs.
  14. RandomAccessFile / channel position drift.
    Libraries that mix RandomAccessFile (long offsets) with arrays (int indexes) sometimes reconcile positions incorrectly after 2 GB, leaving current token/buffer objects null.
  15. GC/heap pressure masking earlier failure.
    Out-of-memory near the 2 GB mark can be caught/ignored in a lower layer; subsequent code assumes an object exists and dereferences null.

What to check (fast triage)


  • Stack trace frames just before the NPE. Look for get(), peek(), or table lookups returning null—that’s often where an overflow corrupted a key/index.
  • Search the code for casts (int) on lengths/positions, array allocations with computed sizes, or arithmetic on counts without long.
  • Verify input handling path. Is it trying to read the entire file into memory? DOM vs SAX/StAX?
  • Inspect character decoding. Any place that converts bytes→chars using byte counts to size char buffers.
  • Look for available(), mark/reset, MappedByteBuffer, or StringBuilder.ensureCapacity(...) on huge inputs.
  • Run boundary tests: 1.9 GB, 2.05 GB, 2.2 GB. If failure starts just past 2,147,483,647 bytes, it’s almost certainly int overflow.

Mitigations / fixes


  • Use streaming parsing. Prefer SAX/StAX (Woodstox, modern Xerces) and process incrementally; avoid full-document DOM for multi-GB XML.
  • Promote counters to long. File positions, byte/char counts, and sizes should be long. Avoid casting to int. Guard any array sizing with range checks.
  • Chunked buffering. Read/process in fixed chunks (e.g., 64–256 KB), avoid single monolithic buffers.
  • Upgrade the parser/JVM. A modern parser with 64-bit safe code paths reduces risk; run on a 64-bit JVM with sufficient heap if you must build large in-memory structures.
  • Disable extras for triage. Turn off validation, DTD loading, and entity expansion to reduce memory and code paths while you isolate the overflow.
  • Split the XML at safe boundaries (if the format allows) and process sequentially, emitting intermediate results.
  • Add assertions and guards. Before allocations: check size > Integer.MAX_VALUE and fail fast with a clear message instead of proceeding to a NPE.
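A minimal sketch combining the first two mitigations (hypothetical code, using the standard StAX API): stream the document event by event instead of building a tree, and keep every count in long.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;

public class StreamingCount {
    // Count elements via pull parsing: memory stays flat regardless of
    // document size, and the long counter cannot wrap at 2 GB.
    static long countElements(InputStream in) throws XMLStreamException {
        XMLInputFactory f = XMLInputFactory.newInstance();
        f.setProperty(XMLInputFactory.SUPPORT_DTD, false); // disable extras for triage
        XMLStreamReader r = f.createXMLStreamReader(in);
        long count = 0; // long, not int: safe past 2,147,483,647
        while (r.hasNext()) {
            if (r.next() == XMLStreamConstants.START_ELEMENT) {
                count++;
            }
        }
        r.close();
        return count;
    }

    public static void main(String[] args) throws Exception {
        byte[] xml = "<root><a/><b><c/></b></root>".getBytes("UTF-8");
        System.out.println(countElements(new ByteArrayInputStream(xml))); // prints 4
    }
}
```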

Minimal code smells to hunt for


  • byte[] buf = new byte[(int) file.length()];
  • int total = 0; while ((n = in.read(b)) > 0) total += n; // total should be long
  • char[] c = new char[bytesRead]; reader.read(c, 0, bytesRead); // bytes vs chars mismatch
  • MappedByteBuffer mb = channel.map(READ_ONLY, 0, (int) file.length());
  • String s = new String(bigByteArray, "UTF-8"); followed by slices using int offsets computed from byte positions.
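For contrast, safe counterparts to the first two smells (a sketch, not from any particular parser):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class SafeIo {
    // Instead of new byte[(int) file.length()]: read in bounded chunks
    // and keep the running total in long.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[64 * 1024]; // fixed 64 KB chunk
        long total = 0;                   // long, so no wrap at 2 GB
        int n;
        while ((n = in.read(buf)) > 0) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    // Guard any array size computed from a long position: fail fast with
    // a clear message instead of silently truncating with a cast.
    static int checkedSize(long size) {
        if (size < 0 || size > Integer.MAX_VALUE) {
            throw new IllegalArgumentException("won't fit in one array: " + size);
        }
        return (int) size;
    }

    public static void main(String[] args) throws IOException {
        long n = copy(new ByteArrayInputStream(new byte[200_000]),
                      new ByteArrayOutputStream());
        System.out.println(n); // prints 200000
    }
}
```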

If you can share the stack trace or identify the specific parser (e.g., Xerces version/build), I can pinpoint the exact failure pattern and propose a targeted patch/workaround.
 
  • 2Worf
  • 1Mother of God
Reactions: 2 users

Deathwing

<Bronze Donator>
17,429
8,456
ChatGPT 5 says,
I'd call this glorified googling. Which, to be fair, fits the vague "figure out" from my not-so-serious original post. And is a useful skill, googling is what I do for a lot of my sleuthing.

IDK, maybe I'm being naïve, but I expected more, given how disruptive AI is being to our industry. Google Gemini thinks it's a thread safety issue and didn't say anything about a 32-bit int. Neither offered patch files for an open source XML parser that should be in their training data.

To drive home the point: if my idiot subordinate, who can't be trusted with even the simplest of script work, were to handle this problem instead of me, he would input "xerces NullPointerException". The AI would have to tell him:
  • How to find the Xerces version in the .class file.
  • Concise and easy to reproduce test cases for a multitude of Xerces versions.
  • .patch files and how to apply them, again for a multitude of Xerces versions.
  • A version of Xerces that is known to fix various NullPointerException errors. I would not expect it to hold his hand on implementing the upgrade because it would need information specific to our environment.
  • Mitigations if changing Xerces is not viable. Which, to be fair, it did offer some. But they were incredibly generic and would be useless in my subordinate's hands.
Yes, I'm being obtuse with the prompt, purposefully so. All of us in this thread know that prompt is missing useful context. The people that the industry seeks to replace us with do not.
 

ShakyJake

<Donor>
8,302
20,773
I'd call this glorified googling. Which, to be fair, fits the vague "figure out" from my not-so-serious original post. And is a useful skill, googling is what I do for a lot of my sleuthing.

IDK, maybe I'm being naïve, but I expected more, given how disruptive AI is being to our industry. Google Gemini thinks it's a thread safety issue and didn't say anything about a 32-bit int. Neither offered patch files for an open source XML parser that should be in their training data.

To drive home the point: if my idiot subordinate, who can't be trusted with even the simplest of script work, were to handle this problem instead of me, he would input "xerces NullPointerException". The AI would have to tell him:
  • How to find the Xerces version in the .class file.
  • Concise and easy to reproduce test cases for a multitude of Xerces versions.
  • .patch files and how to apply them, again for a multitude of Xerces versions.
  • A version of Xerces that is known to fix various NullPointerException errors. I would not expect it to hold his hand on implementing the upgrade because it would need information specific to our environment.
  • Mitigations if changing Xerces is not viable. Which, to be fair, it did offer some. But they were incredibly generic and would be useless in my subordinate's hands.
Yes, I'm being obtuse with the prompt, purposefully so. All of us in this thread know that prompt is missing useful context. The people that the industry seeks to replace us with do not.
The point, I think, is that it gives you possible clues or avenues to pursue that you (not you you) didn't think of. I will occasionally query ChatGPT with problems that I can't quite figure out to get some ideas. I'm never looking for an exact, specific solution to mindlessly plug into the code. I apply my own knowledge, experience, and reasoning skills.
 

Deathwing

<Bronze Donator>
17,429
8,456
The point, I think, is that it gives you possible clues or avenues to pursue that you (not you you) didn't think of. I will occasionally query ChatGPT with problems that I can't quite figure out to get some ideas. I'm never looking for an exact, specific solution to mindlessly plug into the code. I apply my own knowledge, experience, and reasoning skills.
I agree completely. I hope that once the market matures and true costs are revealed that this sort of prompting remains cheap.

My apologies, I'm now realizing my previous response was overboard. I misinterpreted your post as "look, AI can do it" instead of an attempt to be helpful.
 
Last edited:

Khane

Got something right about marriage
21,186
15,067
99.9% of what people use AI for is glorified googling..... how is that a surprise?
 
  • 1Like
Reactions: 1 user

Haus

I am Big Balls!
<Gold Donor>
16,935
68,834
I'd call this glorified googling. Which, to be fair, fits the vague "figure out" from my not-so-serious original post. And is a useful skill, googling is what I do for a lot of my sleuthing.

IDK, maybe I'm being naïve, but I expected more, given how disruptive AI is being to our industry. Google Gemini thinks it's a thread safety issue and didn't say anything about a 32-bit int. Neither offered patch files for an open source XML parser that should be in their training data.

To drive home the point: if my idiot subordinate, who can't be trusted with even the simplest of script work, were to handle this problem instead of me, he would input "xerces NullPointerException". The AI would have to tell him:
  • How to find the Xerces version in the .class file.
  • Concise and easy to reproduce test cases for a multitude of Xerces versions.
  • .patch files and how to apply them, again for a multitude of Xerces versions.
  • A version of Xerces that is known to fix various NullPointerException errors. I would not expect it to hold his hand on implementing the upgrade because it would need information specific to our environment.
  • Mitigations if changing Xerces is not viable. Which, to be fair, it did offer some. But they were incredibly generic and would be useless in my subordinate's hands.
Yes, I'm being obtuse with the prompt, purposefully so. All of us in this thread know that prompt is missing useful context. The people that the industry seeks to replace us with do not.
The vast majority of what people currently use "AI" for is really overglorified English-language search.

The real AI usage doesn't tend to be things that the everyday person would find in the least bit "fancy" or "cool". Vibe coding with AI is still a category 5 shitshow in most environments.

The real power of AI will come when the now-emerging agentic stuff reaches the point where you can have an AI monitor a situation and, when it sees something it has to respond to, reach out into the tooling to respond, figuring out interfaces and/or APIs as it needs along the way. Which, believe me, is coming.
 

Deathwing

<Bronze Donator>
17,429
8,456
The vast majority of what people currently use "AI" for is really overglorified English-language search.

The real AI usage doesn't tend to be things that the everyday person would find in the least bit "fancy" or "cool". Vibe coding with AI is still a category 5 shitshow in most environments.

The real power of AI will come when the now-emerging agentic stuff reaches the point where you can have an AI monitor a situation and, when it sees something it has to respond to, reach out into the tooling to respond, figuring out interfaces and/or APIs as it needs along the way. Which, believe me, is coming.
How is something like this tokenized?
99.9% of what people use AI for is glorified googling..... how is that a surprise?
It wasn't a surprise. I don't see how I conveyed that, but it wasn't my intent. Perhaps I communicated poorly that glorified googling does not deliver the labor savings some companies are hoping for.
 

Haus

I am Big Balls!
<Gold Donor>
16,935
68,834
How is something like this tokenized?
You don't sell token-based AI usage. You bundle it into an "artificial assistant" subscription service, make it flashy enough, optimize what actually has to be handled by the model to minimize cost, and crank up the subscription fee.