Chat GPT AI

TJT

Mr. Poopybutthole
<Gold Donor>
44,526
117,135
I have no problem with AI by itself. You are far too dense to comprehend this for some reason.

The entire problem will be when you have teams of people "vibe coding" everything into existence without understanding a single underlying principle of what they are doing. They will have no way to iterate or build on it beyond what the AI spits out, because they have no comprehension of it. It's the blind leading the blind.

What this means is that they can only do what the AI can do, and nothing else.

For me personally? AI has reduced the time it takes to complete tasks by at least a third at this point. Maybe more.
 
  • 2Like
Reactions: 1 users

Control

Bronze Baronet of the Realm
4,038
10,876
Technological progress should stop at exactly the point where the people now becoming their parents/grandparents deem it "enough!!!".
Metal fingers typed this post!
 

Captain Suave

Caesar si viveret, ad remum dareris.
5,787
9,906
I have no problem with AI by itself. You are far too dense to comprehend this for some reason.

The entire problem will be when you have teams of people "vibe coding" everything into existence without understanding a single underlying principle of what they are doing. They will have no way to iterate or build on it beyond what the AI spits out, because they have no comprehension of it. It's the blind leading the blind.

What this means is that they can only do what the AI can do, and nothing else.

For me personally? AI has reduced the time it takes to complete tasks by at least a third at this point. Maybe more.

Honestly this is just gatekeeping. AI is letting people do things to low standards, yes. This lets us use code to solve problems where high standards are not worth the money. If high standards are worth the money, then someone will pay for them. If previously high-quality work is being replaced with low-quality work, we'll either find out why that was a bad idea and pay more in the future, or, and I think more likely, realize that the skilled coders are better used for solving more important problems.

Most of this sounds like 15th-century monastic scribes complaining that the printing press destroys the artistry of calligraphy.
 

Control

Bronze Baronet of the Realm
4,038
10,876
Honestly this is just gatekeeping. AI is letting people do things to low standards, yes. This lets us use code to solve problems where high standards are not worth the money. If high standards are worth the money, then someone will pay for them. If previously high-quality work is being replaced with low-quality work, we'll either find out why that was a bad idea and pay more in the future, or, and I think more likely, realize that the skilled coders are better used for solving more important problems.

Most of this sounds like 15th-century monastic scribes complaining that the printing press destroys the artistry of calligraphy.
It's been mentioned before of course, but if there's no market for low-skill coders, where do you think you'll get high-skill coders? It's like the innovator's dilemma but with AI, only instead of incrementally ceding market segments to upstarts, you're just eliminating them entirely. And tbh, that might be fine enough as long as the AI learns to continually make itself better. That should sound pretty scary to all non-metal-finger-havers though.
 

Captain Suave

Caesar si viveret, ad remum dareris.
5,787
9,906
It's been mentioned before of course, but if there's no market for low-skill coders, where do you think you'll get high-skill coders?

I fully expect AI to take a big chunk out of people who rode the "l34rn to cOd3" wave. They're the service-economy version of assembly line workers from the '60s, caught out by the progression of technology. (But I'd question whether the guys writing phone apps and already-crappy corporate middleware were really going to become high-skill coders.)

And tbh, that might be fine enough as long as the AI learns to continually make itself better.

"High-skill coding" is going to gravitate towards AI/tool development that further lowers skill requirements, with the work done by the bleeding edge of researchers/developers continuing to gain higher and higher leverage. Humans hand-writing their own software is going to be increasingly niche and not going to be a safe, available white collar career path. As someone said a while back, this will be another abstraction layer - and functionally this could be fine so long as someone, somewhere structured it in such a way that everything still works at the level of bare metal.

That should sound pretty scary to all non-metal-finger-havers though.

It's going to be a societal change on the level of the Internet at least, whenever AI capabilities hit critical mass. It's happening whether we like it or not, though.
 
Last edited:

Deathwing

<Bronze Donator>
17,296
8,279
and functionally this could be fine so long as someone, somewhere structured it in such a way that everything still works at the level of bare metal.
This seems like a giant caveat. For solved problems and technologies, AI might eventually prove trustworthy and secure against poisoning. How do you deal with AI being iffy at the bleeding edge?
 

Captain Suave

Caesar si viveret, ad remum dareris.
5,787
9,906
How do you deal with AI being iffy at the bleeding edge?

I assume you're talking about the short/medium term, where we have these GPT-style tools that aren't truly intelligent and make shit up. The obvious answer is: don't use AI for important new problems unless you're capable of appropriately checking and fixing the output. If we turn out to be incapable of training and financially motivating a cohort of experts who can do that, well, that's a statement that we collectively don't care enough.

This kind of own-goal degenerate outcome might be possible, like how we all complain about the loss of domestic production jobs while still refusing to pay more than bottom-dollar Walmart/Amazon prices for anything. But I think, given the way that our economy concentrates wealth in tech, there are going to be massive incentives for that not to happen. We will probably see the slow death of the junior engineer > dev > senior engineer career progression in favor of some kind of model where a high-level architect works to integrate AI/vibe-coded submissions according to predetermined requirements. Probably also a continued movement to outsourced SaaS rather than in-house development. One guy solves one problem and sells the solution to everyone.

Eventually AI will legitimately just be more effective than we are and having humans involved at that point will only make things slower and worse. I have zero idea whether that's in 10 years or 100 years.
 

TJT

Mr. Poopybutthole
<Gold Donor>
44,526
117,135
Where you will need skilled developers is the same place you currently need them: people who understand systems of systems and the many layers of integration required to make them work. AI cannot fill that gap. It will always be weak at truly understanding why your 20 years of business rules exist in 50 places across 5 unique systems, used inconsistently, because people are people.

AI can absolutely crank out a phone app that has a UI, a simple backend DB, and a few straightforward features. But that is a college coursework project. Building out the scaling backend for thousands of users and managing all of that is something it will always struggle with, if it ever can do it at all.

some kind of model where a high-level architect manages a large team of vibe coders
You have the same chicken-or-egg problem. How does this guy become a high-level architect? That's a role that usually requires a deep understanding of every layer of the system, not just a cursory one.
 
  • 1Like
Reactions: 1 user

Captain Suave

Caesar si viveret, ad remum dareris.
5,787
9,906
You have the same chicken-or-egg problem. How does this guy become a high-level architect? That's a role that usually requires a deep understanding of every layer of the system, not just a cursory one.

They are either the one who built it or they learned from someone who did, just like now. Institutional knowledge is absolutely a thing, and it will be lost if it's not curated. I guess I'm just saying "Don't do that," and am expressing indifference to the fate of companies that don't invest in the future and then have bad outcomes. That's a failure mode as old as people. Inappropriate reliance on AI in pursuit of lower costs is the newest of many ways to fuck up, and a mistake that successful future endeavors will need to avoid. We're in a hype-driven adoption phase right now, and if that turns out to be too much we'll see a pullback in the medium term as we realize that things stop working right. I don't think that AI enabling the existence of more mediocre coders is inherently fatal.

I think we'd all be hugely better off in the long term if we optimized our economy for the next decade rather than the next quarter.
 

Deathwing

<Bronze Donator>
17,296
8,279
I'll admit, I'm perhaps negatively biased on this topic, as I've yet to come up with a significant task where I think current-generation AI could be even marginally useful. A lot of my programming tasks involve bug fixing or building out features in a relatively large and mostly proprietary repository.

When TJT or Captain Suave's wife comes back with stories of how they have used AI, I have trouble relating that to what I do daily or what our developers say they do at standup. It might help if you could train AI on a select input, but the cost seems prohibitive and I haven't heard of any real progress on that front.
 
  • 1Like
Reactions: 1 user

Captain Suave

Caesar si viveret, ad remum dareris.
5,787
9,906
A lot of my programming tasks involve bug fixing or building out features in a relatively large and mostly proprietary repository.

Secondhand account obviously, but this is exactly what my wife's team is using Cursor for (for varying definitions of "large"). They have an enterprise setup where the agent is trained specifically on their code base. They can tell it "I am seeing X behavior at point Y where I think I should see Z" and it will go out, identify likely code culprits, suggest solutions, and even run tests on the solution. And apparently it works.
 
  • 1Like
Reactions: 1 user

Haus

I am Big Balls!
<Gold Donor>
15,698
64,040
I had a discussion around cybersecurity tools yesterday with someone. They asked the inevitable question of "Where's the AI in this?"

SMH

I told them that AI is great at things where you need probabilistic answers: things where you don't need or want the exact same output every time, because there's processing inherent to AI models such that they rarely give the exact same thing twice, since they're attempting to emulate, to a degree, how humans think and solve problems. This is great when asking it to do things like write an email for you or describe some options for solving a problem.
Machine learning and analytics are far more deterministic. With these types of tools, if you add A and B you'll ALWAYS get C. And they let you do that at serious scale. So you'll get your answers, and they will always be the same given the same inputs and parameters.
If you had to choose between "feels human, but varies" and "will get the same result every time"... which would you prefer your cybersecurity tools do for you?
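To make the contrast concrete, here's a toy sketch (made-up Python, not from any real product, all names and thresholds invented):

Code:
import random

def ml_style_detector(failed_logins, geo_distance_km):
    # Deterministic: the same A and B always produce the same C.
    return failed_logins > 5 and geo_distance_km > 1000

def llm_style_detector(failed_logins, geo_distance_km, temperature=0.8):
    # Probabilistic stand-in: sampling means the verdict can vary run to run.
    score = 0.4 * (failed_logins > 5) + 0.4 * (geo_distance_km > 1000)
    return random.random() < score + temperature * random.uniform(-0.2, 0.2)

print(ml_style_detector(9, 2500))   # True, every single time
print(llm_style_detector(9, 2500))  # usually True, but not guaranteed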

Now take SOAR technologies (Security Orchestration, Automation, & Response), which let you guide actions across all your tools. Add those to ML and you have hyper-scalable automation of security responses. Add those to an AI and you MIGHT get great security results; you might also get "bots gone wild" and Skynet.

Which way, western man?
 
  • 1Like
Reactions: 1 user

Deathwing

<Bronze Donator>
17,296
8,279
Secondhand account obviously, but this is exactly what my wife's team is using Cursor for (for varying definitions of "large"). They have an enterprise setup where the agent is trained specifically on their code base. They can tell it "I am seeing X behavior at point Y where I think I should see Z" and it will go out, identify likely code culprits, suggest solutions, and even run tests on the solution. And apparently it works.
What was the cost for training? Our repo is nearing 10M LoC if you include third-party stuff.
 

TJT

Mr. Poopybutthole
<Gold Donor>
44,526
117,135
You don't pay for training costs in this way. Essentially you have an AI agent. You can name it whatever you want. It will use a general-purpose model like Claude to parse things. You "train" this AI by providing it constraints.

Example Constraints:
  • Use only X repo for reference code.
  • When providing solutions only follow official company best practices [FILE] and official documentation for [LANGUAGE]
  • Only use packages we use in the referenced repo. DO NOT USE any others.
  • All code generated needs to be written with memory management as a priority.
  • All code generated must always follow the stylization present in the master branch of the repo.
  • Use the Y MCP to reach into company data and find examples to help with solutions.

I am not sure how Cursor charges you exactly, but it is by use. It's not by training or anything like that.

Deathwing with the above constraints in place, suppose you have to develop a feature that adds a UI element displaying a counter and an input element. With these constraints, 2 hours of coding becomes:

"Create a feature in this file that does this and that. Encapsulate variables into the class and don't use global for anything. Add in your other criteria."
 
Last edited:

TJT

Mr. Poopybutthole
<Gold Donor>
44,526
117,135
Secondhand account obviously, but this is exactly what my wife's team is using Cursor for (for varying definitions of "large"). They have an enterprise setup where the agent is trained specifically on their code base. They can tell it "I am seeing X behavior at point Y where I think I should see Z" and it will go out, identify likely code culprits, suggest solutions, and even run tests on the solution. And apparently it works.
This is how we use an MCP, for this exact use case.

It works and saves lots of time. But you do have to know what you are asking about and tell it where to look.
 

Deathwing

<Bronze Donator>
17,296
8,279
You don't pay for training costs in this way. Essentially you have an AI agent. You can name it whatever you want. It will use a general-purpose model like Claude to parse things. You "train" this AI by providing it constraints.

Example Constraints:
  • Use only X repo for reference code.
  • When providing solutions only follow official company best practices [FILE] and official documentation for [LANGUAGE]
  • Only use packages we use in the referenced repo. DO NOT USE any others.
  • All code generated needs to be written with memory management as a priority.
  • All code generated must always follow the stylization present in the master branch of the repo.
  • Use the Y MCP to reach into company data and find examples to help with solutions.

I am not sure how Cursor charges you exactly, but it is by use. It's not by training or anything like that.

Deathwing with the above constraints in place, suppose you have to develop a feature that adds a UI element displaying a counter and an input element. With these constraints, 2 hours of coding becomes:

"Create a feature in this file that does this and that. Encapsulate variables into the class and don't use global for anything. Add in your other criteria."
Interesting, I thought it would require some training on your repo to have any chance at success.

Let's take it a step further. A developer requests that certain artifacts be saved such that they survive the lifespan of the pipeline container. The developer doesn't care how; he just wants access to said artifacts. The container's Dockerfile is completely custom, maybe 500 lines long, and broken up into "fragments" because Docker's implementation of FROM kind of sucks for the usage we wanted. Fragments are self-contained and can be requested by name and cat'd together with a custom script.
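Concretely, the fragment mechanism is roughly this shape (file names invented here; our actual script does more):

Code:
from pathlib import Path

FRAGMENT_DIR = Path("docker/fragments")  # e.g. base.frag, build-tools.frag, artifacts.frag

def assemble_dockerfile(names, out="Dockerfile.generated"):
    # Each fragment is self-contained; request them by name and cat them together.
    parts = []
    for name in names:
        frag = FRAGMENT_DIR / f"{name}.frag"
        parts.append(f"# --- fragment: {name} ---\n" + frag.read_text())
    Path(out).write_text("\n".join(parts))

# A pipeline might request:
# assemble_dockerfile(["base", "build-tools", "artifact-export"])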

An AI agent could handle implementing that feature without training on your repo?

I will also say that reading your requirements, I came up with at least 3 full-time positions, maybe a whole department, that would be required to properly maintain this. We're a pretty small company; we don't even have a company stylization for the main repo. It's often not worth the time to enforce usage, and it can rub people the wrong way. I can usually tell who wrote code based on the style.
 

TJT

Mr. Poopybutthole
<Gold Donor>
44,526
117,135
Interesting, I thought it would require some training on your repo to have any chance at success.
I think training is the wrong word to use here, because training a data model has a specific data-science meaning, and you can't reuse training material without warping the end result.

Maybe that's just me; I just say "providing constraints." The Dockerfiles you mention are just files. As long as the agent can get to them, in the repo or in some other location, it can use them as reference points. You constrain the agent to just your repo plus those 15 Dockerfiles outside it, and you additionally provide it a list of rules like I did above. Then explain something like: "Use the FROM documentation. We have a custom solution in THIS file that has THIS issue. When writing code keep this at the forefront, as this consistency must be maintained. Now write code that will do 1, 2, and 3."

The first pass won't get you all the way there in most cases, but for any developer it will spit out 2/3 of what you want, and you can refine it from there, with or without more prompts. If you have something in mind elsewhere in the project, Cursor supports you doing stuff like:

"Within the file I have open do this. In the other file I have added as context use that as a baseline to write this feature as they are kind of similar."

Because you already have rules it must follow, it already knows to refer only to code you already have, to generate new code only from official documentation for X packages, and so on. This makes your prompt far more robust under the hood, so you don't have to type as much.
 

TJT

Mr. Poopybutthole
<Gold Donor>
44,526
117,135
No it doesn't. I mentioned it in the IT thread, but these AI products are SaaS products.

Everyone is getting sweetheart deals right now. Across the org, devs are running up $1k/month in Cursor costs at the moment. The cost per prompt varies with complexity. They won't tell you exactly how they meter it, as that is their internal pricing model (secret sauce).

In the next 5 years, when these products mature, they will juice the prices following the SaaS B2B business model.
 
  • 1Truth!
Reactions: 1 user