• 0 Posts
  • 6 Comments
Joined 4 months ago
Cake day: January 18th, 2025


  • Hah, alright. I tried to bring this back to a productive conversation, but we don’t share the same fundamentals on this topic, nor, apparently, an understanding of grammatical conventions or of how to productively address miscommunications. For example, one of my first responses started by clarifying that “it’s not that AI will successfully replace programmers.”

    I understand that the internet is so full of extreme, polarizing takes that it’s hard to discuss nuance on here.

    I’m not trying to give you homework for this conversation - we can absolutely wrap this up.

    I just highly recommend that you look into the technological issues of AI training on AI output. If you do discover that I’m wrong, I’m absolutely not asking you to return and educate me.

    But believe it or not, I would be extremely excited to learn I’m wrong, as overcoming that obstacle would be huge for the development of this technology.


  • I mean, I’ll be honest: beyond the allegations of Dunning-Kruger, I think this is actually just a grammatical mix-up. I didn’t mean to write it the way you read it, and that may be my fault.

    “If” AI replaces human programmers wholesale, new human code will stop being created

    It starts with “if”. As in, it’s not a prediction of the future; it’s a response to the hypothetical future of AI being advocated for by techbros and corporations.

    And “wholesale” doesn’t mean universally; it just means a lot.

    And “new human code will stop being created” is true - I wasn’t saying all human code will stop being created. But AI replacing humans will stop those humans from creating code. Many human projects will end, be reduced in scope, or never start, as AI is forced into projects it isn’t yet ready for.

    “New human code will stop being created” is a true - if ambiguous - statement. I do apologize for the ambiguity.

    But given that AI does not perform well when retrained on AI output - and I’m sorry, but I’d be happy to hear from anyone who can tell me that’s not a given - the ouroboros eats its own tail in more ways than one.

    Less human code means less data for AI training. More AI creating code with less human input therefore leads to fewer developments and advancements in programming in general.


    It’s such a widespread and significant issue that it’s not really appropriate to make broad claims about the future of AI when you don’t understand one of the key limitations of current implementations of AI. This is the type of information that is critical for decision-making about the use of this technology.

    To put it in your terms, it’s an extremely social-media mindset to speak up on a wildly important topic that will impact everyone’s lives without learning about the core concepts of that topic.

    I’m not making some infinite doomsday slippery-slope scenario. It’s just a pattern: more corporations will try to replace more programmers with more half-assed implementations of AI, and the quality of all programming will suffer as a result. It’s not “all or nothing”; it’s just a piece of a greater whole.

    Your point about the home base and superiority complex is exactly the type of issue I’m talking about - most corporations won’t implement AI well.

    They will come up with ideas that sound great but can’t be accomplished with the current generation of tools, and a whole lot of people will lose their jobs for no good reason, and a whole lot of people will stop seeking those jobs for very good reason, and the vicious cycle will turn and turn.


  • It’s not that AI is going to successfully replace programmers - it’s that large corporations want it to replace as much labor as possible, and they will use it to replace programmers, because programmers will be the only people who can explain to them exactly why it won’t work. Everyone else will keep saying “go”, and, well, the nature of short-sighted profit-seeking means they will go.

    Outsourcing has often been a bad idea, and its implementations deeply flawed - that didn’t stop anyone from outsourcing.

    You should definitely look into the input/output thing. It’s absolutely real and, as far as I’m aware, applies to the operation of all generative AI and LLMs. The fact that you’re not familiar with it completely erodes your credibility on this topic.

    Machine learning algorithms break when you turn them into an ouroboros and feed them their own outputs. Something about the statistically non-deterministic calculations and the relatively insignificant artifacts they generate propagates and amplifies with each pass through the algorithm until the output is incomprehensible.
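
    To make that concrete, here’s a minimal toy sketch in Python - emphatically not an LLM, just a Gaussian model refit to its own samples each generation - illustrating the kind of degradation I mean. Everything here (the distribution, sample size, generation count) is an arbitrary assumption for demonstration purposes:

    ```python
    # Toy illustration of recursive training on model output ("model
    # collapse"). The "model" is just a Gaussian fit (mean + std), so
    # the degradation is easy to observe directly.
    import numpy as np

    rng = np.random.default_rng(0)

    # Generation 0: "human" data drawn from the true distribution.
    data = rng.normal(loc=0.0, scale=1.0, size=100)

    for generation in range(20):
        # "Train": estimate the distribution from the current data.
        mu, sigma = data.mean(), data.std()
        print(f"gen {generation:2d}: mu={mu:+.3f}, sigma={sigma:.3f}")
        # "Generate": the next training set comes entirely from the
        # model's own output. Finite-sample error compounds each pass,
        # so sigma tends to shrink and mu drifts away from the truth.
        data = rng.normal(loc=mu, scale=sigma, size=100)
    ```

    Run it with a few different seeds and you’ll watch the fitted distribution narrow and wander generation over generation - the tail-eating, in miniature.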