
London based software development consultant

  • 736 Posts
  • 97 Comments
Joined 5 months ago
Cake day: September 29, 2025


  • I think you’re misconstruing the author’s argument; at no point does the author imply that Claude knows best, or that Electron apps are better. Their closing argument is certainly not an endorsement of Electron or AI slop.

    Don’t get me wrong: writing this brings me no joy. I don’t think web is a solution either. I just remember good times when native did a better-than-average job, and we were all better for using it, and it saddens me that these times have passed.

    Kidding ourselves that the only problem with software is Electron, and that it will all be butterflies and unicorns once we rewrite Slack in SwiftUI, is not productive. The real problem is a lack of care. And the slop: you can build it with any stack.


  • Imagine being such a slop-brainwashed fanboi

    Do you have any evidence for this? Looking through the post and the author’s other blog post titles, I see very little mention of AI or Claude.

    Instead of throwing labels at the author, it’s much more worthwhile to discuss their key argument about the challenges of developing native apps.




  • codeinabox (OP) to Opensource · Tests Are The New Moat
    13 days ago

    I wonder if we’ll end up in a situation of open source projects with closed source tests. Though I don’t know how that would work, because how would you contribute a new feature if the tests are closed? 🤔


  • What the article is talking about is how AI is a multiplier, and that to get benefits from it rather than detriments, you need to get your house in order first:

    The Widening AI Value Gap report by BCG found a similar result from a different angle, where 74% of companies struggle to scale AI value, with only 21% of pilots reaching production. The other 5% generating real returns had first built fit-for-purpose technology architecture and data foundations. This suggests that the problem is not AI but the underlying infrastructure to which it is being added.

    AI scales the groundwork; teams that successfully adopt AI typically already have solid foundational practices in place, while those lacking them struggle to get value from their AI investments.


  • The conclusion aligns with my own belief, which is that it’s better to create a minimal context by hand than to have agents generate one:

    We find that all context files consistently increase the number of steps required to complete tasks. LLM-generated context files have a marginal negative effect on task success rates, while developer-written ones provide a marginal performance gain.

    When I’ve had Claude create a context file, it’s been overly verbose, which also costs tokens.
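    For what it’s worth, the kind of minimal hand-written context I mean is only a few lines. This sketch is entirely my own invention; the file name and contents are illustrative, not taken from the paper:

```markdown
# CLAUDE.md: deliberately short, written by hand
- Python 3.12; run tests with `pytest -q` before claiming a task is done.
- Source lives in `src/`; tests mirror it in `tests/`.
- Prefer small diffs; never reformat code unrelated to the task.
```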





  • codeinabox (OP) to Programming · Bias Toward Action
    22 days ago

    There are some really good tips on delivery and best practices; in summary:

    Speed comes from making the safe thing easy, not from being brave about doing dangerous things.

    Fast teams have:

    • Feature flags so they can turn things off instantly
    • Monitoring that actually tells them when something’s wrong
    • Rollback procedures they’ve practiced
    • Small changes that are easy to understand when they break

    Slow teams are stuck because every deploy feels risky. And it is risky, because they don’t have the safety nets.
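    The first bullet can be made concrete in a few lines. This is a hedged sketch rather than any particular team’s setup: the flag store, names, and flows are all made up for illustration.

```python
# Minimal feature-flag sketch. In production the dict would be backed
# by a config service so flags can be flipped without a redeploy;
# here a plain dict stands in for it. All names are illustrative.
FLAGS = {"new_checkout": True}

def is_enabled(flag: str) -> bool:
    """Unknown flags default to off, so a typo fails safe."""
    return FLAGS.get(flag, False)

def checkout(cart: list) -> str:
    # The risky new path stays behind the flag; the old path is the fallback.
    if is_enabled("new_checkout"):
        return f"new flow for {len(cart)} items"
    return f"old flow for {len(cart)} items"

print(checkout(["a", "b"]))    # new flow for 2 items
FLAGS["new_checkout"] = False  # the "turn it off instantly" kill switch
print(checkout(["a", "b"]))    # old flow for 2 items
```

    The point mirrors the quote: once the deploy has happened, disabling the feature is a data change, not a code change.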



