Mistakes I see engineers making in their code reviews | Hacker News
An interesting article on how to conduct a good code review. I've chosen to link to the Hacker News thread where I first found it, rather than to the blog post itself, because this is one of those HN threads where the comments are worth reading too.
·news.ycombinator.com·
Why Large Language Models Won’t Replace Engineers Anytime Soon

In short: an attempt at a mathematical proof as to why humans won't be replaced by LLMs any time soon.

I'm not sure I have the background to follow all of this, and I'm not sure it's as reassuring as some might want it to be, but it's an interesting read. Ultimately, I suspect the "replacement" will or won't happen not because of facts about the world, but because businesses will decide it's for the best, and that will cause things to collapse.

·fastcode.io·
Why I'm declining your AI generated MR - Stuart Spence Blog

I've been thinking recently about AI/LLM-generated PRs on GitHub. I've only received a couple, and they've not been great. That's got me wondering whether I should entertain them at all, from both a software development point of view and an ethical one.

This blog post is an interesting read for someone else's take on things.

·blog.stuartspence.ca·
Software Engineering for Machine Learning: A Case Study
Abstract—Recent advances in machine learning have stimulated widespread interest within the Information Technology sector on integrating AI capabilities into software and services. This goal has forced organizations to evolve their development processes. We report on a study that we conducted on observing software teams at Microsoft as they develop AI-based applications. We consider a nine-stage workflow process informed by prior experiences developing AI applications (e.g., search and NLP) and data science tools (e.g. application diagnostics and bug reporting). We found that various Microsoft teams have united this workflow into preexisting, well-evolved, Agile-like software engineering processes, providing insights about several essential engineering challenges that organizations may face in creating large-scale AI solutions for the marketplace. We collected some best practices from Microsoft teams to address these challenges. In addition, we have identified three aspects of the AI domain that make it fundamentally different from prior software application domains: 1) discovering, managing, and versioning the data needed for machine learning applications is much more complex and difficult than other types of software engineering, 2) model customization and model reuse require very different skills than are typically found in software teams, and 3) AI components are more difficult to handle as distinct modules than traditional software components — models may be “entangled” in complex ways and experience non-monotonic error behavior. We believe that the lessons learned by Microsoft teams will be valuable to other organizations.
·microsoft.com·