No, large language models aren’t like disabled people (and it’s problematic to argue that they are)
I wish I had been following some of this discussion a year ago. Great stuff, and it really shows how software people tend to rush into a field that's new to them as if they can reason about anything from first principles. Good commentary on how ableist it can be to compare these models to certain humans, implicitly ranking them. I like the highlighted questions because they're very similar to what I ask in reviews.
If the point is to build effective and trustworthy technology, then I think these are the wrong questions to ask. In that scenario, we should be asking questions like: What do we want this system to do? How do we verify that it can carry out those tasks reliably? How can we make its affordances transparent to the humans who interact with it, so that they can appropriately contextualize its behavior? What are the system's failure modes, who might be harmed by them, and how? When the system is working as intended, who might be harmed, and how?