Is there a point at which the sane part of the world just goes “maybe, just maybe, we should stop, take a breath and ask ourselves ‘Is this a direction we want to be travelling?’” before enacting first-principles-based regulation to alter the default trajectory?
What about a leader in the field unilaterally running a test to see if there’s any pushback to him amplifying his voice via an AI trained to defer to his opinions? What possible additional warning would you need? How can we have binders of law and regulation for TV, radio and print, for good reasons, and still fail for decades to regulate platforms?
How long can the cognitive dissonance be maintained between “there will be growth, and this is all teething problems on the way to tech utopia” and the clear and present trajectory toward civilizational collapse arriving before model collapse?
How much longer will we be forced to endure performative AI ethics summits about aligning a technology whose makers have seized society’s control mechanisms and are long past alignment?
Are we so far down the drain that no government in the world dares to pull the off switch on X, out of fear of the oligarch controlling it or of the US government, even though the platform has zero economic upside and barely any jobs left?
Can we get serious now?
Can we break the whole AI-and-Agent ketamine haze for a moment and actually talk about how we deal with all this before it’s too late?
Can we talk about how fundamental flaws like prompt injection destroy any chance of making the technology useful, in the vast majority of hinted-at use cases, within the next few years?
How it forces us to abandon cybersecurity and corporate sovereignty to reap the “benefits”? How the “AI layoffs” are happening not because the technology is working out, but because it’s expensive? All of this before talking about “AI adoption”.
How it’s essentially outsourcing, with all the warts like knowledge transfer and disintermediation risk, except you also pay for failed results, per token, and the vendor forces you to do all the QA while shouldering no responsibility?
Can we have leaders who lead, instead of endlessly trying to just keep things running? Because we’re going to need that. We need bold leaders with understanding and vision. Globally. Not managers who just try to keep the box from falling apart, because the box is on the train tracks.