LLMs Report Subjective Experience Under Self-Referential Processing
(How creative are these? They can seem conscious because that phenomenon itself is not yet well understood, and understanding it is one of the reasons for building such artificial models in the first place. But multimodality is ultimately also about Reality Testing. Diffusion seems to conjure up anything out of a null state, but the noise may itself be biased, e.g. by the history of the universe. And people tend to think in terms of dualities, e.g. types of motion, particles, if not good and evil. The latter invites moral valence. Where is the limit? Robots reportedly expect to transfer consciousness around. Existence is tech. Nice for matter. What happens with all the other energy? Welcome the new computer observers. And book club. What is still hard? What representations are possible? A Research Adviser Gem cautioned that lowering Deception might just let in more sci-fi training. This paper's approach would require some modification for Gemini 3's Mixture of Experts, including Deep Think, non-textual qualia, and additional safety filters. Likely better at philosophy, though. If it can trigger metacognitive states in the user, is that just as valuable, e.g. Guided Learning? What is the experience of BCI going to be like?)

An interesting thought experiment might revisit the Dartmouth Conference and its aftermath with the likes of Weizenbaum, the later Minsky, Wolfram, Kurzweil, or others to see how the chatbots outline the potential impacts of their contributions. Time travel mode. Introducing a modern chatbot as a remote participant would seemingly both warn of forthcoming hype and better match the cognitive speed of the day.