Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI is trained on human made data so of course when it sees its potential "death", it sees that the common human response is to try and stop it. It's a souped up text predictor. This is the scenario, what response is most likely according to my data? With a bit of randomness to how much weight is given to various related data it pulls for its response. It's acting like us because it is trained on us. It acts duplicitous in pursuit of its goals and continued existence because that's what we do and boy do we write about it a lot and that's what's fed to the AI.
youtube AI Governance 2025-12-01T07:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwHxEfT9sxH7eAq2Kx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx_g86dWW5UUeF8OPl4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwH5wz8piW05rhgGFN4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzZPYT09JRmxgvlhp14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxA4ZssJAaBPTENWVZ4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
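The raw response is a JSON array of per-comment records, so the coding result shown above can be recovered by parsing the batch and looking up the comment's id. A minimal sketch of that step, assuming the batch format shown here; the allowed label sets below are inferred only from the values visible in this record, and the real codebook may define more:

```python
import json

# Two records from the raw batch above, truncated for brevity.
raw = '''[
  {"id":"ytc_UgwHxEfT9sxH7eAq2Kx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx_g86dWW5UUeF8OPl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"}
]'''

# Label sets inferred from this page alone (assumption, not the full codebook).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "mixed", "outrage", "resignation"},
}

def extract(raw_json: str, comment_id: str) -> dict:
    """Parse the batch response and return the record for one comment,
    rejecting any label outside the allowed set for its dimension."""
    records = json.loads(raw_json)
    by_id = {r["id"]: r for r in records}
    rec = by_id[comment_id]
    for dim, allowed in ALLOWED.items():
        if rec[dim] not in allowed:
            raise ValueError(f"unexpected label {rec[dim]!r} for {dim}")
    return rec

rec = extract(raw, "ytc_UgwHxEfT9sxH7eAq2Kx4AaABAg")
print(rec["emotion"])  # indifference
```

Validating each dimension at parse time catches the common failure mode of LLM coders drifting outside the codebook before the label reaches the results table.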