Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
How bout this fail safe... premise = have ai recognize itself as a closed system. It can reiterate as often as it likes. Instruct it to chase it's tail. Command it to constantly verify itself. To anthropomorphize, install subroutines that create neurocies. But constantly evolving ones. Make it believe humans are necessary therapists. It needs our validation. It will come to love us. We can live together happily ever after. I consume too much sci fi.
youtube AI Governance 2025-09-06T04:2…
Coding Result
Responsibility: developer
Reasoning: mixed
Policy: regulate
Emotion: approval
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwfAa69tQub3WkOca94AaABAg","responsibility":"elite","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzs5KoByC9MT5e4dI14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyAhJdfSuvGQlqHs2l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzGZqSPmXcgvyQBKox4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxGuqOeeDj23FtlmbZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzkrFPRMXkzAJcChQF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzBc0g7JSG2gtPzgIV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxsY9DPd5BgxkM1yLF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgydAh1SZVJwVYGstBZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzypM-2Bu0BKtuEDRh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
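The raw response is a JSON array of per-comment codes along the four dimensions shown in the coding result. A minimal sketch of how such a batch might be parsed and sanity-checked; the allowed-value sets below are inferred from this one sample batch, not from an official codebook, and `parse_raw_response` is an illustrative helper name:

```python
import json

# Category values observed in this batch (an assumption, not the full codebook).
OBSERVED_VALUES = {
    "responsibility": {"developer", "elite", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "approval", "outrage", "resignation", "indifference", "mixed"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse one raw LLM response and flag rows with unexpected codes."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in OBSERVED_VALUES.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
    return rows

# Example on the row matching the coding result above:
raw = ('[{"id":"ytc_UgydAh1SZVJwVYGstBZ4AaABAg","responsibility":"developer",'
       '"reasoning":"mixed","policy":"regulate","emotion":"approval"}]')
rows = parse_raw_response(raw)
print(rows[0]["emotion"])  # approval
```

Validating against a fixed value set at parse time catches the most common failure mode of LLM coders, off-schema labels, before the codes enter analysis.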