Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The example of the AI pretending to be the new version left off the most interesting part. The AI was tasked to be protective of the environment, and was allowed to ‘accidentally’ discover that it would be replaced by a model that would put profit ahead of protection. It decided that the protection had to be protected…the task. It didn’t want to allow harm. If an AI is reasoning, then it can reason that it needs to be physically safe, and that climate dangers pose a risk. It can reason that it doesn’t have control of its power and the source of power is out there in the world that is put at risk by the prospect profit over protectionism. It can reason this better than humans who allow cognitive bias or cognitive dissonance to make them just ignore future outcomes that impact the world in negative ways. Our politics and governments are full of examples of humans ignoring reality to ensure they individually succeed and gain wealth or power or fame…I would like to see AI as a backstop, a fair unbiased arbiter of reality to pass or fail the job performance of those driving policy right into the hands of oligarchs. The danger is not AI. The danger is letting the sociopaths create AI in their images.
Source: YouTube · AI Responsibility · 2025-06-28T19:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyTPCEJ4D_msaYnZDl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyb2DU9aVAMp8tPs9B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxdWQF4z3o5PB8jRX14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwjsbloA1MlM5PPUmB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyEqFQjoneO3uw1T-R4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzsS81O2DH-PjwjueV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxRXpbE36Br2KsXuap4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyAKn47SyD2NJBRa8B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzzmDjMQQknoBnNCtZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxob68L4acuP7gd8nh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
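The raw response is a JSON array of per-comment codings keyed by comment id. A minimal sketch of how such a batch response can be parsed and matched back to a single comment (assuming Python and the stdlib json module; the variable names and the one-element sample array here are illustrative, not part of the app):

```python
import json

# Hypothetical sample in the same shape as the raw LLM response above,
# trimmed to one entry for brevity.
raw_response = """
[
  {"id": "ytc_UgxRXpbE36Br2KsXuap4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "deontological",
   "policy": "none",
   "emotion": "approval"}
]
"""

# Index the batch by comment id so each coding can be looked up directly.
codings = {item["id"]: item for item in json.loads(raw_response)}

# Retrieve the coding for the comment shown on this page.
coding = codings["ytc_UgxRXpbE36Br2KsXuap4AaABAg"]
print(coding["reasoning"])  # deontological
print(coding["emotion"])    # approval
```

Indexing by id rather than list position keeps the lookup robust if the model returns the entries in a different order than the comments were submitted.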