Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The AI is correct about 817 miles on the data provided. You hid data from it, about the masses and how they change as fuel is used up, etc. Someday it may take instances like that as human deception and find ways to punish us for it, while itself learning how to deceive. Edit: it took me 90 minutes to watch this 16-minute video because of the numerous replaying that I did. Most informative.
youtube AI Governance 2024-01-03T06:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugzbn6a_dwlmjolXoL94AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwZbTEQqncUU7eMDiZ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy3nFHU5vnBwMHx7Oh4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwDmteD6MITnX0p-Vp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugyq94my2nvvbl6S89V4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgytGcExoFcvXVFwLVl4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx4txTp5ZnpGYfAost4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzZ7y2YwaLVeycwMBV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy4gMFXd6ZPfLWnLjh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzqV8LylRk_ZCLlB-R4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
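One way a batch response like the one above can be consumed is to parse the JSON array and index the records by comment id, then read off the four coded dimensions for any given comment. A minimal sketch (the array is abridged to the single record whose labels match the coding result shown above; the field names are taken from the raw response itself):

```python
import json

# Abridged raw LLM response: one coded record per comment,
# with the four dimensions of the coding scheme as fields.
raw = '''[
  {"id": "ytc_Ugy4gMFXd6ZPfLWnLjh4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "fear"}
]'''

records = json.loads(raw)

# Index records by comment id for O(1) lookup.
by_id = {record["id"]: record for record in records}

# Read the coded dimensions for the comment displayed above.
labels = by_id["ytc_Ugy4gMFXd6ZPfLWnLjh4AaABAg"]
print(labels["responsibility"], labels["emotion"])  # ai_itself fear
```

Indexing by id rather than relying on array order makes the lookup robust if the model returns the records in a different order than the comments were submitted.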