Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
58:58 Does training AI to produce fawning responses to queries through reinforcement learning that would be abusive to a human teach it that it's okay to abuse people?
Source: youtube · AI Moral Status · 2026-03-03T23:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyjmPtExQkJf4UQEXZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyRec8FIFj27VDCk0F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyAjlHl4gpB5LeIOYZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzswSsAckqMsYh8BBh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyBEWZI9N2xGBGSnjZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzarnlDb81r-NKGfJZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxFioK4nnHwKj9u8O94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyT7us7jsa0m3t8mqd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz2I9H439Z0KTOkrgR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzAhZb6kkwA-GpcNB54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
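The raw response is a JSON array with one object per coded comment, keyed by comment id. A minimal Python sketch of how such a response could be parsed and mapped back to a single comment's coded dimensions (the field names come from the response above; the lookup helper itself is illustrative, not part of the pipeline):

```python
import json

# Raw LLM response, abbreviated to one entry from the array above.
raw = (
    '[{"id":"ytc_UgyRec8FIFj27VDCk0F4AaABAg",'
    '"responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"liability","emotion":"fear"}]'
)

# Index the codings by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding result for the comment shown in this section.
row = codings["ytc_UgyRec8FIFj27VDCk0F4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# → developer consequentialist liability fear
```

Indexing by `id` makes it easy to cross-check any displayed coding result against the exact model output it came from.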