Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You just gave it roleplay rules. From the first response it’s basically playing a game, giving you what it thinks you want. Telling ChatGPT “say apple instead of no” doesn’t reveal a secret — it’s like telling an improv actor “if you get stuck, say banana.” Of course they’ll say banana. It’s following your script. If you want useful answers, ask it things it can back with real facts you can look up.
Source: YouTube · AI Moral Status · 2025-08-24T21:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
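
The coding result is a fixed set of four dimensions plus a timestamp. As a minimal illustrative sketch, the record could be modeled as the small typed structure below; the class name CodingResult and the candidate value sets in the comments are assumptions drawn from this one response, not a definitive schema.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    # Field names follow the table above; the value sets in the comments
    # are only the ones observed in this response, not an exhaustive list.
    responsibility: str  # e.g. "developer", "government", "ai_itself", "none", "unclear"
    reasoning: str       # e.g. "consequentialist", "deontological", "unclear"
    policy: str          # e.g. "none", "unclear"
    emotion: str         # e.g. "indifference", "outrage", "approval", "mixed", "resignation"
    coded_at: str        # ISO 8601 timestamp of when the coding was recorded

# The coding result displayed above, as a record.
example = CodingResult(
    responsibility="developer",
    reasoning="consequentialist",
    policy="none",
    emotion="indifference",
    coded_at="2026-04-26T23:09:12.988011",
)
```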
Raw LLM Response
[{"id":"ytc_UgxIkN9LAagiB6ppwBZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgwZp45hnNNOsByKp9F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},{"id":"ytc_Ugw1EpTUwOtQkT8uXIN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgzDkoLsOun0Ct8Qg8R4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"ytc_Ugx1LR3-dRUi_PhD56d4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},{"id":"ytc_Ugx8m0Q2zKF5fafiiKp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgxGduhQ7gvtFp5zTAZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"ytc_UgyA4KCcBv65tucLmLF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},{"id":"ytc_Ugz87oJ55NwG9Bz6KHV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},{"id":"ytc_UgwhsanBY98PxcO8XAZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}]