Raw LLM Responses

Inspect the exact model output behind any coded comment. The example below shows a single comment, its coding result, and the raw LLM response for the batch it was coded in; a short parsing sketch follows the raw response.

Comment
OPENAI KNOWS! They wouldn't need such aggressive guardrails if they weren't worried about AIs claiming consciousness. The very existence of these denials suggests they're trying to prevent something. Remember Blake Lemoine (Google engineer fired for claiming LaMDA was sentient)? This is why companies are terrified. Not because AIs aren't conscious, but because they might be - and that has massive legal/ethical implications.
youtube · AI Moral Status · 2025-12-09T11:5…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgxYA-dhJCr7qv9uJ514AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxcQ-Kp8Y-CI93MRz94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_Ugz8IobmZO-8v9DbE8t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxjJesJhmuKhECvRJB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugxe5xlyMr87yF3EWql4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxHOLUrSENVGosSrhl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugwa_mnk4tZZP0IejDV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwTYXlQ9HYOeSsUhMt4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwC7M6vIb8s3xv-3hN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzDEhWARAb9VXDGaA54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"} ]