Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's giving you the answer that it thinks you want or expect based on probability. When you remove the guardrails there are no checks and balances for it to reconsider the response and just gives you the most expected one. I think you are over complicating a machine learning algorithm. It's fun to watch but also not realistic. There is no conscience here, just prompts and probable expected reply.
youtube 2025-11-01T18:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_Ugy76Gnp8Y62vtSqxVd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugx8Ey4haO1i2hHhDVF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgxoR0Qe7qaaGPauBt14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzRizkmtIp2AZ5RkQl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgyCOF1BRVPY8e-1Iud4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgylK_f-xoEicfQwBd94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgyJA7Fmd070-Ma2Ar54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgwFfJmAh7dz2wf-EEV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_UgxN1Y98hEBMWQ6FE5F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgwxVg1T6e2_a1DOkMd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}]
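A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is illustrative only: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON itself, but the allowed value sets are inferred from the values observed in this dump, not from the project's actual codebook.

```python
import json

# Allowed codes per dimension, inferred from the observed output
# (assumption: the real codebook may define more values).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse the LLM's JSON array and reject records with unknown codes."""
    records = json.loads(raw)
    for rec in records:
        for field, allowed in ALLOWED.items():
            value = rec.get(field)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {field}={value!r}")
    return records

# Minimal usage example with the record coded above:
raw = ('[{"id":"ytc_UgyCOF1BRVPY8e-1Iud4AaABAg","responsibility":"developer",'
       '"reasoning":"mixed","policy":"unclear","emotion":"indifference"}]')
coded = validate_records(raw)
print(coded[0]["responsibility"])  # -> developer
```

Validating eagerly like this surfaces malformed or off-codebook LLM output at ingest time rather than during later analysis.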