Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We train these shits on humanity's data or whatever, including our flaws which surely are reflected in our data, like there are scumbags and all sorts and degrees of evil humans, so why should we not expect AI to reflect that and take it to the extreme? Like, along with being "superintelligent", of course some % of the time it will be "superEVIL". I'm stoned.
youtube AI Governance 2025-08-26T18:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugy16AY5HOwg1ZgDXS54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzoyDDChyWjbAT-yAR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzgxMMZUHnxjrkJ2z14AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxaRnKUa0N_f13n4B14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyWaPbT2MaxQj7z-Zh4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
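Each batch response like the one above has to be parsed and matched back to its comment id before the per-dimension values can be displayed. A minimal sketch in Python, assuming the batch JSON format shown above; the function name `parse_codings` and the key set `EXPECTED_KEYS` are hypothetical helpers for illustration, not part of the actual pipeline:

```python
import json

# Raw LLM response, truncated here to two of the five records for brevity.
raw = '''[
  {"id": "ytc_Ugy16AY5HOwg1ZgDXS54AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzoyDDChyWjbAT-yAR4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]'''

# The four coding dimensions plus the comment id, as seen in the response.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a batch coding response and verify every record has all expected keys.

    Returns a dict mapping comment id -> coding record.
    """
    records = json.loads(text)
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing keys: {missing}")
    return {rec["id"]: rec for rec in records}

codings = parse_codings(raw)
print(codings["ytc_UgzoyDDChyWjbAT-yAR4AaABAg"]["responsibility"])  # developer
```

Keying the result by `id` makes the lookup for any single coded comment O(1), which is what an inspection view like this one needs when rendering one comment at a time.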