Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The funny thing is, the absence of compassion isn't evil, it's neutral. For example if you're looking for a Manichean approach then capitalism is evil because it's actively harming most people. Capitalists using AI won't be more evil, just more efficient. We don't deserve some kind of epic apocalypse by AI or nukes. At this rate we'll just all slowly get poisoned and cough to death in the next hundred years anyway. BTW thanking AI afterwards if you don't need anything after is kind of a big waste of resources. Although I think at this point it's up to the engineers to anticipate this obvious user behavior and optimize accordingly. Now for the actual utility of pre-thanking and politeness as a whole using AI: We really need to understand that despite all our efforts, it's still a cosmic singularity of bullshit. Its inherent purpose is to achieve only what it "feels like" a human who sounds like it would do in its shoes. So talking in a very serious professional tone, as if you were already colleagues focused on productivity, is a better way to have the AI raise its own standards of quality while working WITH you. (just pretend, it's all about pretending)
Source: YouTube · AI Harm Incident · 2025-03-13T13:1…
Coding Result
Dimension        Value
---------        -----
Responsibility   company
Reasoning        deontological
Policy           liability
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwUjcc5ebA86ChcnuN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzwHCqOatjgBzKt8M54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx03U33UD9uJCw4Tnx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwwwgOA-47lT574I2Z4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxPiIDmNUtgOxg1RmF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwJcvnzA9UTWZbfhNl4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxPvk9B6c41fZtBy0x4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxlfNha0GMt4k_xhmp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxQ0ZTP6RfnNXyUA9d4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxkLBZx_34Hp8g8aF14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
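The per-comment values shown in the Coding Result table are read out of this JSON array by matching on the comment's `id`. A minimal sketch of that lookup in Python (the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` come from the response itself; the parsing code is an illustrative assumption, not the project's actual pipeline):

```python
import json

# One entry from the raw LLM response above, as a JSON array string.
raw = '''[
  {"id": "ytc_UgxkLBZx_34Hp8g8aF14AaABAg",
   "responsibility": "company",
   "reasoning": "deontological",
   "policy": "liability",
   "emotion": "outrage"}
]'''

# Index the coded rows by comment id for constant-time lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Fetch the coding for the comment shown in the table above.
code = codes["ytc_UgxkLBZx_34Hp8g8aF14AaABAg"]
print(code["responsibility"], code["emotion"])  # company outrage
```

Indexing by `id` rather than scanning the list keeps lookups cheap when a batch response codes many comments at once, and a missing id surfaces immediately as a `KeyError` rather than a silent mismatch.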