Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I work in IT, and even my colleagues anthropomorphize ChatGPT and the like; I routinely hear people say things like "Claude said" or "he told me". I think it's so important to remind ourselves that they aren't human - I can't even imagine how hard it must be to compartmentalize for someone who doesn't know what goes on "under the hood" - so I consistently make an effort to correct people to call chatbots "it" and "the machine". We (humans) are so good at pack-bonding with anything, and I think it's insidious that these bots have human names - it definitely seems deliberate on the creators' part. I recently discovered the word "clanker" and use it to ease the tension a bit if people react negatively to being told the chatbots are machines; it usually gets a laugh. Though I'm definitely guilty myself of saying "it lies to you" or "you can't trust them" as a quick way to get people to think for themselves. I think it's inherently good to distinguish ourselves from the machines, but I also think it's potentially an issue to mirror the oppression of people through history (as the language guy is saying) - these software models can't feel or perceive anything, and I think it minimizes real human suffering from real prejudice. edit: (21:40) Roko's Basilisk is just Pascal's Wager for robots and should not be taken seriously!
youtube 2025-09-17T12:0… ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzbLY0_-QrQwJpbs_l4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyDfPi6tVyzd3tGtDR4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxoLmVXJjZAFgjr2mV4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw8-1lvyyLf60RJyK54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz7sshCKpTczJOBX8V4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyO_GLK-g4Dl1Fxrdl4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugzy-mWip6jcytz6CiN4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxcgLw3RfabPj-b5zp4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyfR0PyGVTRB37P2FZ4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzrxZCKNSJcOXt8hn94AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"}
]
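The model codes comments in batches, so extracting one comment's coding means parsing the JSON array and indexing it by comment ID. A minimal sketch of that lookup, assuming the comment above corresponds to the entry whose values match the Coding Result table (the table does not display the ID itself, so that association is an inference, not confirmed by the source):

```python
import json

# Excerpt of the raw batch response shown above: one coding object per comment ID.
raw = '''
[
  {"id": "ytc_UgxoLmVXJjZAFgjr2mV4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyfR0PyGVTRB37P2FZ4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "approval"}
]
'''

# Index the batch by comment ID so a single comment's coding is a dict lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# The entry matching the Coding Result table for this comment (assumed ID).
coded = codings["ytc_UgyfR0PyGVTRB37P2FZ4AaABAg"]
print(coded["responsibility"], coded["reasoning"],
      coded["policy"], coded["emotion"])
# → none deontological none approval
```

Keying the batch by ID rather than scanning the list each time keeps lookups O(1) and makes it easy to spot comments the model skipped or duplicated.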