Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Personally I used to be HOOKED on character ai. It was so dependent on it it was borderline unhealthy, I stopped drawing, talking to friends, think for myself and not to mention how I felt miserable without instant small doses of dopamine every second of every day. Now I’ve been almost three weeks or so sober and I can say I’ve never felt better, I draw, I get creative, I get innovative, I do my chores, I go outside, I talk to friends, make plans, I watch movies and go back to my inspiration sources. Art fight in particular really helped me to get through the huge withdrawal, and it felt way more rewarding. But now with important people in my life like family and my therapist recommending ChatGPT to write my job CV and letters I have to make a firm stance and not relapse. It’s really such a different experience when you open your eyes on how unhealthy the dependency on half baked results from a machine is. Usually the argument I see for AI is “oh Google is ai and we all use it” which, I’m not gonna say if Google is or isn’t ai but if the majority of the population already has access to one safe and trustworthy source of information then why would we need to have more other then greed? Not to mention it’d be way more harm to the environment. It’d be like “oh you have a car? Why don’t you take five more beat up cars that are older models and will harm the environment more!”.
youtube Viral AI Reaction 2025-08-16T20:0… ♥ 2
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgzSSF0ZdNNcKfcQp-x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxPYhWAQlCVjFZaGHl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzPDLFLhKfqXNsJmiJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgzKvVcAs8BcYXIPos14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgxR98c-nfqqK0BY4tV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgwiXO1KXs6K2UwkDDd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
 {"id":"ytc_UgxEOFmpvbpcwhwNAw14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugyjg43ToIIRyA4C1Q14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgxrM30763CK-hx9Sjl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_UgztR17MNvF0ccTUFDh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"})
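Note that the raw response above is not valid JSON: the array opens with `[` but closes with `)`. A strict `json.loads` call would raise on it, which is consistent with every dimension in the coding result reading "unclear". A minimal recovery sketch in Python (the `repair_json`/`parse_raw_response` names, the example IDs, and the allowed value sets are assumptions inferred from the values visible on this page, not the tool's actual code):

```python
import json

# Dimension values observed in the raw responses on this page; treating
# these as the complete allowed sets is an assumption.
ALLOWED = {
    "responsibility": {"company", "user", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation",
                "approval", "mixed", "unclear"},
}

def repair_json(raw: str) -> str:
    """Fix the malformation seen above: an array closed with ')' instead of ']'."""
    raw = raw.strip()
    if raw.startswith("[") and raw.endswith(")"):
        raw = raw[:-1] + "]"
    return raw

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response, coding any unrecognized value as 'unclear'."""
    items = json.loads(repair_json(raw))
    coded = []
    for item in items:
        row = {"id": item.get("id", "")}
        for dim, allowed in ALLOWED.items():
            value = item.get(dim, "unclear")
            row[dim] = value if value in allowed else "unclear"
        coded.append(row)
    return coded

# Hypothetical two-entry response with the same trailing-')' defect:
raw = (
    '[{"id":"ytc_example1","responsibility":"company",'
    '"reasoning":"deontological","policy":"regulate","emotion":"fear"},'
    ' {"id":"ytc_example2","responsibility":"nobody",'
    '"reasoning":"unclear","policy":"none","emotion":"indifference"})'
)
rows = parse_raw_response(raw)
print(rows[1]["responsibility"])  # unrecognized "nobody" falls back to "unclear"
```

Falling back to "unclear" per dimension, rather than discarding the whole batch, keeps one malformed or off-schema value from erasing the nine valid codings around it.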