Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I do not see how it is so harming though. Like if people don't vent to chatgpt directly, they might use other platforms on the internet, like reddit. Making it end up in the AI-pool anyways. Why does it matter that your annoying workplace situation ends up being in the training model for the AI? The AI will most likely not reproduce that story exactly to tell someone else about it, not even when asked to write a story about the workplace. Not to mention that it probably already has plenty of workplace stories, and tbh they probably boil down to similar stories anyways. Like your personal story is probably not unique enough for anyone to be able to notice that the AI reused it, not even yourself. I honestly think AI can be a great tool in that department of "therapy". Not that it replaces professional help!!! But in the way that it makes the person write about their feelings and perhaps realize that they do need professional help. My friend used AI, when he was in a really dark place and it helped him to overcome it and seek professional help. (Because AI listened to him, made him feel understood and "normal" and then recommended him professional help) Like I do not think it is generally bad. Perhaps you should only be aware of giving detailed information about yourself, other people, places you went to and so on. But noone will care about a college student that got nervous at their new job at a restaurant-story.
Source: YouTube, "AI Moral Status", 2025-06-26T13:1…
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugy4XvGJokPVGWpWIwN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxVdhFA6tJvcQ_kT7F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxRrlgHjiLVm9Ztubp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyZKHl-REbk18-Sobh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxNzBnSRfjbzH0E46t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyvepuCo__Ni7_ZNwB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwTb7UUhpfh4xigwYV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyaeWy917G4EN01kJp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxGygX1RTL49p_OUPF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzoULK0yAB6tq1wGPd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
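A minimal sketch of how a raw response like the one above could be turned back into the per-comment coding table. The allowed values per dimension are inferred only from the records shown here (the full codebook may define more categories), and the function name is hypothetical:

```python
import json

# Allowed values per dimension, inferred from the records in this
# appendix (assumption: the actual codebook may include more values).
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "liability"},
    "emotion": {"indifference", "approval", "resignation", "outrage"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding records) into
    a mapping from comment id to its coded dimensions, dropping any
    record that uses an out-of-vocabulary value."""
    out = {}
    for rec in json.loads(raw):
        dims = {k: v for k, v in rec.items() if k != "id"}
        if all(v in ALLOWED.get(k, set()) for k, v in dims.items()):
            out[rec["id"]] = dims
    return out

raw = ('[{"id":"ytc_Ugy4XvGJokPVGWpWIwN4AaABAg",'
       '"responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
coded = parse_codings(raw)
print(coded["ytc_Ugy4XvGJokPVGWpWIwN4AaABAg"]["emotion"])  # indifference
```

Validating against a fixed vocabulary guards against the model inventing categories mid-batch, which would otherwise silently corrupt the dimension counts.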