Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "The answers given by the chatbots are obviously spurred on by prompts. This is …" (ytc_Ugzx_SWRh…)
- "I've had almost this exact same discussion with chatgpt several times. It always…" (ytc_UgwBQrarO…)
- "Large language models ARE NOT AI!!! Somebody needs to scream this from the heave…" (ytc_Ugycz9Kaa…)
- "I had a chat with chatGPT once and it says no. Our topic was “so-doing-as-if”. T…" (ytc_UgyUe6Eac…)
- "I feel a glimmer of hope that, in the ai vs humanity 'war' (if one could call it…" (ytc_UgxyYPg9G…)
- "Hi Rashmi, you got the right answer. Kudos. The contest is over and winners hav…" (ytr_UgyowZBZu…)
- "vast majiority of people is not pro-active in their lives, jobs and so on. They …" (ytc_Ugw9zk_tK…)
- "I'm not against AI, esp after seeing how useful it is for science things, but I'…" (ytc_UgzCorWDq…)
Comment
Tutorial on how to copy your art-style? Nice power move!
But to be honest, the public discourse disturbs me. Some arguments are pure victim blaming: I even read comments under a news-page article about Nightshade where commenters started to speculate that the artists might be at fault when a plane, or a car controlled by an AI, crashes. In other words:
How dare you, that a big company stole your pictures, did not properly test its product, and that it replaced all their workers with bots? Do you not feel responsible for all the people you might kill? If you do not let us, the company, steal your work, we might kill the hostag... sorry, I meant to say: you will kill our innocent customers.
Source: youtube · Video: Viral AI Reaction · Posted: 2025-08-30T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwJqVTR61b9Jf0uo3d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyAy303-nwxqXyJgXd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxcaFml_VbU1j0Wry94AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz1yctBMVWeF8aqpRp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwbzus9sKXy96nz38h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyEY7NAOCntGBMyroN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzvtsL0xYoDrxR0BbJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxKJLy7aIhg2Pw859l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzUN29XhfuM2bMPG-54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzVogj9nYDFrKmkhsp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
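The "look up by comment ID" flow above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: it assumes the raw LLM response is a JSON array of objects with the dimension keys shown above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`), and the `index_by_id` helper name is hypothetical.

```python
import json

# Example raw response, shortened to two rows from the array above.
raw_response = """
[
  {"id": "ytc_UgwJqVTR61b9Jf0uo3d4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyEY7NAOCntGBMyroN4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw coding response and map comment id -> coded dimensions."""
    return {row["id"]: row for row in json.loads(response_text)}

coded = index_by_id(raw_response)
row = coded["ytc_UgyEY7NAOCntGBMyroN4AaABAg"]
print(row["responsibility"], row["policy"])  # prints: company liability
```

Indexing once and looking up by ID keeps each inspection O(1), which matters when a single coding batch returns hundreds of comment rows.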