Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgzAr3Cse…: Jesus Christ, ai is so evil it’s got me rooting for Disney. Edit: Nevermind it…
- ytc_Ugx0Q2G0c…: Bernie we all know UBI is on it's way. AI is the future of every economy in the …
- ytc_Ugym_QliX…: They got self driving trucks now, but they don't have the infrastructure to do …
- ytc_UgwNyBM0J…: When war will come you will miss this times when all you was care about is "AI i…
- ytc_UgwRHKxqa…: I know the one who wrote the Manuscript for this to be possible he knows the sol…
- ytc_UgxEJPPqb…: Excited to see self-driving improve. It is the future. Everything mentioned in t…
- ytc_Ugw0nNMS_…: A year old, but here's a legal take on it: she will try to sue and receive nothi…
- ytc_Ugx1REzMa…: These are two fundamental truths about humanity: (1) We will never do anything m…
Comment
I agree with you about the use of terminology and talking points derived from the systemic oppression of groups of actual sentient beings being bad, and also that abusive behaviour towards chatbots etc is bad, for one thing, because it indicates a desire to act in those ways against people, and because it normalises those behaviours (edit: and also because it's not impossible that actual, sentient AI will emerge one day (edit: not to be confused with an artificial system being *seen* as sentient, regardless of whether it actually is), and we'll have pre-emptively created a hostile environment for it to emerge into). But I'm concerned that you seem to be lumping people rightly resisting the growth of LLM use and LLM infrastructure in *with* people doing that.
Source: youtube · Posted: 2025-09-17T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzLYlAVwHXv96BeD_x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzbgAHcDx2eegjwvgV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxsKJwjUIiJ3XjhVZJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzJGZ1ooVUGdijKIvh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyFP7DgZQ5B7IUjsGR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyQL5TnOqHZiZvAZ2B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwQla6lhxvalrql6794AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzEf0mDl9f71tjLSEZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyNrOX61PzbFPdcl5p4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwu1f328SDfWg5QwHx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"disapproval"}
]
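A minimal sketch of how a batch response like the one above might be parsed and indexed for lookup by comment ID. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON shown; the allowed-value vocabularies are assumptions inferred from the coded examples, not a documented schema, and the payload is truncated to two records for brevity.

```python
import json

# Raw LLM response in the format shown above (truncated to two records).
raw = '''[
  {"id": "ytc_UgzLYlAVwHXv96BeD_x4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwQla6lhxvalrql6794AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

# Assumed vocabularies, inferred from the values observed in the response;
# the real coding scheme may allow more.
ALLOWED = {
    "responsibility": {"user", "developer", "company", "none"},
    "reasoning": {"virtue", "deontological", "consequentialist", "unclear"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"approval", "disapproval", "fear", "outrage",
                "indifference", "mixed"},
}

def parse_codings(payload: str) -> dict:
    """Parse a batch response and index it by comment ID, rejecting any
    record with a missing dimension or out-of-vocabulary value."""
    by_id = {}
    for rec in json.loads(payload):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return by_id

codings = parse_codings(raw)
print(codings["ytc_UgwQla6lhxvalrql6794AaABAg"]["policy"])  # ban
```

Validating against a fixed vocabulary catches the most common LLM-coding failure mode: a plausible but off-scheme label slipping silently into the dataset.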