Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "w花b the bigger question is, what will happen once ai becomes as good as humans o…" (ytr_UgwAFkuKz…)
- "You can audibly hear chatgpt getting more irritated as the conversation goes on …" (ytc_UgwkhBqhr…)
- "AI really needs more human experiences, that's why your AI wants to be/go wher…" (ytc_UgzvVVNls…)
- "AI and automation are only great if they exist to benefit the people... But they…" (ytc_Ugzuo20R0…)
- "What I want from AI is to make the traffic signals work better. I have to stop a…" (ytc_UgxIG1GRT…)
- "The false profit will use things like AI to decide many of us. The guy probably…" (ytc_UgyOdP4--…)
- "While I enjoyed the conversation and it was rather entertaining it was also full…" (ytc_UgzwrWP4w…)
- "He’s not lying. Most people use basic rewriters, but the SPRA algorithm in Ryne …" (ytc_UgzEb_zg5…)
Comment

> It’s always ironic hearing people say “AI is getting smarter” or “more powerful,” when they are they ones making it that way. It is literally created by humans and allowed to flourish. So weird to hear about AI blackmailing the “Engineer working on it,” what?! The person literally actively creating it is CREATING IT!

Source: youtube · AI Moral Status · 2025-06-04T19:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugyjg3OrAXMM-itgWnx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwiCeXQLTQg7_BwNIl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzDavvISEiTXHVTMih4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyUR8TJe2e_fM-BRXZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxFvJPQM7BnMjPip3d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzo05zsWji_5hFQFyp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzncHEVf5gW687VpTd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxpMos0nEWQO5K0wEl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxJrNOUtJcrHAwYC5x4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxWxqm-lcYIvn550nl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
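A raw response like the one above can be consumed downstream with a few lines of standard-library Python. The sketch below is not the tool's actual code; it simply assumes the JSON shape shown here (an array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys), drops any rows missing a required key, and tallies one dimension.

```python
import json
from collections import Counter

# A minimal sketch (hypothetical, not this tool's implementation) for
# parsing a raw LLM coding response and tallying the coded dimensions.
# RAW is a shortened stand-in for the response shown above.
RAW = '''[
{"id":"ytc_Ugyjg3OrAXMM-itgWnx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxWxqm-lcYIvn550nl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def load_codings(raw: str) -> list[dict]:
    """Parse the raw response, keeping only rows with all required keys."""
    rows = json.loads(raw)
    return [r for r in rows if REQUIRED <= r.keys()]

codings = load_codings(RAW)
emotion_counts = Counter(r["emotion"] for r in codings)
print(emotion_counts)  # Counter({'indifference': 1, 'outrage': 1})
```

Validating keys before tallying matters here because LLM output is not guaranteed to be well-formed; a row with a missing or misspelled key is silently skipped rather than crashing the aggregation.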