Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- ytc_UgzMH266M… · "The people who think coming up with an ai prompt is hard are just wrong, Creativ…"
- ytc_UgwcCEJ-L… · "Ai art is cool, but i really respect real artists and how hard it is to be succe…"
- ytr_UgxQ64PoJ… · "1.The art of real people is also 100% based on the art of other real people. How…"
- ytc_UgyFSzgrT… · "This will be a huge mistake replacing A.I with humans. A.I doesn't have the abil…"
- ytc_UgwS-Gju6… · "Let Ai run the show,, Let. the world leaders sit down and watch show,,,,, think…"
- ytc_UgxaIsnGo… · "There is an option where you can turn off Chatgpt to train itself on your data…"
- rdc_gt80hz8 · "This is how you get the US government into actually fixing climate change, just …"
- ytc_UgzidnHxo… · "Siri is the Ai most likely to kill the most amount of people. Siri is sadisticly…"
Comment

"ChatGPT is really starting to become human, huh? Just like you can say something stupid and emotionally charged to get ChatGPT to do something stupid, you can use emotional hooks to get regular people to do something stupid."

Source: youtube · AI Moral Status · 2025-05-28T00:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzy4ghXFRBtwjtL7ct4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwOJIHv_6ZHr1oW2gN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxXvfhHqnvaTrEk5dp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz9V9ecd8FyBt3V0454AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxBTDvGzafZR-LsEF54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgziwnQ9MP2iEh7NIj14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzJx4NICSsSel8AInJ4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxCaImx-jFdBPXivCh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz4HoEhn-o56sWQdsl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugyhxu8MxSFeD3j4IjF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
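A batch response like the one above can be consumed by parsing the JSON array and checking each record against the coding scheme. The sketch below is a minimal, hypothetical example: the `SCHEMA` value sets are inferred only from the values visible in this dump, not from any authoritative codebook, and `parse_batch` is an illustrative helper name, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension.
# NOTE: assumed from the values seen in this dump; the real codebook
# may define more (or different) categories.
SCHEMA = {
    "responsibility": {"ai_itself", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "none", "unclear"},
    "policy": {"ban", "liability", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "mixed", "unclear"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject values outside the schema."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim}={value!r}"
                )
    return records

# Example usage with a single synthetic record:
raw = (
    '[{"id":"ytc_example","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
)
records = parse_batch(raw)
print(records[0]["emotion"])  # -> fear
```

Validating at parse time means a model response that drifts off-schema (a misspelled label, a new category) fails loudly with the offending comment ID rather than silently contaminating the coded dataset.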