Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- “I don’t like that this will be the precedent setting case on this. New York Time…” (ytc_UgzHuNANp…)
- “The conversation is extremely arrogant. Also, llm companies have violated so mu…” (ytc_UgyayEKbR…)
- “That’s really scary. AI should never be used to scam people like this. Good thin…” (ytc_UgwUZY4Vc…)
- “Hands on trade jobs in the field will be there for a long time. AI in an office…” (ytc_UgzT2tQ-c…)
- “Juggling five different streaming subs is a total nightmare. I switched to Omnel…” (ytc_UgxYmAHH6…)
- “AI does not have a human heart or a biological or psychological or physiological…” (ytc_Ugwa7I1Y6…)
- “Why are AI kill drones morally different from unguided artillery? Humans aim the…” (ytc_Ugx8ZQwnY…)
- “OpenAI didn't want their illegal practices "opened" up to the public. So they Ep…” (ytc_UgxQ7jcz5…)
Comment

> A decade after true General AI, humans won’t be deciding anything. Humans deciding on AI rights would be akin to ants deciding on ours.

Source: youtube · Topic: AI Moral Status · Posted: 2020-05-17T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxFwg13HIwDYvN1xzB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzYCNhwxammyrS6RO14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwjBPwzT_Qw7n9UD2d4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw7cMFZrGGurrR-LaB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzFihmfK6GnXiI18aR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwF7AYOXBXbRHE0Bx54AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxQB_SJFgAqADe4Bm54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwKaH_8G5iEjt8UWut4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzY-XCMoxUorvFlSgR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzRnuXxDby7aOjA9Id4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```