Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "I laugh at the fact that people get so antsy over AI. Show me a machine that plu…" (ytc_UgwvruGaS…)
- "@carleqq yeah there's a ton of bots in this comment section. Tends to be the ca…" (ytr_UgzxF2u0L…)
- "Smart enough to invent artificial intelligence, but dumb enough to think we need…" (ytc_UgwBrY80R…)
- "Everyone wants to know what a robot is thinking? Or what it wants? Does it want …" (ytc_UgwT10Luu…)
- "Just discovered channel and I’m obsessed is right down my alley funny that algor…" (ytc_UgxFlzNFI…)
- "I think be put all Al 🤖 be a Ranking , great A B C......... a b c ...... S m l ,…" (ytc_Ugzj8juKd…)
- "I think the assumption is pooled/fleet vehicles. There are a couple of inhibitin…" (rdc_dbytuc7)
- "Well, we’re not anywhere near General AI and anybody says we are, have really ov…" (ytc_UgyRm5EKp…)
Comment (youtube · AI Governance · 2022-06-30T21:5…)

> I'm betting that AI will eventually stop humans from hurting each other. Almost all human problems are created by unreasonable behavior of which only humans are capable of. Entertainment almost always show AI behaving like humans because it makes good drama.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx4qxWJT1Lkb4w_ELF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzd-cw-_muiV1vLmmF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzMgTYvVE3ubddgIRJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzOW2MPsRaHIRTFp8N4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzJldARhqfUwG_rICB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw0jQn-k3rIehzP3z94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyWMRoBBP6WD3mzgz14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyIXpvRehDOuJsAiQd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwhRjEachs-kJ4LUVF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzatvAAZ2scKfnjun14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
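A batch response like the one above can be parsed and checked against the coding scheme before its records are trusted. The sketch below is a minimal validator, assuming the allowed values are exactly those seen in the examples here (the real codebook may include more categories); `parse_batch` and `SCHEMA` are illustrative names, not part of any pipeline shown on this page.

```python
import json

# Allowed values per coding dimension, inferred from the sample output above;
# the actual codebook may define additional categories.
SCHEMA = {
    "responsibility": {"ai_itself", "user", "company", "government",
                       "developer", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"approval", "resignation", "outrage", "indifference",
                "fear", "unclear"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # drop records missing a comment ID
        # Keep the record only if every dimension has an allowed value.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Hypothetical two-record batch: the second uses an out-of-schema value.
raw = ('[{"id":"ytc_x","responsibility":"user","reasoning":"consequentialist",'
       '"policy":"none","emotion":"resignation"},'
       '{"id":"ytc_y","responsibility":"martian","reasoning":"unclear",'
       '"policy":"unclear","emotion":"unclear"}]')
print(parse_batch(raw))  # only the first record survives validation
```

Validating against a closed vocabulary like this catches the most common failure mode of coding with an LLM: a syntactically valid JSON record whose labels drift outside the codebook.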