Raw LLM Responses
Inspect the exact model output for any coded comment: look up a response by its comment ID, or browse the random samples below.

Random samples:
- "I believe they will not harm us. The human will use to harm humans. It's nonsens…" (ytc_UgzDeC5s-…)
- "Yea but if you have to pick one... can we save the planet instead of enriching a…" (ytr_UgxD-25F6…)
- "As is AI, however fire existed prior to being harnessed by humans, whereas one o…" (ytr_UgygyKxTA…)
- "The problem is that AI is also taking entry levels jobs. Which means that the mo…" (ytc_UgwkVtnLy…)
- "That is rather interesting what pain is to robots. In reinforced learning we def…" (ytc_UgyS3o_P3…)
- "This is all based on the premise that there will be one AI that goes sentient. I…" (ytc_Ugypybs6o…)
- "That's good and all, but please don't think that releasing a friendlier gpt-5 wi…" (rdc_njgxouj)
- "would you watch a movie that was completely AI generated script, scenes, actors,…" (ytr_UgxjcL1HW…)
Comment

> The left will only push for AI even more now since Elon is against it.

Source: youtube · Topic: AI Governance · Posted: 2023-04-18T05:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugyw0MVTlfnOuX7MCgJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwN2PUKZ056drpKALl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgytdQ72feUFvJ7na154AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxIO2S2lbqVTtY6NkZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw-RiuIskwxLUKeXQ94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwmUkf8GKrTB8HquBx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwHximDs3dlIK6KOZh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxpK7c2OhNki_l1J6J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz3zh2DWDHvyOGuVTh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyBHLTEYEeFL2pwUkd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
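Each stored raw response is a JSON array of per-comment codes across the four dimensions shown in the table above. A minimal sketch of how such a batch could be parsed and validated before results are written, assuming allowed value sets inferred from the samples on this page (the project's actual code book may differ):

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# These sets are an assumption, not the project's actual code book.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "unclear"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every row needs a comment ID and a known value in each dimension.
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

raw = '[{"id":"ytc_x","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}]'
print(validate_response(raw))  # keeps the single well-formed row
```

Rows that fail validation could be re-queued for recoding rather than silently dropped; the filter above only illustrates the parsing step.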