Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "ai is boring and then 5 years later, they're researching algorrithmns that make …" (ytc_UgyGPycGN…)
- "Humans be like; ummmmmm, we know every AI movie, book, story ever made is a cau…" (ytc_Ugy_wBnTf…)
- "Join Income Movement's "No AI Without UBI" campaign now! Everyone's comments are…" (ytc_Ugxtpk0oY…)
- "Eye witness testimony has been shown to be riddled with errors, yet we still use…" (rdc_h53btu4)
- "Who will be making purchases if AI will take all of human jobs?? If it is to re…" (ytc_UgzfG8ons…)
- "@wakeoflove AI generation is trash. It's not any better visually than human wor…" (ytr_UgxZKb05h…)
- "Beacuse microsoft started a war with israel by killing 1200 people and filmed it…" (ytc_Ugx4DPiJ2…)
- "And the funny thing is all these scientists and engineers are racing to advance …" (ytc_UgyKiepod…)
Comment

> It could be said the three most dangerous things created by humans to date are: (1) government, (2) institutionalized education & religion, (3) artificial intelligence. Losing humanist (altruistic) knowledge and control of either one of these could lead to catastrophic consequences for humanity. Discuss amongst yourselves.

youtube · AI Governance · 2025-12-02T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw-AX0RVmBY4ZZFSlN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzadQjm93sEpxkTkAV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwOZOPP8OrTv5Tb2qV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwouQSDDrsQyYt_Uz54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx1VT21ZIIGbJPn1P54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz89833lUUpHcicJmF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyTsulXhNoCzlpVQ6F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyQfAaBkud45-G_B8t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxNwBBXyA85WOtIu-d4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxXrTgSSsgPuf4ofMF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
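A raw response like the one above can be parsed and validated before it is stored as a coding result. The sketch below is a minimal example, assuming the allowed values for each dimension are exactly those that appear in this page (the real codebook may define more); the function name and structure are illustrative, not the pipeline's actual API.

```python
import json

# Allowed values per coding dimension, inferred from the values visible
# in this dashboard. Hypothetical: the real codebook may differ.
CODEBOOK = {
    "responsibility": {"developer", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"fear", "outrage", "approval", "mixed",
                "indifference", "resignation"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index it by comment ID.

    Raises ValueError if an entry is missing a dimension or uses a
    value outside the codebook.
    """
    coded = {}
    for entry in json.loads(raw):
        comment_id = entry["id"]
        for dim, allowed in CODEBOOK.items():
            if entry.get(dim) not in allowed:
                raise ValueError(
                    f"{comment_id}: bad {dim!r} value {entry.get(dim)!r}"
                )
        # Keep only the codebook dimensions, keyed by comment ID.
        coded[comment_id] = {dim: entry[dim] for dim in CODEBOOK}
    return coded

raw = ('[{"id":"ytc_UgxNwBBXyA85WOtIu-d4AaABAg",'
       '"responsibility":"distributed","reasoning":"deontological",'
       '"policy":"regulate","emotion":"fear"}]')
coded = parse_raw_response(raw)
print(coded["ytc_UgxNwBBXyA85WOtIu-d4AaABAg"]["policy"])  # regulate
```

Indexing by comment ID makes the "Look up by comment ID" view above a plain dictionary lookup, and validating against a fixed codebook catches malformed or hallucinated labels before they reach the results table.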