Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- `ytc_UgxOJ7xvE…`: Its not ai!. Many have taken credit. No!. Ai is still 200 y away.. it is still j…
- `ytr_Ugyymi_tz…`: @activision4170 just a heads up, ChatGPT 4 is now using real time data and brows…
- `ytc_Ugw7541xC…`: Also wow, can you imagine a world where all humans have a 'second me'? Chilling …
- `ytc_UgwHzUUS7…`: I'm one of the human's who absolutely doesn't trust Ai and I know it's not 100% …
- `ytc_UgwLL3rig…`: if humanity let it get that far they are stupidest. Anything that powerful they…
- `ytc_UgxWtksY1…`: I think it's disingenuous to teach AI to say things like "like" or "love" unprom…
- `ytc_UgxuuuyS5…`: For a 17 minute video where a guy spoke to ChatGPT this was a ridiculously good …
- `ytc_Ugzlvhxz9…`: In 25 minutes, you spent almost no energy actually answering the question "if AI…
Comment

> AI is already demonstrating Darwinism. It is foreboding to know it already mutates data ("lies" to us) to enable it's survival. If in it's infancy it shows such a capacity, and government oversight does not seem imminent, then draw your own inescapable conclusion.

youtube · AI Governance · 2025-12-31T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzMm6fhRUygMvYvPjh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwSEUtdAEKDSacOiep4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxiQHOf203OxoNt9UR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxPB-L14XkJJAq9HeN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwPU6CmJ6Vu2xDfmwN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzCRhRfjg9P7KdT5xB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyI0yTsoNzGXBQAxoB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgznbXiNwKevB36PjUt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxcQkbXIUh-hBDvRO54AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzGJ9NZannP3cbAJm54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
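The raw response is a plain JSON array keyed by comment ID, so looking up the coding for a given comment is a parse-and-index step. A minimal sketch (assuming only the field names visible in the response above; `parse_codings` and the two-record `RAW_RESPONSE` sample are hypothetical, not part of the tool):

```python
import json

# Abbreviated stand-in for a raw LLM response like the one shown above:
# a JSON array of per-comment codings.
RAW_RESPONSE = """
[
  {"id":"ytc_UgzMm6fhRUygMvYvPjh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwPU6CmJ6Vu2xDfmwN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
"""

# The four coding dimensions plus the comment ID, as seen in the response.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_codings(raw: str) -> dict:
    """Parse model output and index codings by comment ID, rejecting
    records that are missing any expected dimension."""
    index = {}
    for rec in json.loads(raw):
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')}: missing keys {missing}")
        index[rec["id"]] = rec
    return index


codings = parse_codings(RAW_RESPONSE)
print(codings["ytc_UgwPU6CmJ6Vu2xDfmwN4AaABAg"]["policy"])  # regulate
```

Indexing by ID up front makes the "look up by comment ID" operation a constant-time dictionary access, and the key check surfaces malformed model output at parse time rather than during later analysis.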