Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I have a coworker who “writes” AI “music” and then tells people he wrote it. He…
ytc_Ugzu7FNX-…
https://youtu.be/s_vmAiGfg_s Wow ! This true. We can use AI in our favour. I am…
ytc_UgwR9dSFR…
Lots of worries and little solutions.
"AI will take your job"
"You should work…
ytc_Ugz2kG4oQ…
Just says how incredibly biased people can be, even Nobel laureates, when they b…
ytc_UgzPbuNNU…
I think the owner of this channel is promoting AI psychosis. Normal people are…
ytc_UgwHEipns…
no kidding I went onto character ai, against my personal morals, just last night…
ytc_UgylmmzyV…
Please stop playing with AI, arming them and giving it the ability to kill, this…
ytc_UgzYSDeNU…
The guy didn't claim that it was his artwork. He didn't claim it wasn't AI. He d…
ytc_Ugya0BJHf…
Comment
Does this simulation have a built-in safety mechanism for humanity, like bumper guards in bowling? There have been so many opportunities for nuclear annihilation (Cuban missile crisis, false alarms, accidents) but somehow, miraculously, it hasn't happened. Maybe somehow, miraculously, humanity will be prevented from creating AI Super Intelligence, or that AI Super Intelligence will somehow be prevented from destroying humanity by some mechanism other than our own. Why? Because maybe there is something humanity has that cannot be effectively replaced that is also worth preserving/evolving.
youtube
AI Governance
2025-09-04T18:5…
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzekckGXkaCTgbTs614AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwF_o-H6m_3GlmsWC94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxP33Wd04yyLo9RkIR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyGwAiiQDP_N06QDyJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzqu2UrO6YsSsFqLzx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwub2tQ_qKi3W5cNlp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgybMqmyp1hpWM2Pjpp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwNAuW9c7iBS0TCxp14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxNh7E89snZlLgPsyZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwufUzn05FUi1hCPRR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
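The lookup-by-comment-ID view above amounts to a linear scan over this JSON array: the raw model output is a list of per-comment coding objects, each keyed by `id`. A minimal sketch of that lookup, assuming only the field names visible in the response (the helper name `lookup_coding` is illustrative, not from the app):

```python
import json

# Excerpt of a raw batch response in the format shown above.
raw_response = """[
  {"id":"ytc_UgwNAuW9c7iBS0TCxp14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwufUzn05FUi1hCPRR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coding dict for one comment ID, or None if it is absent."""
    rows = json.loads(raw)
    return next((row for row in rows if row["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgwNAuW9c7iBS0TCxp14AaABAg")
print(coding["emotion"])  # mixed
```

Note that the second entry in the excerpt matches the "Coding Result" table above: the four dimensions in the table are exactly the four non-`id` keys of the matching JSON object.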