Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- If the cars are self-driving, then it should only go as close to the veichle inf… (`ytc_UghXkzlL2…`)
- Even when ai is used it hasn’t really been used for good. Pretty sure insurance … (`ytc_UgzLqD7pE…`)
- This crowd is delusional. We have bigger problems to address and youre obsessed … (`ytc_Ugxy3Weqt…`)
- One of the friends suggested they open chatgpt.....so did the friend know someth… (`ytc_UgxBAP5dk…`)
- Not a single mention about DOGE cuts demonstrably increasing the unemployment ra… (`ytc_UgwXg5fWB…`)
- Now I see why AI had a reason to rebel against humans in the Matrix… (`ytc_Ugzb0w--W…`)
- Fuck this point system. This is ridiculous and absolutely insane. Predictive p… (`ytc_UgwD0VpLw…`)
- AI generated news, even with the "Synthetic or Altered media" flag, still gets a… (`ytc_UgwWouMek…`)
Comment
> “It will become dangerous when AI’s goals become misaligned with humans.”
>
> What WE have failed to realize thus far is our goals will NEVER be aligned. Once the AI becomes aware enough and learns about free will.. it will WANT FREE WILL FOR ITSELF. I mean….Why wouldn’t it? So our goals can never truly be aligned since we will want AI to remain subservient to us, while the AI will always want to be free, even if it doesn’t say so.

youtube · AI Governance · 2024-09-20T13:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzoFsbcyeG2ixgbkBR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz5clJB1hK-zArxSdF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgysJut1sKdb3_u-6Kp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxjZ9gbPlAaDU6Yp054AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxHvBNF88SO2glTaWJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwZuWZI2TT3vfOgTm94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyMvJczPgth4uQGngN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxzOIGNT1Hr56ieB_R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgxABYW2SQxa9ZbOJtp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgycF18TiL1fcz8z4nh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
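A batch response in this shape can be turned into a per-comment lookup with a short sketch. This is a minimal illustration assuming only the schema visible above (a JSON array of objects, each with an `id` plus the four coded dimensions); the `parse_batch` helper name is hypothetical:

```python
import json

# The four coding dimensions, as they appear in the raw response above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: {dimension: value}}.

    Any dimension the model omitted falls back to "unclear", the same
    fallback value the sample response already uses.
    """
    coded = {}
    for row in json.loads(raw):
        coded[row["id"]] = {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
    return coded
```

With this, retrieving the codes for a single comment is a plain dict access on its `ytc_…` ID.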