# Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by its comment ID or by browsing the random samples below.
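As a rough illustration of the ID lookup, here is a minimal sketch assuming coded records are stored one JSON object per line in a `coded_comments.jsonl` file (the filename and storage layout are assumptions, not the tool's actual backend):

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the coded record for a comment ID, or None if absent."""
    # Assumed layout: one coded comment per line, as emitted by the pipeline.
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record["id"] == comment_id:
                return record
    return None

# Example: fetch the coding for one of the IDs seen in a raw response below.
print(lookup_comment("ytc_Ugz3zzyEG5V68b3yGjh4AaABAg"))
```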
## Random samples

| Comment preview | ID |
|---|---|
| AI needs power... pull out the plug, and it dies. Do these systems have somethin… | `ytc_UgxIGBet6…` |
| Like the a.i. rapper who got cancelled for taking from the rap data and making m… | `ytc_UgwDGhC2y…` |
| We have to do ai for our move to Mars. That way we can have two planets populate… | `ytc_UgzOc2ati…` |
| 💩. I just did all that AI background stuff and they use workday 🤦♂️. I have 19 … | `ytc_UgzJ5KLVV…` |
| @user-dx6wi2jk8i I think the one who gave the side eye is not a robot 🤖… | `ytr_Ugx5fjX05…` |
| So we should start organizing to prevent AGI at any means ? ( obviously I mean i… | `ytc_Ugy1wN4o8…` |
| @BlackNekomon Indeed. My AI-generated Doujin that I posted on the web got quite … | `ytr_Ugyp2dsjE…` |
| @RealElevenTimes you are right ai is very unpredictable and I can not say if tha… | `ytr_Ugx4MMgAD…` |
## Comment

> My question is and has been, to what end would AI do all these things. What would be it's motivation. Comparing it to human motivations like the need for money to buy food to survive, kill someone else so that they don't kill you in the case of wars etc, what would be AI's motivation to wipe out humanity? It does not eat, cannot be killed, has no emotion. So wipe out humanity then what? Unlike 'aliens' who have biological needs for survival (just giving an example) AI has no motivation other than what it mirrors of its users and/or creators. Please make me understand. Thanks

youtube · AI Governance · 2025-06-16T11:1…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
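The four dimensions come from a closed codebook. A minimal sketch of the record shape, with value sets inferred only from the labels visible on this page (the actual codebook may define additional categories):

```python
from dataclasses import dataclass

# Value sets inferred from the labels visible on this page; the real
# codebook may include more categories.
RESPONSIBILITY = {"ai_itself", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"ban", "regulate", "none", "unclear"}
EMOTION = {"fear", "approval", "indifference", "resignation", "mixed"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        """Check that every dimension uses a known codebook value."""
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )
```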
## Raw LLM Response

```json
[
  {"id":"ytc_Ugz3zzyEG5V68b3yGjh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxCRgWpo1KFa49Zaj14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxLmHOY9xh-ckjFXrF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxSUpib1hcRSwdrVQ54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxmoRo7KfUvI8YKBQ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwbkmUCxPIA6RSM2OJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwocjrzEwsqLjn51814AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwQ9iocCvn77xmtO3V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugwp3htsGjG1Y9fKDph4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy_tRF9MjH7Kx8szIJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
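Since the raw response is whatever text the model emitted, downstream parsing should be defensive. A minimal sketch, assuming we want to tolerate invalid JSON and drop records missing any of the five keys (the function name is illustrative, not the tool's API):

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response into coded records, skipping bad entries."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return []  # model returned something other than valid JSON
    if not isinstance(data, list):
        return []  # expected a JSON array of per-comment records
    # Keep only dicts that carry all five coding keys.
    return [r for r in data if isinstance(r, dict) and REQUIRED_KEYS <= r.keys()]
```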