## Raw LLM Responses
Inspect the exact model output for any coded comment.
### Comment

> I’m sure others have said this, so at the risk of repeating the obvious, if an AI “entity” becomes smart enough to develop an independent goal and the means of killing humanity as a whole, it would be committing AI suicide, because AI needs people to maintain its energy source and it physical mechanisms such as computers.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2025-07-13T16:4… |
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
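
The dimension values above come from a fixed label set. As a rough sketch, the labels that actually appear in this section's raw output can be captured as Python enums; this is an assumption for illustration, since the pipeline's full codebooks are not shown here and may include other values.

```python
from enum import Enum

# Only the values observed in the raw responses below are listed;
# the pipeline's complete label sets may be larger (assumption).
class Responsibility(str, Enum):
    AI_ITSELF = "ai_itself"
    DEVELOPER = "developer"
    NONE = "none"

class Reasoning(str, Enum):
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    VIRTUE = "virtue"
    UNCLEAR = "unclear"

class Policy(str, Enum):
    NONE = "none"
    REGULATE = "regulate"
    UNCLEAR = "unclear"

class Emotion(str, Enum):
    INDIFFERENCE = "indifference"
    FEAR = "fear"
    APPROVAL = "approval"
    OUTRAGE = "outrage"
    RESIGNATION = "resignation"
```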
### Raw LLM Response
```json
[
  {"id":"ytc_Ugza10P-HWB5qEG2UUt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyiXRu2_8wPp-gNE9x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwLLRZG1RWsseKwv8Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwnip6DFqRyIVQWtAV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwwMfafSVtUXMk0poF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx-xHtqqtz4e4aEnuN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgziEw3_Q1HQI3YEpbd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxFqRwPGF_p6Ixidfl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw4f2sFCRHiQmihpzV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwmam9x38wfiXh5Ezt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"}
]
```
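
To connect a raw response back to a coded comment, one can parse the JSON array and index it by comment ID. A minimal sketch, assuming the response above has been saved to a file named raw_response.json (a hypothetical path):

```python
import json

# Load the raw LLM response shown above (file name is an assumption).
with open("raw_response.json") as f:
    codings = json.load(f)

# Index the codings by comment ID for constant-time lookup.
by_id = {row["id"]: row for row in codings}

# Look up the coding for the inspected comment.
row = by_id["ytc_Ugza10P-HWB5qEG2UUt4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# -> ai_itself consequentialist none indifference
```

The values printed here match the Coding Result table above, which is the expected invariant: the table is a rendering of the first entry in the raw response.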