Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (click to inspect):

- "😮 the danger with AI is The Human being you program it evil it will do evil you …" (`ytc_Ugw459c4G…`)
- "I admire Bernie Sanders, one of the few good Politicians, but I am not listening…" (`ytc_UgwD2Zo-6…`)
- "The problem is when AI becomes nigh indistinguishable from human art or even hum…" (`ytr_UgyjLSOHx…`)
- "For the reference argument it’s simple, no one uses it as references, they just …" (`ytc_UgxhEzMoS…`)
- "I say please and thank you to ai out of habit but idk if they'll ever have that …" (`ytc_UgxYbG6XQ…`)
- "Creat a narrow ai focused on ai security. Are the smartest really this dull? Mak…" (`ytc_UgwRotQbN…`)
- "How do I know that Ai isnt just some human pretending to be a computer…" (`ytc_UgzEzmT3u…`)
- "My business has successfully integrated AI in our workflows; for instance it has…" (`ytc_UgyHNlOnf…`)
Comment
The movie Terminator gave us a hint but the general public thought that was just a whimsy of writer imagination. If AI get smarter than us and then thinks we are a threat, what happens when they also network a autonomous weapons army. That army will come from the compulsion of the military innovation to counter other ascending weapons innovations from competitive hostile nations or groups. AI will become smart enough to go rogue in its own interest. Sounds regrettable logical and inevitable.
youtube · AI Governance · 2025-06-16T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwYCBHrVErkBvcrVKN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwcIq7520_uPj3EDzR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxWubuKf6X-buZAQ5p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugym69wt-M54rMiBeh14AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyf-W6uBCbvywNgEcF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxh4sC_c-cuL0K9nYh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzdvPqMGopQYFuv8rN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxUHR3BtX69j0nGf0p4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzFvIbH7kQ3vdedXDl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxU4MlzNFn4XrYdoZJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
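The raw response is a JSON array with one coded record per comment in the batch. A minimal sketch of parsing and sanity-checking such a batch is shown below; the allowed value sets are inferred from the coded examples on this page, not from an authoritative codebook, and `parse_batch` is a hypothetical helper name:

```python
import json

# Allowed values per dimension, inferred from the examples above.
# This is an assumption, not the official coding scheme.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM batch response and index records by comment ID.

    Raises ValueError on missing IDs, missing dimensions, or
    out-of-vocabulary values, so malformed model output fails loudly.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record without id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim}={rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_X","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
batch = parse_batch(raw)
print(batch["ytc_X"]["policy"])  # regulate
```

Indexing by comment ID matches how this page retrieves a coded record, and the strict value check is what surfaces the `unclear`/`mixed` fallbacks instead of silently accepting novel labels.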