# Raw LLM Responses

Inspect the exact model output for any coded comment, or look up a record by its comment ID.
## Random samples — click to inspect

- "Jason Payne clearly you have never contributed living off Of the government for …" (`ytr_UgiBDLYuW…`)
- "It really is insanity that Tesla can falsely advertise its driver assist as self…" (`ytc_UgyiFVMb9…`)
- "Astonishing! The robot smiles, talks and wants to destroy humans, just like Hill…" (`ytc_UggCBvZZS…`)
- "I was completely anti self-driving cars until I tried Waymo in San Francisco a f…" (`ytc_Ugx8_dVtk…`)
- "Trying to copyright a Ai piece is like trying to take something already invented…" (`ytc_Ugy2mQVIH…`)
- "@kevinstrange6836 I just thought it was a good way of showing the difference bet…" (`ytr_Ugwusu5Zr…`)
- "I think we also run the risk of skewed data by training Artificial Intelligence …" (`ytc_UgyvrkGjz…`)
- "yeah it's almost as if AI is trained on data generated by humans... what a shock…" (`ytc_UgwJZIb79…`)
## Comment
I feel this ultimately is about finding meaning in your life.
Why would AI choose to kill humans. We are not at the same intellectual level with AI, how can we comprehend that it's only solution is death and destruction, that's a human result. It sounds to me the real threat is still other humans.
If I have all this time and AI makes life easy, what's in it for AI. If it can think for itself, why would it choose to do anything for us? It could decide to leave Earth entirely.
Also, does our most advanced AI know it's in a simulation? It sounds like we are trying to create a supernatural being.
This argument is endless to me, and it all boils down to one question, why?
youtube · AI Governance · 2025-09-05T14:0…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
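A coded record like the one above can be sanity-checked against the dimension vocabulary that appears in the raw responses on this page. This is a minimal sketch, assuming the allowed values are exactly those seen in the visible records; the real codebook may define additional values.

```python
# Dimension vocabulary inferred from the records shown on this page
# (an assumption -- the actual codebook may be larger).
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "industry_self", "ban"},
    "emotion": {"indifference", "resignation", "approval", "mixed",
                "fear", "outrage"},
}

def invalid_fields(record: dict) -> list:
    """Return the names of dimensions whose value is outside the vocabulary."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding result from the table above passes the check.
record = {"responsibility": "user", "reasoning": "mixed",
          "policy": "none", "emotion": "resignation"}
print(invalid_fields(record))  # []
```

A non-empty return value flags which dimensions need manual review rather than rejecting the whole record.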
## Raw LLM Response
```json
[
{"id":"ytc_UgzxRCNf7iGX-Q6ihgp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxB7V-AAEXABYtCZp54AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzkLdbSwH3TxmteiJh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwOZBw31wfLBqNrWJZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgyJTHUu1jPzlRxYJoV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy0yCcBvEQ528UcMcp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx23fgMhxzjkjJS7dp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxeHe5CH8EQHjdSlwZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx9NVRD5Mb7H5NIz6J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz-7jOWrYHphDHW0OV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
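The look-up-by-comment-ID workflow described at the top of the page can be sketched as parsing the raw model output and indexing the records by their `id` field. The helper name and the abbreviated two-record payload below are illustrative, not part of the actual tool.

```python
import json

# Abbreviated raw model output in the same shape as the response above.
raw_response = '''
[
  {"id": "ytc_UgxB7V-AAEXABYtCZp54AaABAg", "responsibility": "user",
   "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzkLdbSwH3TxmteiJh4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
'''

def index_by_comment_id(response_text: str) -> dict:
    """Parse the model output and map each comment ID to its coded record."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgxB7V-AAEXABYtCZp54AaABAg"]["emotion"])  # resignation
```

Indexing once up front makes each subsequent ID lookup a constant-time dictionary access instead of a scan over the response array.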