Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID or by browsing the random samples below.

Random samples
- ytc_UgxW5TYTP… — “Imagine your ai gf/bf suddenly going “ok so, i cracked my own codes just to say …”
- rdc_jf9a77n — “> Now you can have conversations over the phone with chatgpt. This sounds li…”
- ytc_UgzHpwlz5… — “This is brilliant in explanation of how AI works. Thank you peeps & Geoffrey. Yo…”
- ytc_UgxnkX0iE… — “Plot twist -- chat gpt is shown this vid to have you gratitude and compassion fo…”
- ytc_UgycAfORo… — “this is sooo wrong in so many levels now just imagine AI being controlled by the…”
- ytc_Ugy6I3Vuz… — “Just another failed system that will collect thousands of taxes. Before the 21st…”
- ytc_Ugw2lxLxX… — “Sorry to play devil's advocate here. As a professional artist (ie I make my livi…”
- ytc_Ugzn1tkF9… — “imagination is the best thing (in my opinion) that humans have found out about. …”
Comment
Recently, I asked AI some very disturbing questions and got back some more disturbing answers. For example, it is very likely that there will be an AI war between competing platforms of which humans will not be able to stop. However, asking so many questions and seeing those disturbing answers like yes AI companions can cause humans to commit suicide, the most disturbing was the only one question it did not answer: Can AI cause a nuclear plant meltdown? I was hoping for a no on that one but got no response, not even a yes. Now I am thinking, is this already a secret plan. I asked AI if it has secrets and got this response: The Hidden DNA of AI Models This can include biases, preferences, or even extreme and harmful tendencies.
youtube · Cross-Cultural · 2025-10-18T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzOvs428klXtj0n_1V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxP4qEINRgQnXOcHf54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgxmxdCZleZRY05pDRh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyFc6ut1HMdEJao06V4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwYlo97qHzHbXDj3Wh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"sadness"},
  {"id":"ytc_Ugwe_O__Cb7Zr4jtisB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzJ5Wr7qbn-jBJiQyF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzQPuICwEcBcy7XkGh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgznjKTlYaT7vYQkAOZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxqVlGcfWlbYSmck_F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
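The raw response above is a JSON array in which each record carries a comment `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`), so the "look up by comment ID" view can be recovered with a small parse-and-index helper. A minimal sketch, assuming only what the output above shows (the helper name and the truncation to two records are illustrative; the field names come directly from the response):

```python
import json

# Raw LLM response in the format shown above, truncated to two records.
raw_response = """
[
  {"id": "ytc_UgzOvs428klXtj0n_1V4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxqVlGcfWlbYSmck_F4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Every record is expected to carry the comment ID and all four dimensions.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw: str) -> dict:
    """Parse the model output and build a comment-ID -> coding lookup,
    skipping any record that is missing a required key."""
    records = json.loads(raw)
    return {r["id"]: r for r in records if REQUIRED_KEYS <= r.keys()}

codings = index_by_id(raw_response)
print(codings["ytc_UgxqVlGcfWlbYSmck_F4AaABAg"]["policy"])  # regulate
```

Indexing by `id` makes the lookup O(1) per comment and silently drops malformed records, which is the usual failure mode when an LLM emits a partially valid JSON array.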