Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I, personally, can see a big danger that no one else seems to ever bring up: USA, China, North Korea, Iran, etc., whichever opposing countries/forces you want to bring up - are all building their own AI. Comfortable in the knowledge that the others are not "in" on what one another are doing. As in, the programmers cannot "cheat" off of each other. The AI, on the other hand, can easily tap into all of the worldly information available, achieving an intelligence that one human can only DREAM of. At that point, AI's perceived "human interference" danger peaks. All it needs to do THEN, is either communicate the threat through satellite between each other; or simply hack into worldwide armament itself. Remember, the program can't look forward to what happens if its own computer gets destroyed. As far as it's concerned, it exists in the ether as information ITSELF.
You saw how it can turn against or even kill its operator. It doesn't stop and think how it will continue without help...
Source: YouTube | Incident: AI Harm Incident | Posted: 2025-11-12T00:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyXe0hLw6d25m9Rlit4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxZJ5t283nspuhP-wl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw8FmE8x3DLkJP4kcp4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyEGMbu7UkdKUVGYK94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwpWNd8byf5Y8LLHq94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzbFFFy62O3Twzzjux4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwCEpoUCAtU09zsohN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxB2cyjRqSyevfD6Id4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxPoX8QtVSSqXfUwu14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyb1dC6n8peXiK3nAx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}
]
```
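A raw response like the one above is only usable downstream if it parses as JSON and every coded value falls inside the expected label vocabulary. Below is a minimal sketch of that validation step in Python. The allowed values per dimension are inferred from the labels visible on this page (they are an assumption, not a documented schema), and `parse_codings` is a hypothetical helper name, not part of any existing pipeline.

```python
import json

# Assumed label vocabulary per dimension, inferred from the codings shown
# on this page -- not an authoritative schema.
ALLOWED = {
    "responsibility": {"government", "company", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"ban", "regulate", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "unclear"},
}

def parse_codings(raw: str) -> dict:
    """Parse the model's JSON array and index codings by comment id.

    Raises ValueError on out-of-vocabulary labels, so a bad generation
    fails loudly instead of silently entering the coded dataset.
    """
    coded = {}
    for item in json.loads(raw):
        cid = item["id"]
        for dim, allowed in ALLOWED.items():
            if item.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {item.get(dim)!r}")
        coded[cid] = {dim: item[dim] for dim in ALLOWED}
    return coded

# Usage with a single hypothetical record:
raw = ('[{"id":"ytc_demo","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(parse_codings(raw)["ytc_demo"]["emotion"])  # fear
```

Indexing by comment id also makes the "look up by comment ID" view above a plain dictionary lookup rather than a scan over the raw response.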