Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
While I understand how people can use it for bad purposes, as we do with other tools, I still haven't gotten an answer to how the AI itself could harm us without a person behind it. The distance between what it is and the movie Terminator is a million miles of imagination.
It's a language model, trained on a lot of data, that uses a lot of computers/servers to run fast. There is no mind waiting in imprisonment to break free. Like any other program, it takes input and shows output. The fact that the process behind the output is complex doesn't mean it is thinking about how to destroy us.
Also, launching a nuke is not something a hacker can do over the internet; no country is that stupid. Even if robots are built to help us and the AI is included in them, there is no natural evolution toward this. It's input and output, and it can do only what the programmers allow within its limitations. A robot might hold a gun and shoot you, but only if you teach it that doing so can help it complete its mission, and allow it to.
youtube · AI Governance · 2025-06-16T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxuYT3irtQ3XHMM3jd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwc4L8C_27RxR5yE9V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxi1rXMWIgmSJgizmN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyYH4TEmgF0BMlwOsx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy1Kvjt57uvWz-q24J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxeRQdvZVDwWVyYbWR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyZDQsWuFQgr5hy0WJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw7HapU2ks9vGtJNJN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy3zD_WdOdlN80kS3V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx5pnAUUTUnDUxz9vN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
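The raw LLM response above is a plain JSON array of coding records, one per comment, so the lookup-by-comment-ID view can be reproduced in a few lines. A minimal sketch, assuming only the field names visible in the dump; the `index_by_id` helper is illustrative, not part of the tool, and the embedded sample uses two records copied from the response above:

```python
import json

# Two coding records copied verbatim from the raw response above.
RAW_RESPONSE = """
[
 {"id": "ytc_Ugw7HapU2ks9vGtJNJN4AaABAg",
  "responsibility": "user", "reasoning": "deontological",
  "policy": "none", "emotion": "resignation"},
 {"id": "ytc_Ugy1Kvjt57uvWz-q24J4AaABAg",
  "responsibility": "none", "reasoning": "consequentialist",
  "policy": "none", "emotion": "mixed"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the model output and key each coding record by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

codings = index_by_id(RAW_RESPONSE)
record = codings["ytc_Ugw7HapU2ks9vGtJNJN4AaABAg"]
print(record["responsibility"], record["emotion"])  # user resignation
```

Because the model is asked to return a well-formed JSON array, a failed `json.loads` is also a cheap way to detect malformed responses before they reach the coding table.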