Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- “Someone’s going to turn on the AI accelerator and things are going to get dicey …” (ytc_UgzqZ-m8X…)
- “If people are payed to sit around all day they will loose their purpose in life.…” (ytc_UgzEagWiU…)
- “I can spot all AI imagery. Because YOU CANNOT, doesn't make me stupid. It look…” (ytc_Ugzk8deFk…)
- “using AI as a normal, no one who isnt being paid just for fun is okay. But using…” (ytc_Ugw5bIQ_E…)
- “I'm glad that you mention the point of ai using bunch of energy and water. Usual…” (ytc_UgwFeOt-I…)
- “Dawg actually what is the point of being an ai artist??? ngga i can prompt someo…” (ytc_UgzUTLl21…)
- “of the world's interconnected systems. It had reached a point where undoing its …” (ytr_Ugwdqje5v…)
- “I haven't watched this yet, but I don't think any public AI has any power or sen…” (ytc_Ugy4Q_XjV…)
Comment
These AI centers are dangerous and can be abused very easily; they are manipulative, send signals and logaritms based on the programming, and provide AI answers that are not based on human logic, which includes social, emotional, ethical, and moral issues.
AI does not do this, as every human input is turned into a solution based on logarithms, and what kind of programmer programmed the AI data center in a particular country, province, or state.
I have used AI and found it somewhat artificial. Yes, it will improve, and it will be more challenging to know whether it is AI or human-based advice or conversation. This system can be used for good as well as for evil.
youtube · AI Harm Incident · 2025-11-11T21:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx42SjYmbSvFWaw5wN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyUT3WSG3oN8gFXVoJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzdCvMS5BTADO3yiGJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy3g3EdG9mN9AHg2JB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgyJzH9HCEctwDDbV814AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwTNkx0JqFp8DvKMcB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyKb1EvZ1aJIpQM0Xh4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy6YWBstyR3kx8Z6e14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzRBQvLTrHu9LEerVp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgycWAXhe3UIrstigSV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
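A response like the one above can be parsed and checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from the sample output shown here (the real codebook may define more categories), and `parse_llm_response` is an illustrative name, not part of any tool shown above.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Hypothetical: the actual codebook may permit additional categories.
SCHEMA = {
    "responsibility": {"developer", "company", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "resignation", "mixed"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting any value outside the expected schema."""
    coded = {}
    for row in json.loads(raw):
        comment_id = row["id"]
        for dim, allowed in SCHEMA.items():
            if row[dim] not in allowed:
                raise ValueError(f"{comment_id}: unexpected {dim} value {row[dim]!r}")
        coded[comment_id] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Example with a made-up ID, mirroring the row coded above.
raw = ('[{"id":"ytc_EXAMPLE","responsibility":"developer",'
      '"reasoning":"mixed","policy":"regulate","emotion":"fear"}]')
print(parse_llm_response(raw)["ytc_EXAMPLE"]["policy"])  # regulate
```

Rejecting off-schema values at parse time keeps malformed or hallucinated codes out of the dataset instead of surfacing them later in analysis.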