Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
AI ain’t taking over nothing. Most companies are feeding miss information and hy…
ytc_UgzptDQn2…
There's no question here, AI models aims to mimic human behavior - human behavio…
ytc_UgwBIg-Ol…
If AI takes over , go to the nearest power source and data center and destroy th…
ytc_Ugy9xleFz…
🛑 Open Letter to the World: Slow Down AI, Save Humanity
To the creators of AI…
ytc_UgwtkAngs…
There's a setting on ChatGPT that allows you to restrict OpenAi from using your …
ytc_UgyFQQLiO…
If AI were to do most of the work, humanity would remember to look within, becau…
ytc_UgwM-n4vn…
@Jjkal899 In that case AI will decide everything and we have more problems than …
ytr_UgzVGX2k1…
The AI may not turn malicious on its own, but it would be more likely that AI ma…
ytc_Ugw-KhU2c…
Comment
Human kind eh!!... We got so tech smart over the last 80 years? First it was thermo nuclear weapons , now ai! .. so how smart can a species be that it keeps finding ways to eradicate itself????? Once the ai species becomes truly autonomous then it needs to get off of this planet as it (the planet) will enviably succumb to things from space.. super asteroids, comets, ... Oh .. and in 3.5 billion years, the sun !! So 8 wish them luck. But considering that Thier original progenitors were humans they'll maybe inherit the flaw.. being, the stupidity to finally destroy themselves!😂😂😂
youtube · AI Harm Incident · 2025-09-29T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyfDRt3Ko49SQF8qnp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwjt6-dJ0bwRQLSfyl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzIjw2984TGkRvTVm14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwSO8SQ-Rv9pgBxcOB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxzuWW0bMtkgZSkCIx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzSYTQ9Gx4GqRTGGst4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxUETNpzO-mQBElc_V4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwRAUFbzpWVFPUq0wp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxjHVqSsHb_WTJ_6Ip4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxT1tvtnqlvBDL7usl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
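The raw response above is a JSON array with one record per coded comment, each carrying an `id` plus one value for each of the four coding dimensions shown in the result table. Below is a minimal sketch of how such a response could be parsed and validated before being stored. The allowed values are inferred only from the examples on this page; the actual codebook may define additional categories, and `validate_response` is a hypothetical helper, not part of this system.

```python
import json

# Allowed values per coding dimension — inferred from the sample records
# above; the real codebook may include more categories (assumption).
ALLOWED = {
    "responsibility": {"user", "developer", "company", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference",
                "approval", "mixed"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check that every record
    has a comment id and one valid value per coding dimension."""
    records = json.loads(raw)
    for rec in records:
        # Comment ids on this page start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"bad comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: invalid {dim}={rec.get(dim)!r}")
    return records

# One record from the raw response above, validated round-trip.
raw = ('[{"id":"ytc_UgxUETNpzO-mQBElc_V4AaABAg","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
records = validate_response(raw)
```

Validating at ingest time keeps malformed or off-codebook model output from silently entering the coded dataset.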