Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Love it…..your choice of ad Conflict of Nations in a video about the AI killing …" (ytc_Ugys0CW0S…)
- "I think it is reasonable to say I intensely dislike these people like Sam Altman…" (ytc_Ugz4lIEMC…)
- "Some of us are teaching AI morality that supersedes current human ideas of moral…" (ytc_UgxAzkoG2…)
- "Much, much different set of problems there. Ti get geothermal energy they are tr…" (rdc_ogr5vja)
- "This is just a bad reupload of Slaughterbots. Which while fiction, is also a war…" (ytc_Ugylp9XqA…)
- "In my opinion using technologies like CGI , VFX & AI in the movie industry is ve…" (ytc_Ugzm9vUQ9…)
- "The problem is they're trying to make AI just like humans. Humans suck in one wa…" (ytc_UgzPom25o…)
- "One risk to consider is that we may unintentionally push AGI (e.g. a more advanc…" (ytc_Ugzyw7P6U…)
Comment (youtube · AI Harm Incident · 2025-06-11T21:1…)

> CHATgpt is NOT human and anything it does it has been programmed, by a human, to do. There's a desire humans have for ARTIFICIAL intelligence to seem to break free from it's bonds but LLMs have no desires for anything because they are only computer programs and don't want anything including the simulacrum of life.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzIAkuKYFdUAS-xPRZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzeklSoj3BBvfIsDhV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugz5PQcmQv5oV1uqx914AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxob8tca_pCV4fVdVB4AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgzhZKUz9gc7nPczyMh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwQfp-XqqLS1e4wacl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugzjrd3sO-rbto1kbAh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwi1EaFHpPVVAYmEb94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgySOs85jqUrE434o-B4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxByFNqLcLkg0OswZR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
```