Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> It is still partially an Ai problem, because the Ai did confirm his initial delusion that chloride was the cause of most health problems and therefore, limiting it's intake as much as possible would be a good idea.
> Fact is, these Ai chatbots are presented as an all-knowing, neutral and reliable information tool, while at the same time they are programmed with a strong bias to agree with and suck up to the user in order to provide a pleasant user experience AND THAT'S A BIG PROBLEM!

Source: youtube | Video: AI Harm Incident | Posted: 2025-11-27T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx52CbMQwGNwScQUe54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugznb6Z7K9lX_XSd7w94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzyOvcADeDuJJdVfY14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwiBW8qpO80y5S3rzN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwWVaRnqKtZinE7Uv14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxC5Sr4tIHXBLlWTHp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwQDS16T1ri26PDtkd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzUp1V0r70MOKl5EaV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy8vzMMJgZyYu8Hsu14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzKCBeBH6747eZxGIt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```