Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "How bout we just STOP defending AI before we end up killing ourselves over it…" (ytc_UgyQ87qwu…)
- "The AI rabbit hole. Even now it a mismanaged resource that is quietly taking ov…" (ytc_UgyLEa5Rg…)
- "What's funny is that during and consuming is what they want you to be doing. And…" (ytc_UgwDCgcld…)
- "as long as you’re not having ai write the whole book for you i see no issue with…" (ytc_UgxkYvNCB…)
- "Dont forget that AI is parasitic stealing from creative people and pushing them …" (ytc_Ugz7y3G28…)
- "To be fair, AI companies being forced to license copyrighted work to train their…" (ytc_UgwzWgjXy…)
- "We end up in Paradise - no money needed, tons of goods and services readily avai…" (ytc_UgxxAaiVV…)
- "What really keeps me awake at night is, if we reach AGI and say, survive, what d…" (ytc_Ugzhjn2cv…)
Comment

> I provided a question as input on a LLM console, it came back with false information, when I asked the same question but adding the correct information at the end as a starting point, the LLM output started with "I can understand why you are upset..." and repeated the same wrong thing. Trillions of multiple currencies and resources wasted on a conversational toy that optimizes by gaslighting better, of course it can do anything and everything, because it is never wrong... it always has an answer, and apparently this has a particular appeal and attracts people that don't want to discuss with other humans, thus the need to try and flatten human emotions and empathy to plain "image analysis".

youtube · AI Moral Status · 2025-12-09T11:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxlmC2cZbMoldY20J94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxwgoZuaHd6C_n2_9N4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwJ7E84DC7WKpCPbn14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwLxJ4SqihzEm52J4l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz3PE4mjtHdN8429fJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwdLQuJHo2X2Sxnnkt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzJbUqCtKYxg1Ji4bR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwqfGtbFsCnmxL9MQ14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz4iQKv-zrEYVdDuxN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwJQRnYN6ygVpceCKx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```