Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I think AI gets a lot of shit but as a alone and untalented person it gives me t…
ytc_UgzHQvtud…
As an artificial intelligence language model, I do not have personal desires, em…
ytc_Ugyoc6mqH…
Its okay AI can't handle hands or feet not like artists are amazing at it but th…
ytc_UgzQiQOM9…
The reason AI is 'racist' and 'favours men over women' is because it is dealing …
ytc_UgyZE_f-T…
What if a new programming language was designed from the ground up—specifically …
ytc_Ugzadic6M…
The lack of accountability for the state of his mental health by his parents is …
ytc_Ugw0HKBy_…
I love how the robot just made a mistake he look at the box fell off like "oh sh…
ytc_UgwWoMm59…
They have their biases, which influences conversation. Trust those in Tech - t…
ytc_UgwA_MJ0m…
Comment
> The fear is not necessarily that AI will develop its own malicious consciousness, but that it will learn to treat us with the same selfishness, lack of empathy, or disregard that humans often show themselves, each other, and the world.

youtube · AI Moral Status · 2026-01-22T06:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_UgyJxTwvrk_nnq0f7Mt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyPsc7-l4MCpx2ymat4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugz7kz8dlw42wbRQ5S14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwGOADygqc8L-qMl7V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwMpC-PZBZT5mwpjoB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzJ190IKZpLvLWLSWt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzZiuw259EEA7ds75t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwNWwI7cOdQ1iHG6id4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw1hJ65iKsTegvcJHd4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugww5RPRSXwetYx74Kd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]
```
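Since each raw response is a JSON array of records keyed by comment ID, the per-comment lookup behind a "Coding Result" card can be sketched as below. This is a minimal illustration, not the tool's actual code: `RAW_RESPONSE` excerpts two records from the sample output above, and `index_by_id` is a hypothetical helper name.

```python
import json

# Excerpt of a raw LLM response like the one shown above (two records only).
RAW_RESPONSE = '''[
  {"id": "ytc_UgwGOADygqc8L-qMl7V4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyJxTwvrk_nnq0f7Mt4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "approval"}
]'''

def index_by_id(raw: str) -> dict:
    """Parse a raw response and index the coded records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(RAW_RESPONSE)
rec = codes["ytc_UgwGOADygqc8L-qMl7V4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # → distributed fear
```

The dictionary keyed on `id` is what makes "look up by comment ID" an O(1) operation once the response is parsed.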