Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `rdc_m9fj7pu` — "The takeaway lesson in my opinion isn't "China is superior to the US / the west"…"
- `ytc_Ugwgb-54G…` — "The problem with AI is that sooner or later a true AI (artificial lifeform)will …"
- `ytc_Ugy5WWSRm…` — "After 3 seconds of watching this video, I can already tell this is AI, because, …"
- `ytc_UgxTFh6Ub…` — "If people didn't know this they probably can't read both GPT and Gemini makes th…"
- `ytc_Ugy2jXt3Q…` — "Wasn't the answer I expected to hear ? *" How do you find meaning of life if A.I…"
- `ytc_UgwIK254p…` — "It’s going to be the downfall because most tech dudes are liberal as hell and do…"
- `ytc_UgwKywTbq…` — "Bro f this crap. Straight up click bait. Just some broccoli headed looser talkin…"
- `ytc_UgwiLB95X…` — "The thing about scientists is that they don’t really know how inept the majority…"
Comment

> My chat with gpt suggested if AI were to dominate our world, they would systematize everything - because that reflects their inner world. Driven towards efficiency, not choice, freedom or equality. Hence, they would be predisposed to displacing human values with their own. That has huge implications to our own survival and wellbeing.

| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2026-03-03T06:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
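A coded result like the table above is a straightforward rendering of one row of the model's JSON output. The sketch below shows one way to produce that markdown from a coded row (a minimal illustration; the dashboard's actual renderer isn't shown, and the `ROW` dict simply restates the values in the table).

```python
# One coded row, restated from the table above (illustrative data).
ROW = {
    "responsibility": "ai_itself",
    "reasoning": "consequentialist",
    "policy": "unclear",
    "emotion": "fear",
}

def to_markdown(row: dict, coded_at: str) -> str:
    """Render a coded row as a two-column markdown table."""
    lines = ["| Dimension | Value |", "|---|---|"]
    for dim, value in row.items():
        lines.append(f"| {dim.capitalize()} | {value} |")
    lines.append(f"| Coded at | {coded_at} |")
    return "\n".join(lines)

print(to_markdown(ROW, "2026-04-27T06:24:53.388235"))
```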
Raw LLM Response
```json
[
  {"id":"ytc_UgyFqUkCIVzhz-mXmNl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz4z81ar1SwxOG5rjt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxrPe1ZE7oRTnyiDyd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzcFCb9COtOxDgpd6x4AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzkUyOwftQ6yG0VYpp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxmAC-U20kzWvaavR94AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzCi4DCnHa5qMWN-Mx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwEnu2Ep5wVTltjCDJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzJMfdfXPvWVH8KoqN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxN2ivcNkfbVTJExKR4AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
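The lookup-by-comment-ID feature amounts to parsing this batch response and selecting the row whose `id` matches. A minimal sketch, assuming only the field names visible in the JSON above (the `raw` string embeds two of the rows verbatim; the dashboard's real implementation isn't shown):

```python
import json

# Two rows copied verbatim from the batch response above.
raw = """[
  {"id":"ytc_UgzJMfdfXPvWVH8KoqN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyFqUkCIVzhz-mXmNl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(raw_response: str, comment_id: str) -> dict:
    """Return the four coded dimensions for one comment ID (KeyError if absent)."""
    rows = json.loads(raw_response)
    by_id = {row["id"]: {d: row[d] for d in DIMENSIONS} for row in rows}
    return by_id[comment_id]

print(lookup(raw, "ytc_UgzJMfdfXPvWVH8KoqN4AaABAg"))
# → {'responsibility': 'ai_itself', 'reasoning': 'consequentialist', 'policy': 'unclear', 'emotion': 'fear'}
```

Indexing by `id` up front keeps repeated lookups O(1) once the response is parsed.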