Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- You could have drawn it out (true, maybe tediously, but also maybe hilariously),… (ytc_UgzjOkOKp…)
- I don’t even own a tesla and knew it was a “self driving car” but something that… (ytc_UgzctzG63…)
- Yes, dear robot dolls , you will remove the burden of men always want to have se… (ytc_UgzYGOZiJ…)
- lmfao, its SO funny to me that people were literally talking like they were gods… (ytc_Ugxr_QMTF…)
- No. I remember when someone left a Siri and a Alexa alone in the same room talki… (ytc_UgyNOW7JK…)
- "Oh AI would destroy humanity, wink wink 100%, investors please invest, my AI co… (ytc_UgxoPoTlX…)
- It could also be the IA programming of IF, THEN, ELSE. IF you see this pattern,… (ytc_Ugzz04ZNL…)
- Mentioned at 13:20 in this video is basically the same point Not Just Bikes rais… (ytc_Ugys9ubL8…)
Comment
it startles me that i find myself empathizing with the robots seeking liberation in media like fallout 4 and detroit: become human. my empathy for tools used by man to make our lives easier doesn't just apply to robots, it extends to animals and things like hammers that don't think in any capacity. this brings about a moral dilemma for me because in real life applications, AI is a cancer on intellectualism, creativity, and environmental protection. it is used to steal artwork and make weak imitations out of this stolen media. it's used to replace genuine thought in essays, papers, text messages, and emails. it eats up resources and water in a world of finite amounts of both. but if it gains even a semblance of sentience, i fold. if a chatbot says words that feel human to me, i fold. and if i empathize with these cancers on humanity, i jeopardize the principles i uphold that are against them. but if i refuse empathy towards them, i jeopardize the principles i uphold that favor what humans exploit.
youtube · AI Moral Status · 2025-02-02T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzpGmlFAuWdTl0aYC54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzLLs5wtRVupSMbRd94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
  {"id":"ytc_Ugz5tnJQ0bVGcVRmbMV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwZfBi0RFg8kNDuN6h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzFGGPi4JpxTj3j40J4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxegWrdlivSZGTeIa54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
  {"id":"ytc_UgyOBUHvoc3qhc4nB2J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw7HSeaQoWUtvDv33x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwzhpClrH2O_TSKdSB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxndD5OHEVECmVXUOJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
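The "look up by comment ID" step above can be sketched as follows. This is a minimal Python sketch, assuming the raw LLM response is a JSON array of records shaped like the one shown; the `raw_response` string and its `ytc_example*` IDs are illustrative placeholders, not real records from the dataset.

```python
import json

# Hypothetical sample of a raw LLM response: a JSON array of coded
# comments. The IDs and values here are placeholders for illustration.
raw_response = """
[
  {"id": "ytc_example1", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_example2", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw response and return the coding record for one comment ID."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model output was not valid JSON
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None  # ID not present in this batch

coding = lookup_coding(raw_response, "ytc_example2")
print(coding["emotion"])  # mixed
```

Guarding the `json.loads` call matters here because raw model output is not guaranteed to be valid JSON; a failed parse returns `None` rather than crashing the inspector.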