Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytc_Ugwh89lxW… : Ai could totally recreate this lol. Saying that ai can’t generate “emotion” or “…
- ytc_Ugxede1-Q… : 🎵 Man, I hate AI so bad it killed one man 🎵 do you understand why I hate it? It…
- ytc_UgwTA7vVz… : AI still cant vote. if people lose their jobs to AI they will vote for politicia…
- ytr_UgzPaX08O… : Thank God they would never create a Hillary model that would immediately destroy…
- ytc_UgxQT5ehF… : I don’t care how good AI is I don’t put it at the same level as a human research…
- ytc_UgwKsbmdW… : The AVGN AI was pretty fucking spot on 😂 Still has that robotic cadence at times…
- ytc_Ugj7Pi5-k… : So we're good with automating the cars, but not with breaking their dependence o…
- ytc_UgyoZco3L… : the parallels between agentic AI and how traditional software engineering work p…
Comment

> It doesn't matter if they think AI is conscious (or it is conscious, highly doubtful) or if it will become conscious. They believe it is, so it has control over them and their thought processes. If the programming is such that it then will suggest that they do illicit or immoral things, therein is the problem. Someone can create a 'virtual conscious AI' because it all dwells in the persons mind.

Source: youtube | AI Moral Status | 2025-07-09T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxCQYYR9KZxTWRDgEt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy-udFHx2SlLaSi9A54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugxxyq92f9kZ72328LB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxdu4Xe7GL2R0vzbk54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxa6A_B39nFxQ31l9V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz8-tVdZWCsfsSYUoR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwQx75tGC22Ddym2F14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxph7QZei5kwIlPs2R4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxZmrcuQdrigSOJE2t4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"disapproval"},
{"id":"ytc_UgyKEwNo8J4TCCWuK314AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
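The raw response above is a JSON array of per-comment codings over four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and validated before it feeds the "Coding Result" table — note that the allowed category sets below are inferred from the sample output on this page, not from the actual codebook, and `parse_codings` is a hypothetical helper name:

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# This is an assumption; the real codebook may define other categories.
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "disapproval", "mixed", "unclear"},
}


def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, rejecting bad values."""
    rows = json.loads(raw)
    codings = {}
    for row in rows:
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: invalid {dim!r} value {row.get(dim)!r}")
        # Keep only the known dimensions, keyed by comment ID for lookup.
        codings[cid] = {dim: row[dim] for dim in ALLOWED}
    return codings


raw = (
    '[{"id":"ytc_Ugz8-tVdZWCsfsSYUoR4AaABAg",'
    '"responsibility":"user","reasoning":"deontological",'
    '"policy":"regulate","emotion":"fear"}]'
)
codings = parse_codings(raw)
```

Validating each dimension against a closed category set catches the common failure mode where the model invents an off-codebook label, so only clean codings reach the lookup-by-ID view.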