Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Hey Sam, what are your thoughts on people comparing human artists getting inspir…" (ytc_UgzxpmeUi…)
- "In fairness to being worryingly close to the line, you're mostly worried because…" (ytc_UgytVOeef…)
- "How about a war where it's ONLY drones fighting, leave the humans out of it. I c…" (ytc_UgwemOLdW…)
- "I know this feels strange and AI bots shouldn't replace human companionship. I'v…" (ytc_UgwDG6K8n…)
- "Actually, AI is a 'great equaliser.' Studies show it can boost the productivity …" (ytc_UgyMQ22Ji…)
- "i dont understand this, i say please and thank you after talking with an AI so i…" (ytc_UgxMpwMbB…)
- "Honestly, as anti AI art as I am, I think that this guy in particular, consideri…" (ytc_UgwTn7sdL…)
- "He thinks only he should have it or use it. Open ai leveled the playing field. H…" (ytc_UgwTyxStY…)
Comment
This is an interesting discussion but I still have some issues with the idea of present LLMs at least starting to become any type of intelligence. I believe for each question, they run the query through the neural network and provide the output. This query does not impact the layout of the neural network, the network does not really do anything between queries. If the network was being provided regular inputs that actually caused the network to rebalance its weights, then I might consider that an LLM was actually becoming intelligent. I suspect that any apparent desire to live or different actions by an LLM when it thinks it is being tested have less to do with a decision of the network and more of a compression of how the data it was trained on responds and the LLM using that to predict the most likely response. Not saying these systems cannot learn intelligently, just that I don't think most of the systems out there are in any practical way.
youtube
AI Moral Status
2026-03-04T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyXT3xyxO58fmJJm3t4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzFXm9xjI61-yzgkpR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugx13Y9mHcoom1AOEGt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyvsjnSO8pt0noQnz14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugyy1Qyk0nOnRml1KP14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxbKZkeCsEOUnKjCe94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw5Qz7sS_8BtoMvevV4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwdN02-9aUpMceUfE54AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwgUl3RBmW333u78I94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugxt5cVb1OzEKKLH4PF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]
```
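The raw response is a JSON array of per-comment codings. A minimal sketch of how such an output could be parsed and indexed by comment ID, assuming the four dimension names shown in the Coding Result table; the helper name and the truncated two-entry sample are illustrative, not the tool's actual implementation:

```python
import json

# Abbreviated stand-in for a raw model response; field names match
# the coding dimensions shown above (responsibility, reasoning,
# policy, emotion).
raw_response = """
[
  {"id": "ytc_UgyXT3xyxO58fmJJm3t4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzFXm9xjI61-yzgkpR4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw response and index codings by comment ID,
    skipping any entry that is missing a required dimension."""
    codings = {}
    for entry in json.loads(raw):
        if all(dim in entry for dim in DIMENSIONS):
            codings[entry["id"]] = {dim: entry[dim] for dim in DIMENSIONS}
    return codings

by_id = index_codings(raw_response)
print(by_id["ytc_UgyXT3xyxO58fmJJm3t4AaABAg"]["emotion"])  # → indifference
```

Indexing by ID is what makes the "Look up by comment ID" view cheap: each lookup is a single dictionary access rather than a scan of the array.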