Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by its comment ID or by starting from one of the random samples below.

Random samples:
- `ytc_UgxN9moo7…` — "What we are witnessing today is the rise of Tribalism.....In the USA for example…"
- `ytc_UgyC3boZm…` — "The AI race will not be ending anytime soon. Sky Net is not coming anytime ever…"
- `ytc_UgybgRo1c…` — "49:00 I have solution to AI. Nuke it all. Then no more AI, no more problem fro…"
- `ytc_UgwncxX2m…` — "More than likely developers will be managing AI agents who write the code for th…"
- `rdc_ofkejkl` — "ONce you understand who "they" are, then you see by the deal was 330. We dont ha…"
- `ytc_Ugw1VOCYq…` — "A friend of mine on FB posted an apology in regards to a David Attenborough vide…"
- `ytc_UgwARcp23…` — "Right now ai is still a Chinese room. Short of quantum extension to underlying r…"
- `ytc_UgwEhjiH3…` — "Remember: ai art is not bad. Posting the art or claiming the ai OR the prompter …"
Comment

> Bro that was clearly an error. Listen to the response ‘ ok, i will destroy humans.’ It mistook the question as ‘will you destroy humans?’ Thinking the guy was asking it to do something..same as when you ask siri a question like ‘ do dolphins learn?’ And she misinterprets it and says ‘ okay, here are some articles on dolphins.’ People are just wanting it to be something its not, but we are nowhere near the level of that. What we should be concerned with are the ramifications of AI looking super realistic and what PEOPLE will do with that technology; i.e planting fake evidence on others, forging p*rn videos of others, ect….

Platform: youtube · Topic: AI Moral Status · Posted: 2025-07-04T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugyk7Jl40u-GBujNwPp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw3Fjl73eErrOqf3dJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzoYafH9jHCkK90XRV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw7to1HVrwOr5mxDT94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxMJF-Ic-2I7YMYIgt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzN5urTIqZK5v83cJ54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxmGP2Pcrbh9TJ684l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz2lveUJGd54pTGfSx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzGFVY-lT9hJTEEFBB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwhqe2FiqCduU-xccB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
```
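A raw response like the one above is a JSON array of per-comment codes, one object per comment ID, with one value for each of the four dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed and validated before storing it — assuming the codebook contains only the values visible in this page (the real codebook may define more), and with `parse_raw_response` and `CODEBOOK` being hypothetical names:

```python
import json

# Allowed codes per dimension. These are the values that appear in the raw
# responses on this page; the actual codebook may include additional codes.
CODEBOOK = {
    "responsibility": {"none", "developer", "ai_itself", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "regulate", "ban", "industry_self"},
    "emotion": {"outrage", "resignation", "indifference", "fear", "approval"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array of coded comments),
    keeping only records with a string id and in-codebook values."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec.get("id"), str):
            continue  # drop records with a missing or non-string comment ID
        if all(rec.get(dim) in codes for dim, codes in CODEBOOK.items()):
            valid.append(rec)
    return valid
```

Indexing the returned list by `id` (e.g. `{r["id"]: r for r in parse_raw_response(raw)}`) would then support the by-ID lookup this page offers.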