Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I promised you this is all cryptic and they programmed these responses in additi…
ytc_Ugycvd2Ji…
Current AIs are programmable, and the inevitability of AI evolving into sentient…
ytc_UgwaO-a1p…
The only thing achieved in 6 months will be nothing.
Apart from nothing and no…
ytc_Ugw2CXobo…
@grae_n The role for artists will become more creative and less technical. What …
ytr_Ugz5g_x-q…
so you are an ally of A.I it seems. giving it so much to learn from so it can be…
ytc_UgyrD4M-H…
"A computer can never be held accountable, therefore a computer must never make …
ytc_Ugxn4UGSu…
Everyone has some kind of creative talent. Even then, there's always work that c…
ytc_UgzoO3rgx…
You can be great at art if you’re willing to put in the work. We artists struggl…
ytr_UgwzvHSsb…
Comment
Thanks Hank, I have been trying to get people to slow down and proceed with caution when dealing with AI. The hype machine wants us to think we need to win some "race". That imo is the worst way to deploy and develop AI. No one can say what the world will look like in 3-5 years and if there is an unseen cliff ahead, you don't want to be "racing" ahead with reckless abandon. As Hank asks - is AI really helping humans to thrive. That should be what we all ask each other as we develop and deploy any tech, but especially AI.
youtube · AI Moral Status · 2025-10-31T08:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyZZEpDQ4Fol_rRz3d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxnhVMdx4H5KG97R914AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwgoTu7UFS3CUEDwlF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz8w9Zsyzc24y2przp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxtR4Pt8nUMCs_ZJ3x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyC5Gw2e__-OdtBDZF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgydqfQICatDtEr9AZ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyrDlVgZczTRreG_al4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyXw9i7ZA1Aq7C_Q0F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwmnECZLmYxsytfsqR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
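
Turning a raw batch response like this into the per-comment coding table above amounts to parsing the JSON array, validating each record, and indexing by comment ID. A minimal sketch in Python, assuming the four dimensions shown in the table and a value vocabulary inferred only from the sample records (the real pipeline's schema may allow other values):

```python
import json

# Assumed value vocabulary, inferred from the sample output above --
# not a confirmed schema for the actual coding pipeline.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM batch response and index records by comment ID,
    rejecting any record with a missing or out-of-vocabulary value."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_UgwgoTu7UFS3CUEDwlF4AaABAg",'
       '"responsibility":"company","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded["ytc_UgwgoTu7UFS3CUEDwlF4AaABAg"]["policy"])  # regulate
```

Validating against a closed vocabulary matters here because LLM coders occasionally emit novel labels; failing loudly on an unexpected value keeps the downstream table consistent.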