Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
> All of this is still essentially harmless, words are words. The real kicker is that the only way to even know to a reasonable degree of certainty an AI is "aware" is when it successfully deceives us to further it's own goals that are directly detrimental to ours.
>
> and there is clearly no way these scientists and engineers are not going to try and create that
youtube · AI Moral Status · 2025-12-20T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
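Each coded record can be checked against the expected value set for every dimension before it is stored. A minimal sketch, where the allowed values are inferred from the outputs shown on this page (the real codebook may define additional categories):

```python
# Allowed values per coding dimension -- inferred from observed outputs,
# not from a documented schema (hypothetical).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference"},
}

def validate_coding(coding: dict) -> list[str]:
    """Return a list of problems found in one coded record (empty = valid)."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = coding.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} value: {value!r}")
    return problems

# The record coded above passes cleanly.
print(validate_coding({"responsibility": "developer",
                       "reasoning": "consequentialist",
                       "policy": "liability",
                       "emotion": "fear"}))  # → []
```

Running the validator over a whole batch makes it easy to flag records where the model drifted outside the codebook.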
Raw LLM Response
```json
[
{"id":"ytc_Ugx_4_lVlp5FOcpvxGp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy2r3DfGQIKMx_I5nZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxo4iKlOp0d2nwT_154AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyEUyxsJnoY9x417Mx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzcUMtnOKgw8s6iV0l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyZ3bTVRSq6e7irIZN4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwlB7oV_6FoRLRYUyJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzbiaZ4yCzlnnNDWLd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxVIP_YNkeOrgVFEJR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwDS3OR2lOeObrG5ax4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
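Because the raw response is a JSON array with one object per comment ID, looking up the coding for any comment is a matter of parsing the array and indexing it by `id`. A minimal sketch, assuming the response text is available as a string (the two records here are abbreviated from the batch above):

```python
import json

# Raw model output for a batch: a JSON array, one object per comment ID.
raw_response = '''[
  {"id": "ytc_Ugx_4_lVlp5FOcpvxGp4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzcUMtnOKgw8s6iV0l4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

# Index the batch by comment ID for constant-time lookup.
codings = {record["id"]: record for record in json.loads(raw_response)}

coding = codings["ytc_UgzcUMtnOKgw8s6iV0l4AaABAg"]
print(coding["emotion"])  # → fear
```

The same indexing step is all a "look up by comment ID" view needs; a `json.JSONDecodeError` at the parse step signals that the model returned malformed output for the batch.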