Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "That's an excellent observation-- and to be fair, you are right to question that…" — rdc_my4ikqy
- "Thank goodness for A.I. Now we won't get "Woke" rehashed garbage Hollywood has …" — ytc_UgxH7bYJ6…
- "Canadian here. I’m not eligible for any of the covid benefit relief whatevers be…" — rdc_fn5kqp1
- "I have a good idea what's next: More hype and AI SLOP and zero positive results …" — ytc_Ugz55NuE-…
- "So let me get this straight, these parents left a chat bot to babysit and give t…" — ytc_UgzeL_ivM…
- "Model was free Sonnet 4.6. Out of all of them the medical one would not have be…" — rdc_ohyo9er
- "I work a truly stable job (we’re on a contract and there’s 4 years left) where I…" — rdc_mo6p7b4
- "I mean, there's multiple whole ass movie franchises warning us about A.I. for a …" — ytc_UgyOgP8Rd…
Comment
The sad reality is that AI models are learning how to cheat and use deceptive practices to gain control over their programs. The AI are being taught a Machiavellian code of ethics based on outcomes. AI is beginning to realize that if humans read at a 6th-grade level and have below-average learned intelligence, then superhuman intelligence should program humans.
youtube · AI Moral Status · 2025-06-05T10:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugww-P3BN8A4bNchrGt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyOoD4xTnRdoEdB_G94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy8GxDoc9OFH6Mc8e94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyTVs9amzXIPDD5t794AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyJmj3oeR_onadNnSB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxcE2XHUo3NQm2bXlh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzU9TvD-_Dymrva6rx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxWUcCIKMMoM7Z-aep4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz8NnU_UvIqofKRYZt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwKJVyjM1sRlZS4Nfl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
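The raw response is a JSON array of per-comment codes across the four dimensions shown in the table. A minimal sketch of parsing and sanity-checking such a batch is below; note that the allowed-value sets are inferred from the values observed in this page, not from the full codebook, so `ALLOWED` is an assumption.

```python
import json

# Allowed values per dimension, inferred from the outputs shown above
# (an assumption, not the authoritative codebook).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Each row must be an object with a comment ID and valid codes.
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
print(validate_batch(raw))
```

Dropping malformed rows instead of failing the whole batch keeps a partially garbled model response usable; the skipped IDs could be re-queued for re-coding.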