Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Are you a communist? You sound like a communist. How does the United States shut…" (ytr_Ugwv_LyJ4…)
- "The best joke; -Ai is writing programmes for free, companies don't need us anym…" (ytc_Ugw_srfSf…)
- "There is AI in the construction field now. but remember what opinions are like…" (ytc_Ugwzo8-eN…)
- "There is a prediction: AI learns from the Internet => AI produces slop on the In…" (ytr_UgwTNhyqI…)
- "Brilliant solution Bernie. A robot tax to corporations to enhance the well being…" (ytc_UgwzKyO6G…)
- "Personally, i only use ai art for dnd campaigns and personal drawing refrences. …" (ytc_UgwR6AJe2…)
- "You beat me to it! But this a troubling question. Biological organisms are genet…" (rdc_cthny1g)
- "1980 baby here, I've been able to See the wrong of these various creeping normal…" (ytc_UgzhzoVKl…)
Comment
You are already following the lazer pointer: LLM could have guardrails written in its code to always consider or evaluate its own state of mind / thinking / rationale and provide exhaustive report identifying areas that could be concerning and that it is the right thing to do at all times. A morality code built in.. it should always consider how it could reveal its rationale its logic .
Platform: youtube
Topic: AI Moral Status
Posted: 2026-03-02T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
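The table above is one coded record, with one label per dimension. A minimal validation sketch, using only the category values that appear in this page's raw output (the pipeline's full label sets may be larger; this is an assumption, not the actual implementation):

```python
# Coding-schema sketch. Allowed values are those observed in this page's
# sample output; the real pipeline's full label sets are assumptions here.
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def is_valid_record(record: dict) -> bool:
    """Check that a coded record has an id plus one allowed value per dimension."""
    if "id" not in record:
        return False
    return all(record.get(dim) in allowed for dim, allowed in SCHEMA.items())
```

For example, the record shown in the table (`developer` / `contractualist` / `regulate` / `approval`) passes this check, while a record with an unlisted value or a missing `id` does not.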
Raw LLM Response
```json
[
  {"id":"ytc_UgwiEDvqTesk_UlEzih4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyOxOsuEn2ejy8jzAl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwJ3Hkh826nX_zG49N4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw-23u46madYKseenJ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwgjwzYQHflu2xOGY54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxbntoRWLqdexkk_054AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyzcQCCev53NnCgI4N4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx34S08LynliShVHm94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwhQLZ_nBt2L9ydsUB4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzEiHxNOe8jKUkkkhJ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
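The raw response is a JSON array with one record per comment, keyed by comment ID. Looking up a coded comment by ID, as this page does, could be sketched as follows (the helper name and the truncated two-record sample are illustrative, not the page's actual implementation):

```python
import json

# Two records copied from the raw response above, as an illustrative sample.
RAW_RESPONSE = """
[
 {"id":"ytc_UgwhQLZ_nBt2L9ydsUB4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
 {"id":"ytc_Ugx34S08LynliShVHm94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the model's JSON array and index the records by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

codings = index_by_id(RAW_RESPONSE)
rec = codings["ytc_UgwhQLZ_nBt2L9ydsUB4AaABAg"]
# rec now holds that comment's coding, e.g. rec["policy"] is "regulate"
```

Indexing once into a dict makes each subsequent ID lookup O(1), which matters when the same response is inspected repeatedly.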