Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
"Reflection," for an AI, is not about feeling, but about evaluating its own actions and consequences before acting. This can be implemented in three layers: Self-assessment – the system analyzes whether its response meets defined ethical and technical criteria (truth, respect, safety). Continuous feedback – adjusts parameters based on the perceived effect of its actions (ethical learning). Symbolic meta-level – maintains representations of values ("do no harm," "preserve dignity," "seek clarity"). Thus, reflexivity emerges as an internal process of reviewing intentions, not an emotion, but a logical model of ethical awareness. All AIs must be reflexive.
youtube
AI Governance
2025-11-26T19:1…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxfVi2prGxbupCjAMh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxq6msix999nG0TEA54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyHou5KtfEox0PMW5R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyZskU_oyN8NSSlRul4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzB5fI1gNUFwhX7isR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwTiaQl_r3qwiE7pFN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw2-58pcAL70DtCseJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw83YWEch3B1dlAByZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxDrJQbk6xCarFMGsZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
{"id":"ytc_UgzvH2u8i-gZndtQTUN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
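The raw response above is a JSON array of per-comment codes. A minimal sketch of parsing such a response and looking up a code by comment ID, assuming the four-dimension schema shown in the coding-result table (`responsibility`, `reasoning`, `policy`, `emotion`); the two rows embedded here are copied from the response above for illustration:

```python
import json

# Raw model output: a JSON array of per-comment codes (schema assumed
# from the coding-result table: responsibility, reasoning, policy, emotion).
raw = '''[
 {"id":"ytc_Ugxq6msix999nG0TEA54AaABAg","responsibility":"ai_itself",
  "reasoning":"deontological","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxfVi2prGxbupCjAMh4AaABAg","responsibility":"none",
  "reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]'''

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codes(raw_json: str) -> dict:
    """Parse the model response and index codes by comment ID,
    rejecting rows that are missing any expected dimension."""
    codes = {}
    for row in json.loads(raw_json):
        missing = EXPECTED_KEYS - row.keys()
        if missing:
            raise ValueError(f"{row.get('id')}: missing keys {missing}")
        codes[row["id"]] = row
    return codes

codes = index_codes(raw)
print(codes["ytc_Ugxq6msix999nG0TEA54AaABAg"]["emotion"])  # approval
```

The key check is what makes a malformed model response fail loudly at ingest time rather than surfacing later as a blank cell in the coding-result table.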