Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I don't see the harm in talking to an AI as an accessory to actual human contact…" — ytc_UgyTh4Zx2…
- "If you are verified that that letter is real the next step is to verify that an …" — rdc_e9mn5fi
- "An ai can never become decision maker, they will always beed approval and as lon…" — ytc_UgxdbFYt3…
- "@Andrew-zi3iw with that same train of thought, someone using ai for "art" doesn'…" — ytr_Ugwk6NZoU…
- "It will never be the same; the gaze of a living being speaks without words, robots only…" — ytc_Ugx_5-ZYf…
- "Hey there! It's fascinating to see how AI technology is evolving, isn't it? If y…" — ytr_UgyevVISX…
- "no, we shouldn't oppose automation, or increased productivity in general. We sho…" — ytr_UgxG7iGnZ…
- "I feel like the 1% knows the world is gonna combust anyway and that's why they w…" — ytc_UgzPvFeg5…
Comment

> ..... Could be?, yeah! One things for sure,- A.I. WILL cause the world far more harm than good in isolated but regular occurrences, aircraft accidents, train wrecks, financial crashes, mass panic caused by misinformation, nuclear accidents etc etc........ Humanity,... Always looking to see what it CAN do, and never considering whether it SHOULD do !!🙄🤦.

youtube · AI Governance · 2026-01-27T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgyGWzCwGHlpdE78-Sh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyj8NDS4NEtXgvXvw54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxAkMR4UegI_aip3U54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy9EwhYKlzoBU8Ku3R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzvLgVtfeFuPxGoNNh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwAlJn5pQuqto7bzXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzi9_4dkzB2d9gMpnN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzhDOYVkkd0cWYQDC94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzgYEGEqsq4oaH5lP54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxvInPQihlLeWQX9s94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
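The raw response is a JSON array of rows, one per comment, keyed by comment ID. A minimal sketch of how such a response could be parsed, validated, and indexed for the per-comment lookup shown above — assuming Python, with the allowed code vocabularies inferred only from the values visible on this page, and a hypothetical comment ID `ytc_EXAMPLE`:

```python
import json

# Allowed codes per dimension (an assumption, inferred from values on this page —
# the real codebook may contain more categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding rows) and
    index the rows by comment ID, rejecting out-of-vocabulary codes."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
        coded[row["id"]] = row
    return coded

# Usage with a hypothetical one-row response:
raw = ('[{"id":"ytc_EXAMPLE","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]')
coded = index_codings(raw)
print(coded["ytc_EXAMPLE"]["emotion"])  # outrage
```

Validating against the codebook at parse time catches the common failure mode where the model invents a label outside the allowed set, before it silently enters the coded dataset.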