Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response by its comment ID.
Random samples (click to inspect):

- `ytc_UgxM3S_50…`: "Dear artificial intelligence and robots please listen up this is an important me…"
- `ytc_UgyfqfTdc…`: "they tell us that companies want applicants with ai skills but do not go over wh…"
- `ytc_UgwK9PQER…`: "Ugh, this feels like AI proaganda and it's unfortunate because you would expect …"
- `ytc_Ugzc24aLu…`: "I've been playing around with this idea. It's so much fun. The folks that built …"
- `ytc_Ugx2SEC2I…`: "The male AI robot was talking about taking over soon as he spoke he serious abou…"
- `rdc_mbn2k4u`: "It's great you're finding value in automated insights, but relying on an externa…"
- `rdc_eicmeds`: "Yes, all 50 states require vaccinations to be in school unless you get a waiver.…"
- `ytc_UgzZD4jiU…`: "Unfortunately a lot of articles make the anthropic study of the ai's \"TRYING TO …"
Comment
AI is what it is. Neither good nor bad.
How we use it is up to us… good or bad.
The military love it.
Organisational institutions like Government’s, the FBI, MI 5, Interpol, U.K. police, ICE, etc all use it to watch us.
But we have no need to panic.
AI has one, fundamental flaw: we learn from our MISTAKES and AI doesn’t make mistakes.
And like any actor… it will, eventually, run out of lines.
Unlike humans, AI has a limited amount of data!
Platform: youtube
Title: AI Moral Status
Posted: 2026-03-10T09:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
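A coded row like the one above can be checked mechanically against a fixed codebook. Below is a minimal Python sketch; the allowed value sets are an assumption inferred from the sample responses visible on this page, and the real codebook may differ.

```python
# Minimal validation sketch for one coded comment.
# ALLOWED is an ASSUMPTION inferred from the coded samples on this page,
# not the tool's actual codebook.
ALLOWED = {
    "responsibility": {"company", "user", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "resignation", "mixed"},
}

def validate(row: dict) -> list:
    """Return (dimension, value) pairs that fall outside ALLOWED."""
    return [(dim, row.get(dim)) for dim, ok in ALLOWED.items()
            if row.get(dim) not in ok]

# The row coded in the table above:
row = {"responsibility": "user", "reasoning": "virtue",
       "policy": "none", "emotion": "resignation"}
print(validate(row))  # [] -> every dimension matches the observed schema
```

An empty list means the row uses only values seen elsewhere in the data; anything returned flags a dimension worth re-checking by hand.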
Raw LLM Response
[
{"id":"ytc_UgxHxdIx1hGdm8vXPAh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgynrkaI9IhizR_7UCN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz496MHO-v-8bMDHFF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx7FSLH08PSXwZ2_594AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzNNAddUv_-2w6D2xZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzxdvUUKLPJr7tp3VV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx7mBJc_g0STyUV4EJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx8-tRYJKM4DNtqAAF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwFMz8MgXrVwOD0Ajd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzTB9IStAJBGWy-i4x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
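Looking up a response by comment ID, as described at the top of this page, amounts to parsing the raw JSON array and indexing it. A minimal Python sketch; the `index_by_id` helper and the inlined one-row response are illustrative, not part of the actual tool.

```python
import json

# A raw LLM response is a JSON array of coded comments, as shown above.
# Here only one row (the comment coded on this page) is inlined for brevity.
raw_response = '''[
  {"id": "ytc_UgwFMz8MgXrVwOD0Ajd4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "resignation"}
]'''

def index_by_id(raw: str) -> dict:
    """Map each coded comment's ID to its full coding record."""
    return {row["id"]: row for row in json.loads(raw)}

codes = index_by_id(raw_response)
print(codes["ytc_UgwFMz8MgXrVwOD0Ajd4AaABAg"]["emotion"])  # resignation
```

Indexing once up front turns each subsequent ID lookup into a constant-time dictionary access, which matters when a batch response contains many coded comments.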