# Raw LLM Responses

Inspect the exact model output for any coded comment.
## Random samples

- "Never had a problem with AI: a robot never stopped me from drawing what I wanted…" (ytc_UgwwPXcHw…)
- "I do not doubt new jobs will be created after implementing AI, what concerns me …" (ytc_Ugy4ZCbH6…)
- "I talk to ai on roblox and it literally suggests kinky things and then it starts…" (ytc_UgxYxlLhA…)
- "I challenge the widespread belief that only great fear will produce necessary ch…" (ytc_Ugx-pYi5c…)
- "Honestly none of this makes any sense. I think the real plan is once they automa…" (ytr_Ugyc_Hu6E…)
- "I’m not afraid of ai, I’m afraid of what corporations and the government can and…" (ytc_UgxilnsaM…)
- "I had a chat with chatGPT once and it says no. Our topic was “so-doing-as-if”. T…" (ytc_UgyUe6Eac…)
- "And they actually put this out there for people to view so we can know what it l…" (ytc_UgyR7-UBU…)
## Comment

> This is all just regurgitation of original content that the AI was trained on in response to prompts, nothing more. There is no intelligence in any of this. All of the warnings are not original insights from AI, the AI personas said that because a bunch of humans said such things in the data it was trained on. This is like the AI agent in google that summarizes a response from the first 5 reddit posts you find linked below it.

youtube · AI Moral Status · 2025-10-10T03:3…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
## Raw LLM Response

```json
[
{"id":"ytc_UgxGQUmIpLeFlVIDfUt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgypjQqT3e-Zz3saSR14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzAtvqbmQte7oFZz7Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyveMc7u-7ne9DBptZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw_Kf8CwqOhrrLwZnh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxfruevPX7EW-ohjCt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxVUIvqOJ-FpeC_iV54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwr6eTh2pBwRoE1BxJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyZhBD0hBQfAb5GL4p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgynphV8hjpJa61EQld4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
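The raw response is a JSON array of coded records, one per comment, each carrying the four coding dimensions shown in the result table. Below is a minimal sketch of how such a response could be parsed and validated before use. The function name `parse_codings` and the `ALLOWED` vocabularies are illustrative assumptions: the allowed values are simply those observed in this particular response, not an official codebook, and this is not the pipeline's actual implementation.

```python
import json

# Hypothetical value vocabularies, inferred only from the values that
# appear in the response above (not from any official codebook).
ALLOWED = {
    "responsibility": {"company", "ai_itself", "none", "developer", "government"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"outrage", "indifference", "mixed", "fear", "approval"},
}

# Two records copied verbatim from the raw response, as sample input.
raw = """[
{"id":"ytc_UgxGQUmIpLeFlVIDfUt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw_Kf8CwqOhrrLwZnh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""


def parse_codings(text: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM response and index codings by comment ID,
    rejecting any record whose dimension value is outside ALLOWED."""
    out: dict[str, dict[str, str]] = {}
    for rec in json.loads(text):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
        out[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return out


codings = parse_codings(raw)
print(codings["ytc_Ugw_Kf8CwqOhrrLwZnh4AaABAg"]["policy"])  # regulate
```

Indexing by comment ID mirrors the "look up by comment ID" view above: once parsed, any coded comment's dimensions can be retrieved directly from its `ytc_…` identifier.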