Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up any coded comment by its comment ID, or browse the random samples below.

Random samples:

- `rdc_nzjusjy` — "True, but it was obvious to anyone with a brain that image generation was going …"
- `ytc_Ugygm3a1L…` — "Let's say you're a Youtuber or Twitch streamer with a respectable following who …"
- `rdc_d8agsau` — "Just think, by the time we're old enough to go to one of those filing cabinets t…"
- `ytc_Ugyhb6gs8…` — "Why is it ethical to make AI smarter and unethical to make human smarter? Organi…"
- `ytc_UgzlFnAFI…` — "My ai did it just fine 🤷♂️ maybe use comfyui with correct checkpoints and lora…"
- `ytc_UgzgttZNt…` — "I almost expected to see two AI systems debate the future of humanity...... I'm …"
- `ytc_UgzcMFjj6…` — "Nowadays whenever I found some famous authoriative people giving any kind of spe…"
- `ytc_Ugxe64cT0…` — "I CAN NOT believe people defend AI art. it is theift, you cant defend it…"
Comment (youtube, video "AI Moral Status", 2017-12-24T10:1…, ♥ 4; text reproduced verbatim):

> Watching this while watching the show Humans that is about synthetic humanoids that gain sentience and feel emotions similar to how we do though in what I would call a more logical than us possibly since they are robots and have a higher intelligence but are learning HOW to feel at the same time like a newborn almost. And really I would say personally I would be pro robotic rights but it seems silly to make a toaster of all things sentient when there could be a robot that handles all the kitchen appliances that itself is sentient. Unfortenatlly with that though is one fact "robots are created to serve a purpous" and if they decided they didn't want to do what they were designed for we would therefore have no reason to continue to construct them if they will not perform that purpous. So it is a tricky idea of how to keep robots and humans happy. I mean you don't want your toaster to not make your toast one day just because they don't feel like making you toast today.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwPnVTZLgeQd113hbN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxp4H2J2kugobdVj_h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxaMbXI8jh41YUDk6R4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzVSF7g6eN5-sIa_ut4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxcxcNHIhyH--wHvIB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw6C7qkHWjrNA7SoiF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzghnUtyB_joJSYCxN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxLerjrrR_conV1s214AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyLioyfiEhCrl3OQrB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyggFjE5wPC50XZi0l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
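As a minimal sketch of how a single comment's coding can be recovered from one of these raw batch responses: the model returns a JSON array in which each object carries an `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`), so a lookup is just a parse and a scan. The helper name and the truncated two-entry sample below are illustrative, not part of the actual pipeline, and this assumes the model output parses cleanly as JSON.

```python
import json

# A raw batch response as returned by the model, shortened to two entries
# for illustration; the field names match the coding dimensions above.
raw_response = """
[
  {"id": "ytc_UgwPnVTZLgeQd113hbN4AaABAg", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxp4H2J2kugobdVj_h4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "outrage"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw batch response and return the coding dict for one
    comment ID, or None if that ID is not in the batch."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_Ugxp4H2J2kugobdVj_h4AaABAg")
print(coding["emotion"])  # -> outrage
```

In practice the parse step would also want to handle malformed model output (e.g. a `json.JSONDecodeError` when the model wraps the array in prose), which is why inspecting the exact raw response, as this page allows, is useful.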