Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_Ugzn0naB8…`: History repeats itself. Garden of Eden (knowledge of good and evil). Regal "blac…
- `rdc_cwlmyr1`: I'm happy to see a conversation on Kant and Kant's ethics started here. A few th…
- `ytc_UgzvN2dp_…`: I think his whole raving and commitment to AI art is caused by issues with his b…
- `ytr_Ugy-LA-03…`: Just because you think you can fly doesn't mean you're actually going to be able…
- `ytr_UgxPHUyCz…`: @undecidedmiddleground5633 what i see is a resource problem, not convinced it w…
- `ytc_UgwFCPaL4…`: The main problem is jobs available for juniors is decreasing dramatically in the…
- `ytc_Ugy4c9PUw…`: I didn't agree to be a beta-test subject for "autonomous driving tests". This i…
- `rdc_nasjx3o`: I’ve watched the video, and some of ChatGPTs responses to him just sound like li…
Comment

> I hilariously love all these AI science warnings 😅😅😅 its like someone is holding an axe to their owm foot and raising their arms to swing and saying "This axe is gonna chop off my foot!!! This is dangerous!!" Lol ok Put the axe down and just stop writing code.

Source: youtube · AI Moral Status · 2025-12-20T04:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id": "ytc_Ugyp2plNs1aGlzKTYRh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzLzfsQu2KtdxM3dLd4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw9N1xoiFm_NMvfd2F4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgymPMUO6vX6yNA9Zap4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxurOzvUbsGjqxJohJ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw90-ZuWJ1hbaoLrlV4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzYqcpfE546HfkgHOB4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwNb6NgsgtaHTG9RJZ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwi9HhUfu7ounFwOgR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyN5-Ci_fMa_Hwv3NB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
```
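A batch response like the one above can be parsed and sanity-checked before merging into the dataset. Below is a minimal sketch; the allowed label sets are inferred from this sample output alone, not from a documented schema, so treat them as assumptions:

```python
import json

# Allowed labels per dimension, inferred from the sample batch above
# (assumption, not a documented schema).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "resignation", "unclear"},
}


def parse_coding_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coded dimensions}.

    Raises ValueError on an unknown label so a bad coding fails loudly
    instead of silently entering the dataset.
    """
    coded = {}
    for row in json.loads(raw):
        comment_id = row["id"]
        dims = {}
        for dim, allowed in ALLOWED.items():
            value = row.get(dim, "unclear")
            if value not in allowed:
                raise ValueError(f"{comment_id}: unexpected {dim} label {value!r}")
            dims[dim] = value
        coded[comment_id] = dims
    return coded
```

Looking up a single entry then mirrors the Coding Result table: `parse_coding_batch(raw)["ytc_Ugwi9HhUfu7ounFwOgR4AaABAg"]` returns the developer / consequentialist / none / indifference row shown above.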