Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment directly by its ID, or start from one of the random samples below (snippets and IDs are reproduced verbatim, truncated as in the source); a minimal lookup sketch follows the list.

Random samples:
- @alexgamble4718 thats the stupidest thing to do then. it will achieve nothing… (`ytr_UgwDVn0ZW…`)
- We understand that interacting with AI can sometimes feel a bit eerie. If you're… (`ytr_Ugwl93YnU…`)
- Telling AI to create something artistic that you can’t physically create because… (`ytc_UgxRUrNpZ…`)
- I think that robot was groping her from behind 😂😂 Robots really have started tak… (`ytc_UgwK4TFX_…`)
- People need to calm their shit over AI. Intelligence is not something to fear, … (`ytc_Ugi-b3JxH…`)
- You are very misinformed. I can program 3-5x faster and of higher quality using … (`ytr_UgyRhcypV…`)
- "Nightshade is abuse!" Yeah, how dare those artists abuse AI by not letting it … (`ytc_UgwTpsasi…`)
- The only way AI takes your job is if a human teaches tje AI to do your job,or yo… (`ytc_Ugxk4DjdH…`)
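For programmatic lookups, the only requirement is a store keyed by comment ID. A minimal sketch, assuming the coded results live in a local `coded_comments.json` file mapping IDs to their coded dimensions; the file name and layout are hypothetical, not this tool's actual storage:

```python
import json

def lookup_coding(comment_id: str, path: str = "coded_comments.json") -> dict:
    """Return the coded dimensions for one comment ID.

    Assumes a JSON object mapping comment IDs (the "ytc_..." / "ytr_..."
    prefixes seen above) to dicts of coded dimensions; hypothetical layout.
    """
    with open(path, encoding="utf-8") as f:
        index = json.load(f)
    if comment_id not in index:
        raise KeyError(f"no coding stored for {comment_id!r}")
    return index[comment_id]

# Full IDs are required; the truncated IDs in the sample list will not match.
record = lookup_coding("ytc_Ugx1mxNKiB8GX_mGHEp4AaABAg")
print(record["responsibility"], record["emotion"])  # e.g. user indifference
```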
Comment
Source: youtube · AI Moral Status · 2025-11-21T19:1…

> I'm sorry, Hank, you're really anthropomorphizing AI models here. Computers aren't going to colonize the world because they don't have desire. They don't experience want. They don't have biological processes that drive us reproduce or ensure our own survival. The real issue here is access. Given enough access, the intelligence model would mirror the perceived goals of the input data, which could potentially be harmful to whatever system they have access to, but there is nothing generative about AI. It will never have its own original thoughts or desires. These are distinctly organic properties
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
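The four dimensions are categorical, so malformed model output can be caught with a simple check before it is stored. A sketch; the allowed value sets below are only those observed in the batch shown next, not necessarily the full codebook:

```python
# Category sets observed in this batch; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "liability", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "approval"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty means clean)."""
    problems = []
    for dimension, allowed in ALLOWED.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}: unexpected value {value!r}")
    return problems
```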
Raw LLM Response
The model's verbatim output for one coded batch (here, ten comments coded in a single call):
[
{"id":"ytc_UgyBnx5CxIB82ljFO1N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwW03VC3ed2y9oK7gZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwn25wAFkJnqENhYT54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwC6zBKJni2YOF4I1p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwpPkkyw0y8NgioAdN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx1mxNKiB8GX_mGHEp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxhGOWDsh16fzeUiC14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxnaKHF1_ABsdOJPcZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzyB0hHhiI8cUwx-hV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxUHgo7TyISEkd9SNJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
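Since the raw response is a JSON array with one object per comment, turning it into the ID-keyed index assumed above is a single pass. A sketch; the fence-stripping step is a precaution because some models wrap JSON output in markdown code fences, even though the response shown here is bare JSON:

```python
import json
import re

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse one raw LLM batch response into {comment_id: coded dimensions}."""
    # Strip an optional ```json ... ``` fence before parsing.
    cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip())
    records = json.loads(cleaned)
    return {record.pop("id"): record for record in records}
```

Run on the array above, `parse_batch` returns ten records keyed by full comment ID; every entry here also passes the `validate` check from the previous sketch, since the allowed sets were taken from this same batch.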