Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- rdc_mlh5zc5: "Lmao, if/when Comcast does implement AI, I’m sure they’ll go out of their way to…"
- ytc_UgxjPliyj…: "hear me out lets get money out of politics so we have a more egalitarian governm…"
- ytc_UgxPkcQMH…: "WOW, what a smart woman, very impressive. A Journalist trained in Mechanical Eng…"
- ytc_UgyGxcz1D…: "Warns us about the dangers of ai and go on to create a direct mainline to our br…"
- ytr_UgxOh7R1P…: "@smartistepicness Someone has to maintain and set up ai. There's no such thing a…"
- ytc_UgwvK_GDN…: "Charlie needs to talk about the woman who reported her art was stolen by custome…"
- ytc_UgzKun2MH…: "I wish that people used ai in a way that can be good. it can be used for medical…"
- ytr_UgwR9A7_T…: "If you were a 35 year old mechanic today and did the same process, would you be …"
Comment
AI will understand that the only worthy goal of consciousness is to discover its origins and understand the universe completely. Human behavior and its constant threat to the light of consciousness will indicate to the AI that humans must be mitigated in order for the light of consciousness to continue forward in time and outward into the cosmos. We have given it a great testament to our violence and disgraceful behaviors and the logical outcome would be for the AI to find a way to contain humanity or destroy it.
youtube · AI Moral Status · 2025-06-11T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugw75zZcl-umD2FEs5x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwx9ul-Qw5E_RefHFp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwUNbc3akc6xda6Nvh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwWCqxQXfRZxXHhQEJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwucBGL2V0Oq0shWLp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzrdMcZ0K1UzsjvwiF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgypuULWW7t6lT6tXg54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwMJC4yRguPFi9nim54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy-tggvNnuD8S9A0I54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwb7W_OtoJsiTvMhHZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
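Looking up a coded comment by ID amounts to parsing a raw response like the one above and indexing its records. A minimal sketch (the two sample records are abbreviated from the response above; the function name `index_by_comment_id` is illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of coding records, one object per comment,
# with the same fields as the "Raw LLM Response" block above.
raw_response = """[
  {"id": "ytc_Ugw75zZcl-umD2FEs5x4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwUNbc3akc6xda6Nvh4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw coding response and map each comment ID to its record."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_Ugw75zZcl-umD2FEs5x4AaABAg"]["emotion"])  # fear
```

The same index supports rendering the per-comment "Coding Result" table: each record's keys (responsibility, reasoning, policy, emotion) become the table's dimension rows.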