Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- @pattik_this is just another AI Bad video made by someone that will be replaced… (`ytr_UgzdkgYQW…`)
- He sounds like the AI he’s afraid of: extremely intelligent, yet completely off … (`ytc_UgzwA_6bV…`)
- ABSOLUTELY TRUE... CHINESE ARE TRANNING A ROBOT NOT STUDENTS. AND AMERICANS ARE … (`ytc_UgxE4FGZo…`)
- We have to thank this mother for coming forward. This is absolutely heartbreaki… (`ytc_UgwFi8eyh…`)
- @aiwkua "we human do this as well"; No that's wrong. Emotions come out of the bo… (`ytr_Ugw_XHoOf…`)
- Congratulations, sis! I’m from Kzoo & daddy was a professor. He supported b… (`rdc_mr1ouj5`)
- one thing is for sure. The more humans document and discus the AI problem and ho… (`ytc_UgzZrdF86…`)
- There is a better way to handle the AI and that is understanding it first. Bette… (`ytc_UgyV8Nw3q…`)
Comment
I agree with scientist and technologists that the worst scenario would not happen that way, plausible, but unlikely. In any event, AI would not be human, which it is a human trait to commit genocide. Also for AI to duplicate itself and send to space is unlikely. I believe there would still be the need to have a biological element, in which I believe there would be some merger between AI and human. It would ensure sustainability for both.
Platform: youtube
Topic: AI Governance
Posted: 2025-08-03T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_Ugw8bo98hbm2kDYhxR14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyM9WSB6qtyZa-QI0R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwoP7V8ctjiaGvPawF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx2Gc7yuNbtqZV4d-p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzea5k0n-h9ZAwcihZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzVn5uSBg9vENreBSV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyt_5Fdn3CPpePutMB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgyC35tQOQ_30F_m4IJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzjrkFx9LKpx0sAWSB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxWU1_DDHaF378hqGB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}]
```
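Since the model returns one JSON array per batch, looking up a coded comment by ID reduces to parsing that array and indexing it. A minimal sketch (the `index_by_comment_id` helper is hypothetical, and the sample response below is truncated to two entries; the field names `id`, `responsibility`, `reasoning`, `policy`, `emotion` match the raw output above):

```python
import json

# Truncated sample of a raw LLM response: a JSON array of coded rows.
raw_response = '''[
  {"id": "ytc_Ugw8bo98hbm2kDYhxR14AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyM9WSB6qtyZa-QI0R4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]'''

def index_by_comment_id(response_text: str) -> dict:
    """Parse one raw LLM response and index the coded rows by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgyM9WSB6qtyZa-QI0R4AaABAg"]["emotion"])  # approval
```

In practice the lookup would scan every stored batch response for the requested ID, which is what the "Look up by comment ID" field above does.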