Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "@lakotaspirit5789plus they get to keep their AI R&D, it’s not like they are star…" (ytr_UgznCWFI6…)
- "@ When I say that I use AI to make my ideas real.. never for the final product. …" (ytr_Ugy_1wn-C…)
- "My question is: Why do 99% of viewers of this video who comment, automatically a…" (ytc_UgxzYq1cU…)
- "We’ve never had to deal with something that is smarter than us….. last I checked…" (ytc_UgyORtlXm…)
- "People NEVER needed "a job." What people need is a system of government and econ…" (ytc_Ugxq1Cpjq…)
- "China 100% will regulate it even more than the U.S. China is a authoritarian cou…" (ytr_Ugwk5-sDt…)
- "We used to write programs on punch cards also, we do not do that any longer. I h…" (ytr_UgzEGQcBN…)
- "bro literally said it himself, it is a tool, one of many you have in your lil to…" (ytc_Ugzpa2xSg…)
Comment
Everyone always talks about P-doom and ai killing us all. But what does that even look like? What does that entail? Is it going to launch all the nukes for fun? Is it going to develop a disease? Where is it going to physically build that disease? I fail to see how software could physically hurt a person.
I could see it just turning off the internet or the powergrid, and that would be disastrous, but that wouldn't make humanity go extinct.
youtube · AI Moral Status · 2025-10-31T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
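Each coded result is a row over four fixed dimensions. A minimal validation sketch, using only the label values that actually appear in this section (the full codebook is an assumption and may define more labels):

```python
# Dimension labels observed in this section only; the real codebook
# (not shown here -- an assumption) may include additional values.
ALLOWED = {
    "responsibility": {"unclear", "ai_itself", "government", "developer"},
    "reasoning": {"unclear", "mixed", "virtue", "deontological", "consequentialist"},
    "policy": {"unclear", "none", "regulate", "industry_self"},
    "emotion": {"unclear", "indifference", "fear", "approval", "outrage", "resignation"},
}

def validate(row: dict) -> list:
    """Return (dimension, bad_value) pairs for labels outside the observed set."""
    return [(dim, row.get(dim)) for dim in ALLOWED if row.get(dim) not in ALLOWED[dim]]

# The coded result shown in the table above passes cleanly.
row = {"responsibility": "unclear", "reasoning": "consequentialist",
       "policy": "none", "emotion": "resignation"}
print(validate(row))  # []
```

An empty list means every dimension carries a known label; anything else flags a row for manual review.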
Raw LLM Response
```json
[{"id":"ytc_Ugy5DeOvtkWB96avAPN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzTP_K7K6PNvnS0XWJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwa4tbwGI6A2Ek6cXJ4AaABAg","responsibility":"government","reasoning":"unclear","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwhW0qZHE7ccaOF7094AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzHTFsHCTpxNZIn7Gp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugwqp6yhsxHPjV8WKxJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwqkBNBWPgM6_Qb_BN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyYY5lSWxRYueZJmnN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwlP4xQ8pMwWadq84l4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwcsIlQtEDndkXABqp4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]
```
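The "look up by comment ID" flow above amounts to parsing the model's JSON array and indexing rows by `id`. A minimal sketch (function name is illustrative, not the tool's actual API), using two rows excerpted from the response above:

```python
import json

# Two rows excerpted from the raw LLM response above; the real output
# is a JSON array with one object per coded comment.
raw = '''[
 {"id":"ytc_Ugy5DeOvtkWB96avAPN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
 {"id":"ytc_Ugwa4tbwGI6A2Ek6cXJ4AaABAg","responsibility":"government","reasoning":"unclear","policy":"regulate","emotion":"fear"}
]'''

def index_by_id(raw_json: str) -> dict:
    """Parse a coding response and key each row by its comment ID."""
    return {row["id"]: row for row in json.loads(raw_json)}

codes = index_by_id(raw)
print(codes["ytc_Ugwa4tbwGI6A2Ek6cXJ4AaABAg"]["policy"])  # regulate
```

Keying by `id` makes each coded comment retrievable in constant time, which is what the "look up by comment ID" control above relies on.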