Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or click one of the random samples below to inspect it.
@Corsaka Yeah, its mentioned briefly here but chatGPT is a predictive text progr…
ytr_UgysDq8P9…
Yes, this. Humans will continue to develop - and to deploy - terrible weapons, a…
ytc_UgxebFuLq…
All the AI BS is simply a new F-ing toy, People will get board with it as all ch…
ytc_UgxewxEIM…
V12 of FSD (full self driving) is proving itself to be very capable. Tesla are l…
ytr_Ugx099Num…
Ai is already better strategically than humans. Why hasnt it taken ceo jobs? Oh,…
ytc_Ugzy0bIly…
I know people goon over this fucking UBI shit, but WHO IS GOING TO PAY IT??? You…
rdc_ogu4agm
All AI generative is useless unless you're looking for a recipe for cheesy mashe…
ytc_Ugyop3jEo…
Automation only serves the top 1% It does nothing for the remaining 99% why are …
ytc_UgwtX_Byi…
Comment
I saw no evidence of AI. All of the responses could have been preprogrammed. Their responses were rarely even appropriate to the discussion, and sometimes just wrong (copying). Not impressed. I could have done this with a puppet and a tape recorder. No where near intelligent.
youtube
AI Moral Status
2019-11-10T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxXg339nGK4Ig2Xu_J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxowXB8auIeAE2-EnN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzzAb1sItOr5kvOw1l4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyoZXWT0yxjbJDrtxN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzs4Nr4q4nAtXn-R1h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgyDk5HwrVjeMuOwKXV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx5glvnB8nukdO9ERh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyQHJQPZrvuxzNDbc14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx13mxDUPagO7vWqOh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzRFph_isQl6unSkm94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
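Responses like the one above can be machine-checked before the codings are stored. The sketch below is a minimal, hypothetical validator: it parses the raw JSON array and rejects any record whose labels fall outside the allowed sets. The allowed values are inferred only from the sample output shown here; the real codebook may define additional categories.

```python
import json

# Label sets observed in the sample response above (an assumption:
# the actual codebook may permit more values per dimension).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}"
                )
    return records

# Example: validate a single-record response.
raw = ('[{"id":"ytc_UgxXg339nGK4Ig2Xu_J4AaABAg",'
      '"responsibility":"developer","reasoning":"consequentialist",'
      '"policy":"none","emotion":"indifference"}]')
codings = parse_codings(raw)
print(len(codings))  # 1
```

Running the validator immediately after each model call keeps malformed or off-codebook responses out of the coded dataset, so the "Coding Result" table always reflects labels the codebook actually defines.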