Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
What makes humans human is a human soul which is a spark of Divine Spirit, which…
ytc_UgzZFd3qd…
Holoreicht Punching Card (IBM) hunt Jews in II World War .
Ai has to be limited …
ytc_Ugy_RFMgr…
Shouldn't have been following too closely. A self driving car wouldn't be follow…
ytc_UgxPAmY7o…
Sure, Robotaxi is "brand new" at it, but why does Tesla tout its billions of mil…
ytc_UgzCqbaid…
If self driving cars become the norm, I would personally kill a motorcyclist ins…
ytc_UggUGDnRA…
This is crazy! Makes me think about how advanced AI is becoming. I've been using…
ytc_UgxzduZXw…
Theres no surviving with work in the long run. Ive been stuck working with stagn…
ytc_UgyutPy16…
Maybe the real answer is learn to code.
I know that sounds like a dismissive, m…
ytc_UgyuVzPKC…
Comment
48:11 “Predicting data that humans created often requires you to be smarter than the humans that created it” seems either disingenuous or wrong to me, given that we are asking of an AI that it produces an output that fits a range of responses that we see as acceptable, performed in such a way where it specifically doesn’t need to understand any component parts or anything at all, it simply has to conjure something that sounds reasonable. Extrapolating theories from a set of data is difficult, but popular word selection from aggregates of other extrapolation is bullshitting, not extrapolating. Creating a prediction in this way doesn’t require that one understands anything at all, I’m reminded of the “Chinese translation box” thought experiment, where someone has a set of symbols and instructions to give certain symbols out of the box when they receive other symbols into it- in this way the box could seem to speak a language even when the thinker in the box knows nothing about it.
My point is- it’s not a sign of intelligence that it is “predicting human data” because that could be done without any of the processes that we would ascribe to intelligence or being smart.
youtube
AI Moral Status
2025-11-17T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzygqCafbRLsp9Xr194AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugywgk6du9hbvl99LO94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyl3QgrWOTtl6hKe3R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzPh9ySYWWVptvVjrF4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyyxM9y89cm6W4WC954AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy4E7InsIdi_3w7hNB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwyn9yX1AMEJtOc7114AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy9JSmCZTyTbp2N4NZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyfrNfhl5S1I770on14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwmRXUeGPtQkWYsN-p4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
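For downstream analysis, a raw response like the one above has to be parsed and checked before its codings are trusted. The sketch below is a minimal, hypothetical validator: it assumes the response is a JSON array of records with an `id` plus the four dimensions shown (responsibility, reasoning, policy, emotion), and the allowed values are inferred only from the responses visible on this page — the real codebook may define more categories.

```python
import json

# Allowed values per dimension, inferred from the responses shown above.
# Assumption: the actual codebook may include additional categories.
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"fear", "mixed", "indifference", "approval"},
}

def validate_codings(raw: str) -> dict:
    """Parse a raw LLM response and index valid codings by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not (cid and cid.startswith("ytc_")):
            raise ValueError(f"missing or malformed comment id: {rec!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected value for {dim}: {rec.get(dim)!r}")
        # Keep only the schema dimensions, keyed by comment ID for lookup.
        coded[cid] = {dim: rec[dim] for dim in SCHEMA}
    return coded
```

A lookup-by-ID view like the one this page offers can then be served directly from the returned dict, e.g. `validate_codings(raw)["ytc_Ugy9JSmCZTyTbp2N4NZ4AaABAg"]["policy"]` yields `"regulate"` for the response above.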