Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I get all my ideas from myself and i get others or ai to pick a character name o…" (ytc_UgwL84Rpe…)
- "People who generate ai art who didnt even say they are a artist and just want to…" (ytc_Ugwr1keTx…)
- "Its funny how ai development is more important then the development of the human…" (ytc_UgzKgIWLO…)
- "1:32:51 ok, but if you keep giving the lower rung tasks to an AI instead of a hu…" (ytc_UgzQ6Fizk…)
- "Ladies and Gentlemen the AI creators have confirmed exactly what we can very con…" (ytc_Ugzsrr8Z6…)
- "horse -> solves (among other things) transportation / car -> solves transportation…" (ytc_UgyKjY4US…)
- "If farms replace people with tractors, there will be factories needing people to…" (ytc_Ugwc_F6-D…)
- "I have no doubt A.I. will turn our world will turn into a form of Skynet. And wh…" (ytc_UgyHzjUHE…)
Comment
@AndyHeynderickx For me, Roman exhibited an attitude of pessimism on the back of a few general and vague premises, but made little in the way of meaningful arguments -- at least toward any interesting, novel or existentially-concerning point. Not, at least, in this interview. I agree that AI poses risks -- perhaps even existential risks -- to humanity. Sime trivial ones were mentioned by Roman (such as "i-risk"). However, the popular conceptual picture of AGI "turning against" humanity (implied throughout the interview) requires many steps of logic no matter which hypothetical causal path you run down, among those steps (very frequently and specially here in Roman's case) that you somehow get artificial pernicious intentionality from an evolution of artificial intelligence. This is a typically overlooked step in the logic and it was again missed here. It's hardly obvious that any amount of build up intelligence in AI/AGI will lead to the emergence of artificial intentionality. What seems more likely is that that this will not happen. Intelligence, intentionality, consciousness are often conflated. Roman doesn't seem to consider any of these nuances at all. And the many more associated nduances.
Source: youtube
Posted: 2024-09-16T02:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
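
Downstream analysis generally assumes each dimension takes one of a small set of labels. The sketch below is a minimal validity check for a single coding result; the label sets are only the values visible on this page, not necessarily the full codebook, and the function name is an illustrative assumption.

```python
# Minimal sketch of a validity check for one coding result.
# The label sets below include only values visible on this page,
# not necessarily the full codebook; extend them as needed.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"fear", "resignation", "indifference", "approval", "mixed", "unclear"},
}

def invalid_fields(coding: dict) -> list:
    """Return the dimensions whose value falls outside the allowed label set."""
    return [dim for dim, allowed in ALLOWED.items() if coding.get(dim) not in allowed]

# The coding result shown above passes the check.
print(invalid_fields({"responsibility": "none", "reasoning": "mixed",
                      "policy": "none", "emotion": "mixed"}))  # -> []
```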
Raw LLM Response
[
{"id":"ytr_Ugzrd39tEyjjXSkYrPl4AaABAg.AD59lECMKJtAD5AK5yeFQt","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_Ugw_oUTPvkZvUAZMXTZ4AaABAg.A9fxu00uBWzA9fyIl5WhNs","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugy7qB6VjNJiwEdJP1h4AaABAg.A8jHrAKTHp1A9wmlpSsNII","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugwpody1QhIuO8bFVE94AaABAg.A4f7IQxEqQoA8QUsZb7BNG","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugwpody1QhIuO8bFVE94AaABAg.A4f7IQxEqQoA8R1E0D7qbR","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_UgxPQTWkwALaEzc6GuZ4AaABAg.A4bcrwKd-8ZA4k5qWB_Ceq","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgxPQTWkwALaEzc6GuZ4AaABAg.A4bcrwKd-8ZA4lTlCFcS_P","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytr_UgwHgKmDvhYLMl7-Dnp4AaABAg.A4TsNZ1PLy2A4TxelVLyJs","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytr_Ugxj2bYDcZz5H-KZP2d4AaABAg.A4TnkaIE9F4A5P_XAoclgl","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytr_Ugxj2bYDcZz5H-KZP2d4AaABAg.A4TnkaIE9F4A60JrAS0VT9","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
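
For programmatic inspection, the raw response is an ordinary JSON array of coded records and can be matched back to a comment ID directly. The sketch below is a minimal example, assuming the response has been saved locally as raw_response.json and that every record carries the id, responsibility, reasoning, policy, and emotion fields shown above; the file name and helper function are hypothetical, not part of the tool.

```python
import json
from typing import Optional

# The four coded dimensions shown in the table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup_coding(path: str, comment_id: str) -> Optional[dict]:
    """Return the coded dimensions for `comment_id`, or None if it is absent."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # a JSON array of records like the one above
    for record in records:
        if record.get("id") == comment_id:
            return {dim: record.get(dim, "unclear") for dim in DIMENSIONS}
    return None

if __name__ == "__main__":
    coding = lookup_coding(
        "raw_response.json",  # assumed local copy of the raw response
        "ytr_Ugzrd39tEyjjXSkYrPl4AaABAg.AD59lECMKJtAD5AK5yeFQt",
    )
    if coding is not None:
        for dim, value in coding.items():
            print(f"{dim:>14}: {value}")
```

If the model ever returns malformed output, json.load raises json.JSONDecodeError, so a loader used in practice would want to catch that and flag the batch for re-coding rather than assume every response parses cleanly.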