Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
star trek .... you do things because you want too.... Run a vineyard for free bu…
ytc_UgzRbVrFC…
the thing about this that doesn't add up... is if AI can do everything, then who…
ytc_Ugw5hlnqu…
Brotha Listen up 👂👂👂so the thing is that I respect artists 😃😃 and all as im a a…
ytc_Ugz2a9VgO…
15:30 after the suggestion of its consciousness Alex asked if its reaction was t…
ytc_UgztXu9z6…
The hand is so weird in the first one on the phone like that 😅 definitely ai…
ytc_UgyDg9X_W…
This is already widely done, many companies already have software that sor…
ytr_UgyhWxfsV…
Claude overall feels like the most 'emotional' AI out there, but I wouldn't call…
ytr_UgxLO-djy…
I don't see how AI is a threat to humanity. Computers require infrastructure (el…
ytc_UgxP4PPXP…
Comment
Today's AI is what used to be called expert systems, but on a grand scale. There is no intelligence in them; instead they map patterns. Patterns are very useful, and when you control a lot of them it can look like something smart is going on, but no.
For AGI we have the problem that we do not really know how it works in our own brain, and do not know how we could construct something to do the same. Even if we could make a simulation, it would be very slow for a long time.
youtube
AI Moral Status
2025-10-30T20:3…
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugxi_WQDxjBUM3DxMXV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyubHlk3SYTc5ECco14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"sadness"},
{"id":"ytc_UgwIP0X6C2Uh3Db8qat4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwXGNnOa3vEPzKtm814AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"disapproval"},
{"id":"ytc_UgzxM-ZpKZHHmQchi7d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxdZC77W8Sk51DN1hl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz0EPssorPnG-CUiWx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyL_J9maIR2t9Q5PMp4AaABAg","responsibility":"expert","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzxZWeZ9v_2i70bTUh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzP2ObOZA0ZLAXZoTB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
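A downstream consumer of these raw responses typically parses the JSON and checks every record against the codebook before storing it. A minimal sketch in Python, assuming the four dimensions shown above and value sets inferred only from this sample (the real codebook likely defines more categories):

```python
import json

# Allowed values per dimension, inferred from the sample response above;
# the actual codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "distributed", "user", "expert", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"fear", "sadness", "outrage", "disapproval", "resignation", "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose codes are all known."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # an uncodeable record: no comment to attach it to
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical record in the same shape as the raw response above
raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]'
print(validate_codings(raw))  # the record passes all four dimension checks
```

Records that fail validation can be queued for re-coding rather than silently dropped; the lookup-by-comment-ID view above is what makes auditing those failures practical.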