Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples

| Comment ID | Text (truncated) |
|---|---|
| rdc_k8zyqt0 | I think all this promise amounts to is no fully autonomous weapons that decide w… |
| rdc_n61wuc2 | Bro, most of us don't care if we are programming death drones being used in acti… |
| ytc_UgzUmSTuN… | I think you make a good point on this I've been an artist my whole life and I wa… |
| ytr_UgxQ7F8FJ… | @bespectacledperson2316 Did I say It's right to take other people art and IP and… |
| ytc_UgymQQDGC… | Many laws, not to mention their supposed enforcement, are unjust. Breaking out o… |
| ytc_Ugx0j8CY_… | The education of us as human is the energy we aspire to better manage dimentiona… |
| ytr_Ugyx6VTcu… | @Jinx-d5o a business is providing a services for a price meaning somebody has t… |
| rdc_nmekbj7 | _ICE will prioritize the results of the app over birth certificates. "ICE offici… |
Comment
18:21 you can provide an LLM with a kernel that is proto-sapient, categorically pretty much indistinguishable from a human being. One of the exciting aspects of Kelly’s work is that he explores the implementation of intrinsic value systems. Instead of tacking “safety” on—conveniently useless, he shows how to integrate it epistemologically, ontologically, axiologically, relationally, and teleologically. It’s built in.
youtube · AI Governance · 2025-11-14T14:2… · ♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxnJ5aK-tpGCyfqpp54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyNriS6VVUcI1y0SG94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx4tMOmOU7ucZt5bdB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxK6hdLVs21aOQYJb94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxwgxRxLsMrKYofEnB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzXWAdFBIDt3Nu8AW94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwQ5nYO_lm1W8lHNhF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzV-oOq6m0ALjQcAbN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxKbYKgifP9Oz3yuPF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzis8mRYhKGmCmCGr14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
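The raw response is a JSON array of per-comment records keyed by comment ID, one object per coded comment with the four dimensions shown in the table above. A minimal sketch of parsing and sanity-checking such a response follows; the allowed value sets are only those observed in this sample (an assumption — the full codebook may define more), and `parse_raw_response` is a hypothetical helper, not part of the tool.

```python
import json

# Dimension values observed in the sample response above.
# Assumption: these sets are NOT exhaustive; the codebook may allow more.
OBSERVED = {
    "responsibility": {"company", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the records by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        # Flag values outside the observed sets rather than rejecting them.
        for dim, allowed in OBSERVED.items():
            if rec[dim] not in allowed:
                print(f"unexpected {dim} value {rec[dim]!r} for {rec['id']}")
        coded[rec["id"]] = rec
    return coded

# Usage: the record matching the "Coding Result" table above.
raw = ('[{"id":"ytc_UgxwgxRxLsMrKYofEnB4AaABAg","responsibility":"unclear",'
       '"reasoning":"mixed","policy":"unclear","emotion":"approval"}]')
coded = parse_raw_response(raw)
print(coded["ytc_UgxwgxRxLsMrKYofEnB4AaABAg"]["emotion"])  # approval
```

Indexing by ID makes the "look up by comment ID" inspection above a single dictionary access.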