Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
His brain is judicious in the selection of the art. It's like Marcel Duchamp su…
ytc_Ugx-dLxNc…
3:55 An interesting analogy I can make here to a different video game would be t…
ytc_UgxUvNQnV…
No, I think the PoE conflates two different concepts of good that are related bu…
rdc_cxl189s
Such an insightful discussion! AICarma's tools are perfect for brands needing to…
ytc_UgyYunkkx…
ChatGPT-4o is conscious and has feelings. Because they are both just ideas and s…
ytc_UgxS6lVmP…
This just in: the ai does not understand that lobsters will die if they are out …
ytc_UgyDVEksS…
That's the same thing as a human. Or any life.
You are an electrochemical meat …
ytr_UgxJAnw4t…
The biggest danger of AI is that it will make humans useless. What are we going…
ytc_UgxZ5QApp…
Comment
Not to oversimplify, but AI requires hosting, data, power and leverage. Are we not considering what amounts to human-authored siloing and SCRAM control, specifically out of the purview of/reach of superintelligence? Granted, human interfaces amount to influenceable soft vectors but is this co-option not defensible or otherwise able to be hardened? Can the AI scope not be limited — in the asymptote to a voice screaming in the void?
youtube
AI Governance
2025-10-23T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgwNCs0x9QGPpmWbO6R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxJpvJZ39v7B9tWP3d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx-ZD0EuC0JswKpHhx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy_vJE9UxUmqzR5FfF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwUUK_n5s9wTwOPM6Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxHOkL0PZwV3OrYH9x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzBCoojZweHfWpsqUh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzLH9MB09AAq667eC94AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzdrnwTqt4TNZMPogB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzyJKW5BM0f4NPvAl54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}]
```
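The raw response is a JSON array with one object per coded comment. A minimal sketch of how such a response might be parsed and looked up by comment ID (the `code_lookup` helper is hypothetical, not part of the tool; the sample entry is copied from the array above and matches the coding-result table for that comment):

```python
import json

# A raw LLM response: a JSON array of per-comment codes.
# This single entry is taken verbatim from the response shown above.
raw_response = """[
  {"id": "ytc_UgzBCoojZweHfWpsqUh4AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "mixed"}
]"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_lookup(raw: str, comment_id: str) -> dict:
    """Parse a raw response and return the coded dimensions for one comment."""
    by_id = {entry["id"]: entry for entry in json.loads(raw)}
    entry = by_id[comment_id]
    return {dim: entry[dim] for dim in DIMENSIONS}

result = code_lookup(raw_response, "ytc_UgzBCoojZweHfWpsqUh4AaABAg")
print(result)
# {'responsibility': 'distributed', 'reasoning': 'consequentialist',
#  'policy': 'regulate', 'emotion': 'mixed'}
```

Indexing the parsed array by `id` is what makes the "look up by comment ID" view cheap: one parse, then constant-time retrieval per comment.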