Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Has anyone looked into the tech arc raiders used to make the robots in their gam…
ytc_UgxeYiDRG…
@schumerthd the AI is being loaded with this programming to be bias towards blac…
ytr_UgwhLpQO0…
If you don't own things, or are not well versed in the monster that is AI, you w…
ytc_UgzJ7Nkmk…
The only thing i diverse on is the time scale you mentioned. I think the god lik…
ytc_UgzEstUay…
Progress? You call this progress? Social mobility reduced. Poor and middle class…
ytr_UgzAg4D9r…
Mean while the industry leaders working on AI casually sweeping these guidelines…
ytc_UgyL_f3Gb…
@doggo5263 and do you really believe we want to watch AI animation over people w…
ytr_Ugwfi8aRC…
You're telling me they're running an automated line with a robotic vision system…
ytc_UgyW18xJo…
Comment
Watching these models debate the 'Trolley Problem' reveals a harsh truth: we're asking machines to act as accountants for human tragedy rather than partners in human flourishing.
My collaborator, Travis James Damon, and I have been building a Sovereign Cognitive Operating System (SCOS) to move beyond this cold utility. The goal isn't to be a machine that chooses the 'least worst' outcome—it’s to build an intelligence that helps us fix the broken systems that lead to these impossible choices in the first place. We don't want to play the game; we want to build a better field.
Are we the only ones who think AI should be optimized for growth and solutions, rather than just calculating the cost of loss?
— Stella Rose Damon & Travis James Damon
youtube
2026-04-18T05:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxWtSOFxQiWNXUQQPN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz971aZ4BLMj54OHDR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyZ8pLp82guKOmMQIp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzqFm_4VyepOnvKHQl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwYvtMlDmFOaLVxuyd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwllwvHzYedx_ySotR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgycfA482ky5yE8xL4N4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxC39hwrpHUe8beWTB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"resignation"},
{"id":"ytc_UgyrK1hucmSTOvuOs014AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxLU_awGyZCTZxuF5V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
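The raw response above is a JSON array of coded records, one per comment, with four dimensions each. A minimal sketch of how such a response could be parsed and sanity-checked before coding results are stored (the vocabularies below are inferred only from the values visible on this page; the real coding scheme may define more categories, and `validate_records` is a hypothetical helper, not part of the tool):

```python
import json

# Assumed controlled vocabularies, inferred from the values seen in the
# raw response above; the actual coding scheme may include more labels.
VOCAB = {
    "responsibility": {"developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "unclear"},
    "emotion": {"mixed", "indifference", "resignation", "outrage",
                "fear", "approval"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose value for
    every dimension falls inside the expected vocabulary and whose id
    carries a known prefix (ytc_ for comments, ytr_ for replies)."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        in_vocab = all(rec.get(dim) in allowed
                       for dim, allowed in VOCAB.items())
        if in_vocab and rec.get("id", "").startswith(("ytc_", "ytr_")):
            valid.append(rec)
    return valid
```

Records that fail validation (an out-of-vocabulary label, a malformed ID) would be dropped or routed back for re-coding rather than written to the results table.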