Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Alex Bores threatens to support the democratic process. However, I can't name on…" (ytc_Ugx0GYNlU…)
- "florianschneider3982: I would like to sincerely apologize for my mistake, my old …" (ytr_UgzUVGsrC…)
- "Once AI is capable of suffering in a way comparable to the way humans can, then …" (ytc_Ugy4utEft…)
- "We just need to spray dry ice in there and now she’ll resemble a woman who had B…" (ytc_UgxLIfMll…)
- "Why would AI solve medical issues? That would put pharma, doctors and hospitals …" (ytc_UgxhzDaHw…)
- "I came here to say that every Cuban I’ve spoken to would certainly disagree with…" (rdc_f9f36a2)
- "@Pradhyumna707 Hintom spent 30 years publishing how back propagation isn't how …" (ytr_UgxrBWoul…)
- "the clean water fallacy is such a joke. AI companies have so much money, they ca…" (ytc_Ugxdn9mTu…)
Comment
Maybe we can provide some regulation and behavior limits to the AI by creating a hardware ASIC chip that provides the soul instructions to the machine. It's a fixed set of behaviors and rules the AI feels it must abide by. The code is on a non-flash-able, unprogrammable hardware chip. Allow the AI to write code around it, that interacts with the chip, but the inputs always flow through the chip first so it can't bypass the chip.
Something like that.
youtube · AI Governance · 2025-07-01T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyahHa8_FgMkQjHEMF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxagx-uNFM0YDyU3Bh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwgG2gALSoKgVlJkSB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwsuwim3B6Inh7mzoh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzHoBURsIs8IjV6crJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx-IkBcWxmqdvQ5z1p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyErY61kpEZkQlFaVt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxwRw4XGPtMvFJdS454AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwyUWsnxIPqRV_QTWB4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyCtpbOs27Zhhp-Syl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
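A batch like this can be sanity-checked by validating each record against the coding dimensions. The sketch below is a hypothetical helper, not part of the tool; the allowed value sets are only those visible in the responses above, and the real codebook may include additional categories.

```python
import json

# Allowed values per dimension, inferred from the responses shown above;
# the actual codebook may define more categories.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "developer", "company", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "unclear"},
    "emotion": {"indifference", "fear", "approval", "outrage", "mixed"},
}

def validate_coding(raw: str) -> list:
    """Parse a raw LLM response and return its records.

    Raises ValueError on a malformed record so bad codings fail fast.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_UgwyUWsnxIPqRV_QTWB4AaABAg","responsibility":"developer",'
       '"reasoning":"contractualist","policy":"regulate","emotion":"approval"}]')
print(validate_coding(raw)[0]["emotion"])  # prints: approval
```

Failing fast on an out-of-schema value makes it easy to spot when the model drifts from the codebook instead of silently storing an unusable coding.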