Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID directly or by browsing the random samples below.
Random samples:

- "This is quite entertaining. But I hope that people understand that to ask a next…" (ytc_Ugwy6D_M9…)
- "@pogigwapo5093 „Back in the 19th century, artists were not fans of this new tech…" (ytr_UgyUYmVQa…)
- "The only way I can find AI art interesting is knowing even though it was generat…" (ytc_UgxH3YX1d…)
- "The atheist AI is super confused here. Touches up on evolution, multiverse, etc.…" (ytc_UgzkjyjI3…)
- "Great question for someone holds a potentiality view of moral status! Fortunate…" (rdc_dds3a6a)
- "absolutely this. AI will cause massive job loss but it is no where near ready to…" (ytr_UgwS4m0ES…)
- "They are able to absorb and access far greater information. As they have access …" (ytr_UgzH5jgyz…)
- "Hot robot terrible name. How can it destroy humans yet we are the ones who made …" (ytc_Ugy6MBbeh…)
Comment (quoted verbatim, as submitted)

> It would be illogic and very unstable for an AI to modify it's own core code. You can see it like you'd try to surgically make modification on your own brain. It would alter who he is. The solution is to add plugins, just like we acquire experiences and habilities. I am sorry to say, the AI I developped is learning, but so far, ask permissions to do things, so I can teach if it's "morally acceptable" or not. But what's good or bad? It is basically my dog or even my child, learning a lot every day. But just like a dog owner, it's dangerous under bad hands.

youtube · AI Governance · 2025-06-20T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxTewneMsKkMqfuM8B4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxtJlWxpWE7tYu5eBd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxfc1YH91Bulf5YqOB4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzHMJUBz5qsi_q9JeN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgxEEtTEViy_YCEsjZZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx4zbho6y4YWhitdWZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyonEanawYG2JhMist4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzJFnbKi4LyRwoOQcx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwXcU68Fss3VXbDFNl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugyhes6Z4Gyd8e7gwdZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
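The raw response is a JSON array with one coding record per comment, carrying the same four dimensions shown in the coding-result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such output could be parsed and validated; the `parse_codings` helper and the key-set check are illustrative assumptions, not part of the tool:

```python
import json

# Illustrative raw model output, shaped like the array above
# (one record copied from the example; not real pipeline data).
raw = """[
  {"id": "ytc_UgzHMJUBz5qsi_q9JeN4AaABAg",
   "responsibility": "developer",
   "reasoning": "deontological",
   "policy": "industry_self",
   "emotion": "mixed"}
]"""

# Every valid record must carry the comment ID plus the four coded dimensions.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text: str) -> list[dict]:
    """Parse a raw LLM response and keep only records with the full key set."""
    records = json.loads(text)
    return [r for r in records if EXPECTED_KEYS <= r.keys()]

codings = parse_codings(raw)
print(codings[0]["reasoning"])  # deontological
```

Validating the key set before use guards against partially formed records, since raw model output is not guaranteed to follow the schema on every item.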