Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
If an AI gets to an point where you can call it sentient, meta ethics may also b… (ytr_UgiOaXexr…)
A am amazed that writers who wrote stories for years about AI that that time is … (ytc_Ugzaw4SxN…)
Once your labor becomes completely worthless and they put everyone on UBI, thing… (ytc_UgzgO3PIF…)
Mark my words 😂😂 sure bro i will just spill some water on the robot… (ytc_UgwqJLhRF…)
actually there's an even better reason to not use AI at all: it is extremely was… (ytc_UgwACJcwu…)
Potentially, but not reliably as the models behavior changes frequently. The way… (ytr_UgzZb1mXZ…)
Why do you speak about algorithms as some sort of of magic that we cannot unders… (ytc_UgwaeuHXz…)
Wouldn't this scenario lead to a deflationary spiral? More productivity, more go… (ytc_UgwXfZezg…)
Comment
If the government today is corrupt, how can we ensure that we can trust them with AI? Don’t we need to first make sure that we have a government that we can trust with such a sensitive and important issue first.
And therefore, should we not look for ways to ensure that our government move from corruption to trustworthy first ?
youtube | AI Governance | 2025-06-17T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
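A coded record like the one above can be checked against the coding scheme before it is stored. The sketch below is illustrative only: the allowed value sets are inferred from the values visible on this page, and the real codebook may define more categories.

```python
# Allowed values per dimension, inferred from the codings shown in this
# dump (assumption -- the actual codebook may be larger).
ALLOWED = {
    "responsibility": {"government", "developer", "company", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coding result shown in the table above.
coded = {"responsibility": "government", "reasoning": "deontological",
         "policy": "regulate", "emotion": "fear"}
print(validate(coded))  # []
```

A record with a value outside the scheme (or a missing dimension) comes back with one problem string per failing dimension, which makes it easy to flag bad model output for re-coding.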
Raw LLM Response
[
{"id":"ytc_UgzwsVxz7jlVBKWgxBB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzh0P7ZKanYm-20qk14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxP97lWtU-KGH8mMa14AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwPNdOQFAUk4AumdlR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwiZFX3s6I5TlshOvB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyFdU4rebB6DN4L1mV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgywjLti0-nmtY6LnPR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyBNm8bsgTzH_y-4Mx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw04gAgC4apu6riHyR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyzbXHrCswjf-lghGl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
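The raw response above is a JSON array with one object per coded comment. A minimal sketch of turning such a response into an ID-indexed lookup (the "look up by comment ID" behavior described at the top of the page) might look like this; `raw_response` is a stand-in for the stored model text, not an actual API call.

```python
import json

# Stand-in for the raw model output string shown above (assumption: the
# model returns a well-formed JSON array of coding objects).
raw_response = '''[
  {"id": "ytc_UgyBNm8bsgTzH_y-4Mx4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''

records = json.loads(raw_response)

# Index by comment ID so one coding can be retrieved directly.
by_id = {rec["id"]: rec for rec in records}

print(by_id["ytc_UgyBNm8bsgTzH_y-4Mx4AaABAg"]["policy"])  # regulate
```

In practice the parse step would need a fallback (for example, rejecting the batch) when the model returns malformed JSON, since, as one of the sampled comments notes, model behavior can change between runs.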