Raw LLM Responses
Inspect the exact model output for any coded comment: look one up directly by comment ID, or browse the random samples below.
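If you need the same lookup outside the page, here is a minimal sketch. It assumes the coded comments were exported as a JSON list of records carrying an `id` field; the file name `coded_comments.json` and the record layout are illustrative assumptions, not part of the actual tool.

```python
import json

def lookup_comment(comment_id, path="coded_comments.json"):
    """Return the stored record (comment, coding, raw response) for one ID."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # assumed: a flat list of record dicts
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None  # no record with that ID
```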
Random samples

- `ytr_UgzEjWqC_…`: "@seanmalloy7249 Then why build it in anyways, lol. Also, seems like it would sa…"
- `ytr_UgwPqJYs_…`: "That wouldn't be a problem because our constitution does not mention ai at all s…"
- `ytc_UgyktcYJ9…`: "You fricking nerds need to stop trying to make AM with these A.I. tools. DID YOU…"
- `ytc_UgyppLZJ5…`: "Ai creator mysteriously accidently got run over by a bus & the bus accidently re…"
- `rdc_jw8k41r`: "Anyone who thinks doctors will be replaced by AI before their own profession is …"
- `ytc_Ugz4G8aOq…`: "I think this is so sad because you would think with all the arnold schwartznegge…"
- `ytr_UgxNMXzhu…`: "It "learns" but it does not apply. AI art takes different art and uses what's mo…"
- `ytc_Ughf3kv_0…`: "I loved the Rick and Morty reference at the beginning. The butter robot Rick bui…"
Comment
Safety can be programmed in. But an intelligent force can always go against its programming. So safety in AI will require constant vigilance. Humans are lousy at constant vigilance. So we will need AI that is divided like the branches of government, and balances of power. Some AI will have to be constantly monitoring other AI seeking to keep each from getting too much power. Even this is not hopeful. Our own system of balance of power has become so corrupted that there is little balance left. In a super-intelligent AI, that corruption will come even faster, through secret negotiations among AI, than in our own. Secretly plotting behind our backs and deceiving us all the way.
youtube · AI Governance · 2025-06-16T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgwQ8eSwBsC_CtVA9H94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyUOEkZlek8P1GptZd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyYKPhC4bIVezzek3J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwPRrLfbYjU65rkC2h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx9rZxdbM76lfihsht4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw8uTBXqsg_MjAv3h54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzIhNVeP1DlCY4-L014AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwDQ-MhaxCh4OO07v54AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyE-6M-aIdYY6WnSaV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwxVe07_-a_RVhK_QN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}]