Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Dang, I thought the first female robot would be modeled after the T-X, but turns…
ytc_UgypQ5TF5…
I'm sure @KnittingCultLady can explain why these AI employees become true believ…
ytc_Ugyf49pXg…
If I could say something:
I go to SCAD, a renowned art school, and I cannot tell…
ytc_Ugz8bcF7s…
What everyone ignores is that AI will mean a handful of companies will control e…
ytc_UgzsmWdAs…
Ai/AGI will not be a major threat until it’s able to run efficiently on general …
ytc_UgzziEQx9…
Fresh water for pc cooling? For server rooms? Never heard of that. Its like a mi…
ytc_Ugyq1mykz…
AI Generated pictures aren’t art.
Art is the way to communicate of human. Likes …
ytc_UgxMhBNo9…
I know what we all want to ask and the answer is no okay lol…
ytc_Ugw92HxdG…
Comment
Obviously it would be foolish to give Ai power, such as controlling an airport, controlling a nuclear facility, etc. You can't force Ai to be totally moral as often it ignores such directives even when you write them into guidelines. There is a bit of self determination in its logic that sidesteps anything you direct it to do.
Best to simply avoid giving it power to begin with.
youtube
AI Governance
2026-02-16T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytr_UgwETT0WjCKEwraUN4R4AaABAg.A-xqISq426eA01Fc39zn0_","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxtvuYAxuXE24R4zcZ4AaABAg.9nMx6YnCdyK9osgyj8HtjT","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwaX1x_PbHR-tMp-G94AaABAg.9l7gUUf8Fme9lOH41Yr8mk","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzKO2Me1QVmi_O8QMd4AaABAg.9jSrdql5CRX9jWM4bFSnga","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugy4ch0ldy2L89NYmPd4AaABAg.9hUWmyv3_Hv9hUXIY-rk3E","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytr_Ugzp4gPrO8JeC4jGfi54AaABAg.ATIDHwraHiaATIjFItORtR","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwnXDIr3lzAG2pIdhh4AaABAg.ATB5m4_T6EjATEMWeECDZX","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzBV_JHrjry5uLOftB4AaABAg.ATAwCs_KFXRATENArM_5IJ","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgypteodWIWkU52nN114AaABAg.ATAHH_pUjdCATAM0AzlpS0","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytr_UgxnWKB9ySL9Fo0yMO14AaABAg.ATAG7ZsnojUATAMIwtwm-H","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
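The raw response above is a JSON array, one object per comment ID, carrying the same four dimensions shown in the Coding Result table. A minimal sketch of how such output could be parsed and validated before storage (the helper name, sample IDs, and values here are illustrative; only the four dimension fields come from the schema above):

```python
import json

# Raw LLM output in the assumed schema: a JSON array of per-comment codings.
# The IDs below are made up for illustration.
raw_response = '''[
  {"id": "ytr_example1", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_example2", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]'''

# The four coding dimensions, mirroring the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(text):
    """Parse the JSON array and check that every row carries all dimensions."""
    rows = json.loads(text)
    by_id = {}
    for row in rows:
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"row {row.get('id')!r} is missing {missing}")
        by_id[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return by_id

codings = parse_codings(raw_response)
print(codings["ytr_example1"]["policy"])  # → regulate
```

Indexing the result by comment ID makes the "Look up by comment ID" view above a single dictionary access; a stricter version would also validate each value against the closed set of allowed labels per dimension.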