Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "When I was a resident I spent one week reading mammograms. I can honestly say t…" — rdc_fcsrkoo
- "@planC-b4y maybe you just don't realize the difference you've made you can't jus…" — ytr_UgzbhNwLD…
- "As a creative myself I feel like what most other creatives & art lovers fail to …" — ytc_UgxSx7Twu…
- "I'm sure it's private. It'll be another 2000 years before you'll see this implem…" — ytc_Ugxk-gU1S…
- "AI's memory is probably really good... oh its really good.. yeah... really good.…" — ytc_UgzofjLPx…
- "I don't know about it affecting millions, but it will definitely affect thousand…" — rdc_muj9om6
- "@therealnumber0nefoxfanI graduated from a useless degree in Animation just mon…" — ytr_Ugz2a4bxz…
- "I think robots working for us is more of what they wouldn't want. How about work…" — ytc_UgwlEs8e_…
Comment
It was a nice conversation, but unfortunately they forgot the elephant in the room: how do we align the AI to our needs when we ourselves don't know what we want? For example, AI must do no harm, but we also want to use it in war. The problem of giving contradictory goals to a logical system was already discussed in movies like Kubrick's 2001, and after 60 years I still don't see that anyone has a solution.

youtube · AI Governance · 2026-03-25T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzCWtIk6tt-KUIVgoh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy1mJft5KZo2N7vU7R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzSViSr65liaEYkxHl4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwYP4toxS1hS23zeUZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzQF0p98JwuTLpQaA54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxnNI1M6f27-ss5yMF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxIyxkS0OCTzS_-aUZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyE76D3zx3X1kULp2h4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz4jQBOIBsXS44-kC54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxrAICvJz2Ct5He0uF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
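Responses like the one above are easy to validate before they enter the dataset. The sketch below is a hypothetical validator, not part of the original pipeline: it parses a raw LLM response and rejects rows whose coding values fall outside the allowed set. The allowed values are inferred only from the sample output shown here; the real codebook may include additional categories.

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# above (assumption: the actual codebook may define more categories).
ALLOWED = {
    "responsibility": {"company", "developer", "distributed", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist"},
    "policy": {"regulate", "liability", "none", "ban"},
    "emotion": {"outrage", "fear", "resignation", "mixed", "approval"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with missing or unknown codes."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing 'id': {row!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim!r}: {row.get(dim)!r}")
    return rows

# Example: one well-formed row passes validation.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability","emotion":"mixed"}]')
print(parse_codings(raw)[0]["policy"])  # liability
```

Failing fast here means a malformed or hallucinated code is caught at ingestion time rather than silently skewing the dimension counts later.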