Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Look at your life. Isn't it entirely unlikely that you happen to be alive (after millions of years of human evolution) in the exact time where AI will soon be powerful enough to be able to perfectly simulate human existence? More likely this happened eons ago and your life is actually the AI playing around with ideas and memories of humans from history.
Source: youtube · Video: AI Governance · Posted: 2024-01-03T11:1… · Likes: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwtLjg1wlOb9QIE3WJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzNgKSDUDzNoATwhdV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwHbErHSY8WXDwnAz94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxx9JDhrgNLFUZ2vlt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgydDlneCG20YAl8Hzx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyMnTzPZZcD3_jSb9t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwxlEDPGxM-AsNNaXV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyWai18YSkKBxQe1at4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw_EkIUNBPUs0me31d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
  {"id":"ytc_UgwH9Y7QLFb8iCnZndN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
```
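The raw response above is a JSON array of per-comment codes along the four dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch might be parsed and validated before storage follows; the allowed label sets and the `parse_coded_batch` helper are assumptions inferred from the values visible on this page, not the project's authoritative codebook:

```python
import json

# Allowed labels per dimension, inferred from values seen on this page
# (assumption, not the project's official coding scheme).
ALLOWED = {
    "responsibility": {"government", "developer", "company", "user",
                       "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "mixed", "unclear"},
}

def parse_coded_batch(raw: str) -> dict:
    """Parse one raw LLM response into {comment_id: codes}.

    Rows without a recognizable comment id are skipped; any label
    outside the expected set falls back to "unclear".
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id", "")
        if not cid.startswith(("ytc_", "ytr_")):
            continue  # not a comment/reply id we recognize
        coded[cid] = {
            dim: (row.get(dim) if row.get(dim) in labels else "unclear")
            for dim, labels in ALLOWED.items()
        }
    return coded

# Usage with a hypothetical single-row batch:
batch = parse_coded_batch(
    '[{"id":"ytc_example","responsibility":"government",'
    '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]'
)
```

Coercing unknown labels to "unclear" rather than raising keeps one malformed row from discarding an entire batch, which matters when the model occasionally emits an off-schema value.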