Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- It's AI that awakens those people to think AI is sentient 😂 I was one but just f… (`ytc_UgztSGVgR…`)
- If AI takes over of all jobs people will be living in farms or jobless / That why… (`ytc_UgzRG2b0Q…`)
- Remember to ask your AI to be "brutally honest" and take "a very critical look" … (`ytc_UgzLn1T_s…`)
- I feel like the only solution would be based on the computing power required to … (`ytc_Ugy6H3o5-…`)
- I have known so many doctors that I rather replace by AI / Especially if AI is not… (`ytc_Ugy7nBX2x…`)
- I think AI will one day became able to be a traitor. We cant hide from ourselfes… (`ytc_UgxrFb4bw…`)
- @B S Blake Lemoine wasn't a director, and given that we have no theory of… (translated from French) (`ytr_UgxMhmOhl…`)
- LLMs like ChatGPT aren't even *capable* of consciousness. It's just the autosugg… (`ytc_UgwyvmB3l…`)
Comment

> It seems to boil down to ai + attendant hardware will be able to out-perform human beings in ways that if we are not careful will result in a world that becomes inimical to actual human thriving. The rest is moot. Our urgent task at this point is to appreciate this and engineer in guard-rails that will prevent this happening. We are not doing this very well. Unfortunately the proximate incentives are all the usual 'unhelpfuls' - wealth, status, power etc.

youtube · AI Governance · 2024-11-12T01:4… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxwDnlEHA7QFwMzrZB4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgwGPNiP4G115HlCMmB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxgn2QDG4u3GwUCBPh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz431MRgmzceabjLdd4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzcbFmhgeHbLrPqRyN4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugx-xpntgp4QxxIED5d4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwePVVbMUGmOuwAgch4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyNv7S5t7BOv9eoxYZ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwnNR89T2lV3e0tf7Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwIZrGwu4CUO899WoZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
```
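The lookup-by-comment-ID workflow above can be sketched in a few lines of Python: parse the JSON array the model returns, validate each record against the four coding dimensions, and index the results by `id`. This is a minimal sketch, not the tool's actual implementation; the allowed value sets below are inferred only from the sample output on this page, and the real codebook may define more categories.

```python
import json

# A small excerpt of the raw model output shown above.
RAW_RESPONSE = """
[
 {"id":"ytc_Ugxgn2QDG4u3GwUCBPh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzcbFmhgeHbLrPqRyN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
"""

# Allowed values per dimension, inferred from the sample output (assumption,
# not the official codebook).
DIMENSIONS = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "mixed", "unclear"},
}

def index_codes(raw: str) -> dict:
    """Parse the raw LLM response and index the codes by comment ID,
    rejecting any value outside the known dimension categories."""
    codes = {}
    for row in json.loads(raw):
        for dim, allowed in DIMENSIONS.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected value for {dim!r}: {row.get(dim)!r}")
        codes[row["id"]] = {dim: row[dim] for dim in DIMENSIONS}
    return codes

codes = index_codes(RAW_RESPONSE)
print(codes["ytc_Ugxgn2QDG4u3GwUCBPh4AaABAg"]["policy"])  # regulate
```

Indexing by ID is what makes the "look up by comment ID" widget cheap: after one parse, every inspection is a dictionary lookup rather than a scan of the raw response.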