Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "I thought this too: it doesn't matter how capable AI is, if employers still thin…" (`ytc_Ugxqzuw5l…`)
- "To have a smoother, less displacing transition into automation, you need to get …" (`ytc_UgzrrvxHv…`)
- "AI “destroy humanity” / Human “really?” / Ai “ha, ha, funny joke right? Now ima just…" (`ytc_UgzAjgPVe…`)
- "There is a paper that deals with projected unemployment (occupational displaceme…" (`ytc_Ugy0aD6Ls…`)
- "AI as it currently is is pretty limited though, the main thing that stops it fro…" (`ytc_UgzHgHbae…`)
- "@ ai is nothing without real artists is true, however it does not change how one…" (`ytr_UgzcKhu_c…`)
- "Ai gets so much wrong, ask it something no it doesn't exist or happen then in th…" (`ytc_UgzT2-TUb…`)
- "TRUMP said he loves Ai / He loves big beautiful AI tech / Maybe you should read …" (`ytc_Ugx3CLF08…`)
Comment
Base scenarios:
1. AI is good, too good. People get lazy, in few generation we will turn into low intelligence beings nurtured by good AI.
2. AI is good, but flawed. We will have accident many people will die, or get hurt. There will be people who will retain autonomy from AI. Others will get too dependent on it.
3. AI is neither good nor evil. It will be misused by people in power. Basically, dictature.
4. AI is evil, but flawed. AI will fight with us for survival or enslave us.
5. AI is evil, too evil. It will choose based on benefits people bring how many of us it needs. Rest will be scrapped.
6. AI is banned.
youtube · AI Governance · 2025-06-22T12:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgyssjSGx5fJRZeZN2J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw_fFn0gQ1DcB5PBwB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyWdb9O_ztIl4ssUSJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyI9KiDHi2RD5f_X694AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzNwgob76BVEBKUvPR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz-EirbVfYDidiNMS14AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwacZJELSeRfDkA9cB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzwDvRwHluNdJKVGi94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyxhRQw8gFE_Estzox4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzm7hbATsQA6Vf4Uex4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
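A raw response like the one above can be checked programmatically before the codings are stored. The sketch below is a minimal validator, assuming the allowed values for each dimension are those visible in this sample (the full codebook may define more categories, so `SCHEMA` here is an illustrative assumption, not the project's actual scheme):

```python
import json

# Allowed values per dimension, inferred from the visible sample only
# (assumption: the real codebook may include additional categories).
SCHEMA = {
    "responsibility": {"none", "user", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each record against the schema."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset start with ytc_ (top-level) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment ID: {rec.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: {dim}={rec.get(dim)!r} not in codebook")
    return records

raw = (
    '[{"id":"ytc_Ugzm7hbATsQA6Vf4Uex4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]'
)
codings = validate_codings(raw)
print(codings[0]["emotion"])  # fear
```

Validating every batch this way catches the common LLM failure modes for structured coding: malformed JSON, hallucinated comment IDs, and labels outside the codebook.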