Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
17:44 You can't just make a video against AI and then put this masterpiece in th…
ytc_UgwsVIbmz…
Nothing at all CREEPY about putting a person's "soul" into a robot body. That is…
ytr_Ugzc4GOJM…
My question to ChatGPT>>>>>>
If you were of the mind to do so, how would you as…
ytc_UgzYgT79M…
I think you’re underestimating it, or you’ve only barely used AI. I get your poi…
ytc_Ugxu-Gr_7…
I think talking to an ai is a lot more real than any god or spirit people care t…
rdc_mlihjxf
AI will have no problem implementing the code once we hit AGI. And we WILL hit A…
ytc_UgzTKYxyO…
The Seeds that grow your food that God gave to humanity are the original Ai. Fee…
ytc_UgwCfXbTi…
I’m not totally against ai but I’m not also with them. For me, ai could be used …
ytc_UgzfcScri…
Comment
Jack's "I don't know to put my finger on the existential risk argument and yet it receives so much attention" argument misses the point. Indeed, his (or my) ability to contextualize AI risk has nothing to do with the actual threats whatsoever. "Doubters" love to frame the discussion in familiar terms like nuclear bombs and pandemics, but those are by no means the only avenues open to ASI that wants to exert it's power.
youtube
AI Governance
2024-04-03T04:0…
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id": "ytc_UgwZ37SIU25J4uff10d4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzSw3eI6eec7IUhaa54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "investigate", "emotion": "fear"},
  {"id": "ytc_UgyQ_U6IpuB_hxTamQR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw5AbTiMw0JR2LPEH54AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxLArEOZkSvX3TCXHt4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyXGNxKnLhHq4L-XkF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy71Vyv5EP-gB59Rr14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwFPM4EhWrKVOv7ZA94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzNhK6NLEUC8bxVJsJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwF2gbFZNbTpw_XhrN4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
```
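A raw response like the one above is a JSON array of per-comment records, each keyed by the comment ID. A minimal sketch of looking up a coded record by ID, assuming standard-library Python (the `index_by_id` function name and the truncated sample string are illustrative, not part of the tool):

```python
import json

# A raw LLM response: a JSON array of coding records, one per comment.
# This sample is trimmed to a single record for illustration.
raw_response = (
    '[{"id":"ytc_UgwZ37SIU25J4uff10d4AaABAg",'
    '"responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"outrage"}]'
)

def index_by_id(response: str) -> dict:
    """Parse a raw response and index its records by comment ID."""
    records = json.loads(response)
    return {rec["id"]: rec for rec in records}

coded = index_by_id(raw_response)
print(coded["ytc_UgwZ37SIU25J4uff10d4AaABAg"]["emotion"])  # outrage
```

Indexing by ID mirrors the page's "inspect by comment ID" workflow: once parsed, any dimension (responsibility, reasoning, policy, emotion) for a given comment is a dictionary lookup away.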