Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I wonder whether p-doom, the probability that AI will somehow end humanity, is s… (ytc_UgxQp-Ck-…)
- All government ministers should be automatically replaced by AIs.… (ytc_Ugwq1yYtc…)
- I don’t know I just would love to be like if I worked late and I’m super tired a… (ytc_UgwqF9ANy…)
- Japan is smart. Trump has betrayed allies that have been at peace with the USA f… (rdc_e2w0duu)
- 😂 The atheist will say that it was found by coincidence, she has no creator… (ytc_Ugw3fkE1Q…)
- I am an economist and in academia I feel the same as you in art with AI. The pro… (ytc_UgxLGYUty…)
- they do not regulate such that altman can secure his revenue. btw altman is a s… (ytr_UgyszuAEi…)
- @zip10031 i meant that im not that good like other actual artists but i love dra… (ytr_Ugw_AyhQO…)
Comment
I find Wolfram to be more on topic than Yudkowsky is. Which is peculiar since lots of commentary here seems to disparage Wolfram. The host seems to be allowing Yudkowsky to run along but eventually even he tries to bring him to answer the question how AI is going to be an agent in the world. I don't think that Wolfram is denying risk to automation ( even extremely intelligent automation) but Yudkowskys non "sci-fi" view of doomsday seems inflated with conceptualizations that are stated and yes somehow undefined.
youtube · AI Governance · 2024-11-13T03:2… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzZjk-dccsmE4r1CbF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzN-dfsvH0_3hTj87Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyBrsbkOUjTW8bZHgt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy2Qq17d-rNew-K7hJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzmT97vvYHntMl9Y5d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxJEWyj3-VMGPf5UR14AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugw75_NQVGIiLn5jb9B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwIcUBDH-ncdjtaAw54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxkP3JTDL_ibbhpF8V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"frustration"},
{"id":"ytc_Ugz4NAtgI9yTWXsehN94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
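The raw response above is a JSON array of records, one per comment, each carrying the four coded dimensions from the table (responsibility, reasoning, policy, emotion). A minimal sketch of looking up a coded record by comment ID, assuming the model returns valid JSON in exactly this shape (the function name `parse_coding_response` is illustrative, not part of any real pipeline):

```python
import json

# A shortened example payload in the same shape as the raw response above.
RAW = """[
  {"id": "ytc_UgzZjk-dccsmE4r1CbF4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzmT97vvYHntMl9Y5d4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

# The four coded dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict:
    """Index coded records by comment ID, keeping only the four dimensions.

    Missing dimensions fall back to "unclear", mirroring the codebook's
    default value (an assumption, not confirmed by the source).
    """
    records = json.loads(raw)
    return {
        r["id"]: {d: r.get(d, "unclear") for d in DIMENSIONS}
        for r in records
    }

coded = parse_coding_response(RAW)
print(coded["ytc_UgzZjk-dccsmE4r1CbF4AaABAg"]["emotion"])  # indifference
print(coded["ytc_UgzmT97vvYHntMl9Y5d4AaABAg"]["policy"])   # regulate
```

Indexing by ID matches the "Look up by comment ID" affordance above: once parsed, any sampled comment's codes can be fetched directly.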