Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I think “partially sentient and/or sapient” is not very far from the truth when … (ytc_UgwTHya7Y…)
- Uncle Sam probably wants that sweet sweet customer data to help train AI models … (rdc_ekt2mbw)
- @harrisjm62 1. Wrong. Humans learning from other humans is what would be called… (ytr_UgyQ3jDLy…)
- The best part is that people make a fuss about generative AI but it doesn't help… (ytc_UgzyFH_Lk…)
- Worth bearing in mind, AI does NOT think, it only brings up statements that are … (ytc_UgxGWQ0Dm…)
- 5:18 this reads like some evil demonic entity has become aware humanity and is a… (ytc_Ugz92IypT…)
- so you include Adobe Microsoft Shutterstock etc in "people defending Ai" ? bc th… (ytr_UgwaPh6Br…)
- Amazon did lay off tens of thousands because of AI... just not because AI is doi… (ytc_Ugz1Tu5xL…)
Comment
Yudkowsky and Wolfram agree they want humans to go on living and not be subjugated or eliminated by an AI. Yudkowsky believes the risk is high enough to warrant heavy government regulation and, if needed, intervention to minimize the risk to the maximum degree possible. Wolfram does not see the risk as being high enough to invoke heavy government regulation or intervention, apparently relatively certain the artifact of his theory of 'computational boundedness' will act as a natural barrier to any act an AI could conduct that would significantly hurt us. I personally am in the first camp but possibly for a different reason than Yudkowsky. I'm in the first camp not because I believe AI is highly likely to 'evolve' to a point where it will initiate actions that will significantly impair or destroy humans, but because I believe humans will develop AIs that they will purposefully enable to initiate actions that will significantly impair or destroy other humans.
youtube · AI Governance · 2024-12-11T05:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
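One way to sanity-check a record like the one above is to confirm that each dimension carries one of the values the coding scheme allows. The sketch below is illustrative only: the allowed-value sets are inferred from the values visible on this page and in the raw response below, not taken from the tool's actual codebook, and the function name is hypothetical.

```python
# Illustrative only: validate one coded record against the dimensions shown above.
# The allowed-value sets are inferred from values visible on this page and are
# assumptions, not the tool's authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems found in a single coded record."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected value for {dim}: {value!r}")
    return problems

# Example: the record shown in the table above passes with no problems.
print(validate_coding({
    "responsibility": "distributed",
    "reasoning": "consequentialist",
    "policy": "regulate",
    "emotion": "fear",
}))  # -> []
```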
Raw LLM Response
```json
[
{"id":"ytc_Ugzyw7P6UIG7qr9orm94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz5qfO2p5ouopqxF9J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw5jx3JN_iJjVdgF-V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxgabcdIuRhNkDAGoZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzK0cxdklJv4XjEKQV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugwk38JoiF5nupttEiV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgxUpWrqOtfeJUqbHoB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw9Yn37_qtH16HPxL54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzRiCvRXTjY9wSaOpB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxrxwC9GQeGPZSOxHV4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
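Because the model returns one JSON array per batch, looking a comment up by its ID (the lookup flow at the top of this page) reduces to parsing the array and indexing it by the `id` field. A minimal sketch, assuming the raw response is valid JSON like the array above; the file name and helper function are hypothetical, not part of the tool:

```python
import json

def index_raw_response(raw_text: str) -> dict[str, dict]:
    """Parse a raw batch response and index the coded records by comment id."""
    records = json.loads(raw_text)
    return {record["id"]: record for record in records}

# Hypothetical usage: look up the coding for one comment by its id.
raw_text = open("raw_llm_response.json").read()  # assumed file name
by_id = index_raw_response(raw_text)
coding = by_id.get("ytc_UgxUpWrqOtfeJUqbHoB4AaABAg")
if coding:
    print(coding["responsibility"], coding["policy"], coding["emotion"])
    # -> distributed regulate fear
```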