Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
There is a fallacy in this logic. If a businessman is introducing AI in his comp…
ytc_UgyYELmcc…
AI is an existential threat to humanity because:
1. We don't know how it works a…
ytc_UgxGSwLIv…
My take?
AI art *IS* real art, but it's not *AUTHENTIC, HUMAN* art, so they're …
ytc_UgzuW8zsH…
I’m a hater of the ticking sound at the beginning of the video. I just almost co…
ytc_Ugy00cdd2…
I use AI just for research since my work involves a lot of reading. Even that is…
ytc_UgyiOFc_t…
when a image shows up on google search 90% of the time is free use so you dont n…
ytc_Ugxg30hw9…
we should be giving Ai jobs nobody wants and stop messing with peoples hobbies a…
ytc_UgydtxBS1…
To me conscious AI prediction seems like 60s Robotic prediction after 80 years w…
rdc_ioeqetc
Comment
Oh yes they are crazy enough. It's not just money. These people are ideologues. They want to build superintelligence, and they are willing to take massive risks to do so.
I wish he wouldn't have quoted the CEOs so much, because the truth is that the technical leads at those AI labs believe the same things, as do thousands of outside experts, including the most cited living scientist, AI godfather Yoshua Bengio.
Half of all published AI researchers say that the risk of human extinction from AI is at least 10%. ("Thousands of AI Authors on the Future of AI") The adjacent field of AI Safety has been talking about this for 20 years, and only recently did the field of AI as a whole start to recognize that they were right.
youtube
AI Governance
2025-08-28T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_Ugx5YCRGCoCkjdOM2m14AaABAg.AMId3fhlf7CAMO6veh1ih0","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytr_Ugx62XveXjdXxjqCsVp4AaABAg.AMIcmnCICcDAMMdvHjlh23","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwkpDcIxweX1zW-J2h4AaABAg.AMIbIuqtBGuAMIdcDqIvkS","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxPJj4BpTqnkrE_nO54AaABAg.AMIZAzLJO7LAMN_UFOapNE","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgwbER0RFf0wFJX3rAR4AaABAg.AMIXJ4MQKW8AMIYUakQJOq","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytr_UgwbER0RFf0wFJX3rAR4AaABAg.AMIXJ4MQKW8AMIqA8QeEHG","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgwbER0RFf0wFJX3rAR4AaABAg.AMIXJ4MQKW8AMIxtkGAAk4","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgwbER0RFf0wFJX3rAR4AaABAg.AMIXJ4MQKW8AMIyqskZDn4","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwbER0RFf0wFJX3rAR4AaABAg.AMIXJ4MQKW8AMKRlvcv37B","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgwOGSW5jAljACTEphh4AaABAg.AMIUm4ROxlIAMK7iytDiE1","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
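The raw response above is a JSON array with one object per comment: an `id` plus the four coded dimensions shown in the Coding Result table. A minimal sketch of parsing and validating such a batch in Python — the allowed value sets below are inferred only from the values visible on this page, not the tool's definitive schema:

```python
import json

# Dimension -> allowed values, as observed on this page.
# This is an assumption for illustration, not the full codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "indifference", "resignation", "outrage"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject malformed or out-of-schema records."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# Two records copied from the response above.
raw = '''[
  {"id":"ytr_Ugx5YCRGCoCkjdOM2m14AaABAg.AMId3fhlf7CAMO6veh1ih0",
   "responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytr_UgwOGSW5jAljACTEphh4AaABAg.AMIUm4ROxlIAMK7iytDiE1",
   "responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''
coded = validate_batch(raw)
print(len(coded), coded[0]["emotion"])  # 2 fear
```

Failing fast on out-of-schema values keeps a single malformed LLM record from silently corrupting the coded dataset.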