Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples
- “I don’t get why artist hate AI art! Like if enjoy drawing/painting just keep doi…” (ytc_UgxwSgOCI…)
- “Let's not forget that we still have power outages All around the US and AI can n…” (ytc_Ugz9FTaiI…)
- “Artificial Super Intelligence is several years away per Ai experts. This is why…” (ytc_UgyKfVtTS…)
- “human extinction (at least as of now and at least in the next 50-100 years) will…” (ytc_Ugyo_qx_-…)
- “I wonder what would happen if a human was driving in that scenario. It’s dark, c…” (ytc_UgwGwM90w…)
- “Why are people here being so obtuse on total emissions rather than per capita? …” (rdc_gtfjiqt)
- “I think the ai is already smart enough to screw humanity i think they are just w…” (ytc_UgxQmJ24v…)
- “Geoffrey Hinton is nauseating to watch. The man spends his entire life building …” (ytc_UgzIp7G9x…)
Comment
I think the real risk isn’t AI “taking over” or reaching true AGI, we don’t have anything close to the data of the whole universe for that. The bigger danger is AI in the hands of powerful people who use it to gain more power and control, slowly damaging the societies we’ve worked so hard to build. This is what we have been seeing for the last 5 - 6 years, the start of this leverage.
youtube · AI Governance · 2026-01-22T22:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwa-BrHzpu5zdTne854AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwlQdUqumdaalvbXMN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyVssxMfI-MHOqhiJt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxqTM0pnb0ULjZsOX94AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugzqr2pwWNsNSjNJsVB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxBSmqoV2upxHqxE8Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzbOJIm4rEL_fH9xkF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxh7-fqwj-1hqK_QSh4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzED8e1YDoKwoHrNlZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxXPlRdMHxPPIqFXil4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"outrage"}
]
```
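A raw batch like the one above can be checked mechanically before its labels are stored. Below is a minimal validation sketch; the allowed value sets are an assumption inferred only from the labels visible on this page (the real codebook may define more categories), and `validate_batch` is a hypothetical helper, not part of this tool.

```python
import json

# Allowed values per dimension, inferred from labels observed on this page
# (assumption: the actual codebook may permit additional values).
ALLOWED = {
    "responsibility": {"company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "mixed"},
}

def validate_batch(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of problems (empty if clean)."""
    problems = []
    for i, rec in enumerate(json.loads(raw)):
        if "id" not in rec:
            problems.append(f"record {i}: missing id")
            continue
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"{rec['id']}: {dim}={value!r} not in codebook")
    return problems

sample = '[{"id":"ytc_abc","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}]'
print(validate_batch(sample))  # → []
```

Flagged records can then be routed back for re-coding rather than silently written to the dataset.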