Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgxLWCy4p…`: The important issue is safety. I don't think route finding is an issue long term…
- `ytc_UgyuETEKQ…`: This chatGPT discussion was also done the other way around... and suddenly chatG…
- `ytc_UgxoUh5EB…`: I think the only way for artists to keep up with and use AI as a proper tool is …
- `ytc_UgzdAAzwh…`: There is only one real reality in this….and one question….how long have we got b…
- `ytc_Ugz5AYMrn…`: A lot of this framing treats AI as an independent intelligence, but what we’re a…
- `ytc_UgyNt1IJj…`: what if i use ai for serial killer and mafia role plays because theres nothing i…
- `ytc_UgzhfMP98…`: Obvious dilemma about controlling AI and thinking outside the human mental box, …
- `ytc_UgwXyWQ1f…`: Look at the history of humanity.... Are humans safe for other humans? No Train …
Comment
This man is not very knowledgeable.
He brags about having dealt with risks of AI 15 years ago while science fictions did that 50 years ago.
He's talking about 99% unemployment, which is obviously completely unrealistic. AI cannot replace EVERY human envolvement.
And what he says about mathematics and AI: of course it gets better and better in mathematics and I am not afraid of that at all. Computers take over work that wo take to long for humans. Just look at FEM, computers do calculations and simulations that would not be possible without them. Or would take far too long.
Sure there are risks and many people will be unemployed. But this guy themes elisabeth over doing this thing, sure he wants to make money I can understand that.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2026-04-11T09:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgwvASJ_Vuvv0j2v20N4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwK3p6Dp6Hz7rcIJNF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxm0A15v0b_vTdvOGR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy2rqT4jNb_vCX7twt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw7Zf4MEgjScW-cLb14AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyCVyX8NFX928rbIul4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw4ud59m5ZogOPU0Ex4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw-pRvIf_RgvHUNOsp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw-tKXUg_kpPS4IRJJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyq8ApFE4tRvkOG9mp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"}]
```
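A raw response like the one above is a JSON array of per-comment records, one object per comment ID, with one value for each coding dimension. A minimal Python sketch of how such a batch could be parsed and sanity-checked downstream (the allowed value sets are inferred from the records shown here and are an assumption, not the project's actual codebook; falling back to "unclear" mirrors the Coding Result table above):

```python
import json

# Allowed values per coding dimension -- inferred from the sample records;
# the real codebook may define more or different codes.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"approval", "indifference", "fear", "outrage", "mixed", "unclear"},
}


def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response and index the coded records by comment ID.

    Any dimension value outside the allowed set is coerced to "unclear",
    matching how undecidable codes are displayed in the result table.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                rec[dim] = "unclear"
        coded[rec["id"]] = rec
    return coded


raw = (
    '[{"id":"ytc_example","responsibility":"developer",'
    '"reasoning":"deontological","policy":"none","emotion":"mixed"}]'
)
batch = parse_batch(raw)
print(batch["ytc_example"]["reasoning"])  # deontological
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: each coded record can be fetched in constant time once the batch is parsed.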