Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Here in Europe, legislation on AI goes much further than in the US or China.
I agree that we have to set rules that force severe testing, stress testing, and limits, applied in depth and over a long time, to the next iterations of AIs.
Just as we do with new vaccines or medicines, AI has to be treated with the same importance.
AI also has to be banned from most military applications, and no AI should make decisions on high-impact topics.
AI is an excellent tool for science, medicine, and other fields, and a tool it must remain.
It is, and will be, an excellent advisor in high-stakes areas such as politics or even the military, but not a decision maker on its own.
So, yeah… press your politicians, because private hands will always put profit before the risk of trouble. If something can be profitable, they will take the chances much more readily than public institutions will.
And if China wants to risk the consequences, that's up to them.
Eventually, those “consequences” will happen. There is no escape: all new technologies, such as gas, cars, electricity, and the printing press, had their doomsayers, and they also had their “broken arrow” incidents that forced us to limit, control, and improve.
AI will follow the same path. Something dangerous (I hope not to humans) will happen, and then we will react.
But if that can be prevented, or even avoided, through careful testing and design, so much the better for our future.
youtube
AI Governance
2025-08-26T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzxhyJPjFMsVi_d8wx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzRY6Ledhlx7x4Tkq14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyUmtb8E8SWqLsflj94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwIp49f0pWyggDuvW14AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzpEJ7k5p0qqSr1Ji14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
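A response like the one above can be checked programmatically before the codes are stored. The sketch below is a minimal validator, assuming the four dimensions shown in the coding table; the allowed values per dimension are inferred from the outputs observed on this page and are likely not the full codebook.

```python
import json

# Allowed values per dimension, inferred from the outputs shown above.
# The real codebook may contain additional categories (assumption).
ALLOWED = {
    "responsibility": {"none", "developer", "government", "ai_itself"},
    "reasoning": {"consequentialist", "virtue"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"indifference", "fear", "approval", "mixed"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and validate each coded record."""
    records = json.loads(raw)
    for rec in records:
        # Every record needs an id plus one value per coding dimension.
        missing = ({"id"} | ALLOWED.keys()) - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing fields {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec[dim]!r}")
    return records
```

In a real pipeline the same check would also verify that every requested comment ID appears exactly once in the response, since batch-coding LLMs occasionally drop or duplicate items.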