Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Until we dismantle systems of inequality, AI will just be another tool for the r…
ytc_UgyNcyr-J…
They are lying. The AI is doing exactly what they programmed it to do. There is…
ytc_Ugz_HszV9…
I don’t even know why people are even trashing on her Ai outfits. They look fine…
ytc_UgwZODXxM…
Ai is here to predict the mind of humans untill it start to read 📚 your mind th…
ytc_UgzD0P_Zk…
Why do you first say "at risk of automation" and then "to be automated"? And can…
ytc_UgzDh6kzF…
They want to convince you that it's AI that controls everything. But its another…
ytc_Ugy2z2Ag8…
Not only OpenAI but for all US AI companies. The Chinese showed that the current…
rdc_m9fxjqc
It’s such a bizarre capitalist race… to socialism. If there’s no Labour, then t…
ytc_UgzbsUPKQ…
Comment
Another expert commenting on DW said we should limit its proliferation in the same way we limited nuclear weapon proliferation. I like the analogy. Clear seeing of the fact that this, like most technological advancements, can and will be weaponised by sick people.
But about the agentic/Terminator scenario: I didn't quite get the motivation of these bots even in the movie. What incentive would they have to act toward any goal if they're not sentient or dependent on biological resources and feelings like we are? And then: how would AI create something without opposable thumbs? Instruct a 3D printer to print it hands with opposable thumbs so it can make whips and chains to enslave humans? How did this scientist explain how an agentic AI may be motivated, and why?
I find it most scary to think of a dictator who invests massively in these things, and then we get a 1984 scenario.
youtube
AI Governance
2023-05-03T05:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxOJMPK2xWs7ZtveBh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx73ZYMkpiP3unFZ-B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxd2H-47YVL7nRn5Vl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy1gZTucLGijAbeZEZ4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzZ2UtNpEgRG4iDNZ94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxKD3xL9CcoGeUGPox4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzySmA_FV4w3rY-VXV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzE64Cmq93JxlXmx0x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxXITTu92mBuJRJCIN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzqHkUZ8udvK-GF8MJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
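The "look up by comment ID" step above amounts to parsing the raw LLM response as a JSON array and scanning it for a matching `id`. A minimal Python sketch follows; the `lookup` helper and the key check are illustrative assumptions, not part of the actual tool, and the single record shown is taken from the response above:

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment,
# each carrying the four coding dimensions plus the comment ID.
raw_response = """
[
  {"id": "ytc_UgxXITTu92mBuJRJCIN4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Every record is expected to have exactly these keys (an assumption
# inferred from the response format shown above).
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def lookup(records, comment_id):
    """Return the coding record for one comment ID, or None if absent."""
    for rec in records:
        if rec.get("id") == comment_id:
            return rec
    return None

records = json.loads(raw_response)

# Validate the schema before trusting any record.
for rec in records:
    assert set(rec) == EXPECTED_KEYS, f"malformed record: {rec}"

row = lookup(records, "ytc_UgxXITTu92mBuJRJCIN4AaABAg")
print(row["responsibility"], row["emotion"])  # distributed fear
```

The schema check matters in practice: LLM output occasionally drops a key or adds an unexpected one, and failing loudly at parse time is cheaper than rendering a broken coding table.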