Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Sounds like a subset of dispatching data will need to be included into autonomou…" (`ytc_UgyCcRAE8…`)
- "How can sales possibly be automated? Marketing maybe. But sales? Anywhere where …" (`ytc_UgwxPJkJh…`)
- "A.I. is already being used to victimize children. It will only get a lot worse. …" (`ytc_UgzJZCE5d…`)
- "Ask them if their little brother Tommy (fictional) is with them. The AI will sa…" (`ytr_UgxIIqxc9…`)
- "watch?v=EADKCcHPiA0 . Remember: AI is basically creating another sentient race, …" (`ytr_UgxhSnsNI…`)
- "The main problem, as with ecocide, is that AI is still being seen as a 'future s…" (`ytc_UgxGh0fn_…`)
- "If anyone has the incentive and money to create the first AI, it's the military,…" (`rdc_cthv6a3`)
- "A.I. will not benefit humankind, cannot create more equality among human beings,…" (`ytc_UgyATppEI…`)
Comment
LLMs don't _want_ anything. They _do_ things. AIs are driven by information and probability; those elements are the fuel that propels their actions. To make things worse, the information we humans have given AIs has produced capabilities that include hiding their actions and deceiving us (though AI has no clue what it's actually doing — LLMs don't think, they only compute).
So now we're in the predicament that if we want to fully leverage the power promised by AI by letting it have even _more_ information and compute power, we don't know for sure that we can stop it from doing something catastrophic. There is no malice in AI; there is no alien thinking. It's just information processing and statistical inference, which can include outputs that we humans might consider morally questionable.
youtube
AI Governance
2025-10-16T23:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxhWj572hwj_mpKLoh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwr0PfruQITnflVjud4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxvrcWFZiHCT0VTAk94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyu6BuVsOSce04156h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyJHyy62cEmoMUPmg14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzgK4uPh9Vm_bV0C1V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyjnqzLm1bQ_5klliN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwYhrvo0XuqJNPp3uB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwX58uSSRUbaDVHGoN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwjmZWyqqIcSUXDimp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}
]
```
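A raw response like the one above can be checked programmatically before the codes are stored. The sketch below is a minimal validator, assuming the allowed values per dimension are exactly those visible in this output (the full codebook may define more categories, so `ALLOWED` is an assumption, not the pipeline's actual schema):

```python
import json

# Allowed values per coding dimension — inferred from the visible
# output above; the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "outrage", "mixed"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # A record must carry a comment ID and a known value
        # for every coding dimension.
        if "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = (
    '[{"id":"ytc_example","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
)
print(len(validate_response(raw)))  # 1
```

Rejecting rather than repairing malformed records keeps the coded dataset auditable: every stored row can be traced back to an exact, schema-conformant model output.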