# Raw LLM Responses
Inspect the exact model output for any coded comment.
## Random samples
- `rdc_gbi23g0`: "Too much focus on Trump, he's just a puppet, Republicans will replace him with s…"
- `rdc_no2zn0u`: "Also, governmental policy, many governments in their current term globally are s…"
- `ytc_UgwBm1eHy…`: "I feel like the topic of environmental safety, climate change was not enough tal…"
- `ytc_Ugw1qAQ4N…`: "Plus, in companies, where they make high risk software like databases, and subsc…"
- `rdc_fctp1kv`: "Well there getting fired and replaced by unpaid employees. Same goes for the res…"
- `ytc_UgynrKbMa…`: "The sad thing is that the stuff you peddle will indeed be believed by most peopl…"
- `ytc_Ugxn2H7mn…`: "Built on native lands? What does that mean? Your leaders make all the deals then…"
- `ytr_Ugz-cKbXL…`: "@jameshoffmann6825 and here comes the guy who has never written a line of code …"
## Comment

> I think that the arguments were good and we should continue trying to create AI. The problem with AI though, would be that they would have no moral-compass, that's something very human, and humans when allowed ignore their moral-compass' the can become evil.
>
> Take psychopathy for instance, basically a mental disorder where a person does not have a moral compass. All robots would be psychopaths, meaning that many of them would also be so-called 'bad people'.

Source: youtube, 2013-06-15T12:0…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
## Raw LLM Response

```json
[
{"id":"ytc_Ugz5at2qWXko-gJslq54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw54FdmdJoGjVY7w7F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyFPoz7QKeKlPiHMvx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyX6HSB_4np6z6G0pN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwXilX8rczL5TK0G0h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzZI61W3114eCEJTr94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxYAQEqj25JsaHsBu14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwBGaAKTgfbnk-e6Kt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwjkZ2eq_Aq0rd5g854AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugw9iHHqXlorN8EPMVJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
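A response like the one above is a JSON array of per-comment coding records. A minimal sketch of how such a payload could be parsed and sanity-checked in Python follows; the allowed values per dimension are inferred only from the examples shown here (the real codebook may define more labels, so `ALLOWED` is an assumption):

```python
import json

# Assumed label sets, inferred from the sample records above;
# the actual codebook may contain additional values.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "distributed", "government"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"indifference", "fear", "approval", "resignation", "mixed", "outrage"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject records with unknown labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={value!r}")
    return records

# Example usage with a single hypothetical record:
raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"fear"}]')
records = parse_coding_response(raw)
print(records[0]["emotion"])  # prints "fear"
```

Validating labels at parse time catches the common failure mode where the model invents an off-codebook value, rather than letting it flow silently into downstream tallies.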