Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I think the premise of the video is flawed. A surprising amount of the industrial machinery that keeps our world running would be quite efficient at killing humans. Robert Miles has an excellent video on convergent instrumental goals. The short version is that AI will likely conflict with humans because AI will want to use resources that we also want to use. Banning weaponised AI won't stop that from happening. It will keep us from getting ready for it. We should not ban AI weapons, we should learn how to manage them safely.
youtube
2019-04-09T00:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxwDmRv-1TI3gqTduB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyLuozTlt3q_RgYkRB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzCBEB2HvTDQAatp6N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwpuCDVf3uqAOjVgf14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxm_8_T4f3JRBa97yl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzm8FhTsAR23JSA7WF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzQutxwzoqfvnNwotN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw3K8SFlnwelU-0KYN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxNsufN4wxM7DEn-w94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzRP22GnRtJOrgmH_x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
```
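The raw response is a JSON array with one record per coded comment, keyed by comment ID and carrying the four coding dimensions. A minimal sketch of parsing and sanity-checking such a response is below; note the allowed value sets are inferred only from the samples shown on this page, not from the actual codebook, which may define more categories.

```python
import json

# A truncated sample of the raw model output shown above (first two records).
raw = '''[
  {"id":"ytc_UgxwDmRv-1TI3gqTduB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyLuozTlt3q_RgYkRB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]'''

# Allowed values inferred from this page's samples; hypothetical stand-in
# for the real codebook.
DIMENSIONS = {
    "responsibility": {"none", "ai_itself", "company", "government"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"fear", "approval", "outrage", "resignation", "indifference"},
}

def invalid_ids(records):
    """Return IDs of records with a missing or out-of-codebook value."""
    bad = []
    for rec in records:
        if any(rec.get(dim) not in allowed for dim, allowed in DIMENSIONS.items()):
            bad.append(rec.get("id"))
    return bad

records = json.loads(raw)
print(invalid_ids(records))  # [] — both sample records use known values
```

Checking the response before storing it catches the common failure mode where the model invents a category label not in the codebook.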