Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgzZanMv4…`: "Sooo we all basically agree chances of AI turning against humanity arent 0% in f…"
- `ytr_UgxMrPD-w…`: "Well, ChatGPT said it was 'attributed' to Venkman; i.e., somebody somewhere said…"
- `ytr_UgyU2UngC…`: "You miss the point: self-driving cars are \"sold\" to us as being BETTER than huma…"
- `ytr_Ugw2j4ucj…`: "@SusanSingsSongs You already have a foundation in philosophy, which is good, all…"
- `ytc_UgwfM78o7…`: "Of course banning ai centers will be popular among Republicans. China isnt pouri…"
- `ytc_UgzalCrfl…`: "i am sorry to delude lot of people but this guy oversimplified a fight between …"
- `rdc_ohmxxso`: "Tech is only going to get worse for employment. It is the industry most incentiv…"
- `ytr_Ugy5woM8i…`: "Its because its a bad idea. Why would you want to replace management with AI? Ca…"
Comment
We need to have an honest and real conversation about the dangers of AI. Is it ruining people's brains, will it or can it replace people and ultimately can or will it harm people? A lot like climate change, we are getting the warnings now and there are predictions that things will get worse. So, there must be a way to come up with proper AI defense programs that will make sure that the real risks and threats never come about. If anyone says another country is going to do something bad about it, then back away and just work on and focus on proper defense should such a situation appear. How do we defend ourselves in that type of situation and put those ways into practice? We are not helpless here and I don't want people to be frightened. The best solution is perfect and meticulous defense just like the Army would defend us in war, we need a strategy and group who are ready to de-weaponize AI by understanding what can and needs to be done in order to squash any threats.
youtube · AI Responsibility · 2026-02-17T00:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxGLiK4eg0yoNwSDfp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxINwhXsCbJcfB_lzF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw4A_rohtSERDwJbOh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwhuuW1iRk7uJdYYE14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugzeug4D69m3P3NddPF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw0JdU1OQ0qnuBHgD14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzAZjBR17KkscLYzSZ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyNlfwHcNWCIYGdwX14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwrzlRDwNeXACGF86F4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgyMJsWhfwzoJGOKOtp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
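A raw batch response in this shape can be parsed and indexed by comment ID before the per-comment codings are stored. The sketch below is a minimal, hypothetical example (the function name and validation logic are assumptions, not part of the tool shown above); it uses the four dimensions from the Coding Result table and is truncated to two entries for brevity.

```python
import json

# Raw batch response in the format shown above (truncated to two entries).
raw_response = """[
 {"id":"ytc_UgxGLiK4eg0yoNwSDfp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzAZjBR17KkscLYzSZ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# The four coding dimensions from the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a batch coding response and index it by comment ID.

    Raises ValueError if an entry is missing any of the four dimensions,
    so malformed model output fails loudly instead of being stored.
    """
    entries = json.loads(raw)
    coded = {}
    for entry in entries:
        missing = [d for d in DIMENSIONS if d not in entry]
        if missing:
            raise ValueError(f"{entry.get('id', '?')}: missing {missing}")
        coded[entry["id"]] = {d: entry[d] for d in DIMENSIONS}
    return coded

codings = index_codings(raw_response)
print(codings["ytc_UgzAZjBR17KkscLYzSZ4AaABAg"]["emotion"])  # fear
```

Indexing by ID is what makes the "look up a specific coded comment" view above possible: each row in the result table is just one entry of this dictionary rendered as a table.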