Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
AI is massively overhyped. Its is a mass delusion that is reinforced by companie…
ytc_Ugzt7PR65…
Can you hold video? You don't address the scaling and the amount of AI data that…
ytc_Ugw0Rkt5d…
Wow, she is so arrogant and dismissive of any other idea except her own...and I …
ytc_Ugx8rAg0D…
So ai is replacing jobs, how will people be able to work and afford food or hous…
ytc_UgxmUEIJe…
I've been using both ChatGPT and Claude for a month straight for my entire softw…
ytc_UgwnUXVN0…
The AI that you always have to talk to is an annoying MFer though. No, I don't w…
ytr_UgxLnoDim…
AI already does this. Decision Information Systems have been around since before…
ytc_UgySJ7nwR…
It never will
Regardless of whatever others may say
Art isn’t just a pretty p…
ytr_Ugx6Fn6bb…
Comment
But when we think about AI and why they would want to kill us, we're thinking from a biological point of view.. we're thinking they're going to want to be the apex predator (i.e. reference to the Chicken). But an AI won't be designed to have biological and territorial and social needs... unless we specifically program that. If we are just making something super intelligent, why should it have the same motives as humans? Like gaining territory and being superior. What motives could a superior intelligence even have without the biological impulses that drive it?
Just as an example - let's say it may need more materials to make more of itself - why would it want that - it would need its own "reason for being" to want to "propagate" itself.
youtube
AI Governance
2025-06-16T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgwQ8eSwBsC_CtVA9H94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyUOEkZlek8P1GptZd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyYKPhC4bIVezzek3J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwPRrLfbYjU65rkC2h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx9rZxdbM76lfihsht4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw8uTBXqsg_MjAv3h54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzIhNVeP1DlCY4-L014AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwDQ-MhaxCh4OO07v54AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyE-6M-aIdYY6WnSaV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwxVe07_-a_RVhK_QN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}]
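The coding table above can be recovered from a raw batch response like this one by parsing the JSON array and matching on the comment `id`. A minimal sketch, assuming the response is a well-formed JSON array with the field names shown (`responsibility`, `reasoning`, `policy`, `emotion`); the `coding_for` helper is hypothetical, not part of the tool:

```python
import json

# Raw LLM response: a JSON array of per-comment codings, abbreviated
# here to two entries from the batch shown above.
raw = (
    '[{"id":"ytc_UgwQ8eSwBsC_CtVA9H94AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"indifference"},'
    '{"id":"ytc_UgyUOEkZlek8P1GptZd4AaABAg","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
)

def coding_for(raw_response: str, comment_id: str):
    """Look up one comment's coded dimensions by ID in a raw batch response."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            # Everything except the id is a coded dimension.
            return {k: v for k, v in entry.items() if k != "id"}
    return None  # ID not present in this batch

print(coding_for(raw, "ytc_UgwQ8eSwBsC_CtVA9H94AaABAg"))
# {'responsibility': 'none', 'reasoning': 'unclear', 'policy': 'none', 'emotion': 'indifference'}
```

If the model wraps the array in extra text or emits malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is one reason to keep the raw response inspectable per comment as this view does.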