Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Neil, that is one fundamental thing you get wrong about AI, most people get it w…" (ytc_UgwcCq8Ke…)
- "I've been doing art consistently for about a years now and this AI situation rrl…" (ytc_UgxlUevEd…)
- "Ahhh u almost had me with the CGI, but u couldn’t quite get the robot’s shadow o…" (ytc_UgwW1G40n…)
- "If you some how believe it, robots are controlled by its system so if that happe…" (ytc_Ugzt9f-Du…)
- "@Azgeda_ Yes, data entry and certain other jobs will go away likely, replaced by…" (ytr_UgzbVgOeZ…)
- "Yes, but these robots need humans, even without great intelligence, …" (translated from French) (ytr_Ugx0LQxyM…)
- "People who use generative AI to make pretend art believe they deserve the clout …" (ytc_UgyktDJv3…)
- "3:03 as if we are aware how much work was put into a work of art. But also, AI d…" (ytr_UgxkyUSf7…)
Comment
Even Chat-GPT agrees with Elon (we should be terrified). Chat-GPT: “AI powered weapons would be responsible for selecting targets, determining the timing and method of attack and possibly even launching the attack WITHOUT HUMAN INTERVENTION. AI systems making decisions could lead to unintended or disproportionate harm to civilians or non-combatants OR even accidental war” (This was a Chat-GPT reply to me a few days ago. It had used the term “AI weapons”, which I had never heard, and I asked Chat about the term).
youtube · AI Governance · 2023-04-18T14:3… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzOU59ZojLMQCquohd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxQYgGEr9pkhrr-Q0d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz6DD-gbXHEoqOIVYN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz3S-HxLjN9IKb1C5t4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxZ1PGrj9W-oNsQi9B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwVSUeF7HVTbHBq3md4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyoY5r_-RzjL8V7SId4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgycEPd4Dw9zSEwX_vl4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzYLpkGfUqmeclIR6p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwQBEPt5cA6wgH61Zx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
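The raw response is a JSON array of records, one per comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed into an ID-keyed index for the "look up by comment ID" view, assuming the category values seen in this sample cover the code book (the `SCHEMA` sets and `parse_response` helper are illustrative, not part of the tool):

```python
import json

# Allowed values per coding dimension, collected from the sample output above.
# This is an assumption: the real code book may define additional categories.
SCHEMA = {
    "responsibility": {"government", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "mixed", "unclear"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coded records) into an
    id -> record index, rejecting any value outside the known categories."""
    index = {}
    for rec in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} value {rec.get(dim)!r}")
        index[rec["id"]] = rec
    return index

# Look up one coded comment by its ID (record taken from the response above).
raw = ('[{"id":"ytc_Ugz6DD-gbXHEoqOIVYN4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = parse_response(raw)
print(coded["ytc_Ugz6DD-gbXHEoqOIVYN4AaABAg"]["policy"])  # regulate
```

Validating against a fixed category set catches the most common failure mode of LLM coders, namely inventing labels outside the code book, before a bad record silently enters the coded dataset.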