Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "AI is not true intelligence — it's a reflection of human thought, built on ou…" (ytc_UgxLRD2xc…)
- "If you're in NYC, flying a drone at all is illegal, as I understand things. Ny…" (rdc_kw847q0)
- "@Solaria-MarinQ My point is that many people that are against Ai simply go to ins…" (ytr_UgxtWVyYV…)
- "What's the AI name I wanna use it on my friends, also womp womp u shouldn't have…" (ytc_UgzaIoJ0T…)
- "AI should be programmed to have zero self-preservation and to not be able to rep…" (ytc_UgxIwPDKu…)
- "The danger of Ai is the collective algorithm… meta ai put me under a damn simula…" (ytc_Ugyj2Mh2N…)
- "Its so ironic that gen ai wouldnt work if artists didnt exist and if all artists…" (ytc_UgwEf4IHp…)
- "@katliit9442 I disagree. I didn't say that it was 100% AI's fault, but I don'…" (ytr_Ugz9gYdZY…)
Comment
The risk of a paternalistic super smart AI is humanity being husbanded or eventually bred like cattle. From an AI that would view humans with enmity, our outlook would be quite grim. We humans need to be very guarded about our leadership trying to gain complete control of us using AI
youtube · AI Governance · 2025-06-20T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgyqS0KHa9rLuRDmJ7N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"concern"},
{"id":"ytc_UgxrPOUK6AUSfyvMq0Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyYwkvmsPln3gXzPGR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyaB9KGfGdaGQcrjP94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz1b9GcOKGyEZ_zoUR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwvbNonrkA3uwvEdkF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxA16LCaTxjQBvaFR94AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzZY5SdAHSFHnHOnD14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxzpOCNNUDRc0f6h5B4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxnVaW8bXswnn94hKx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
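The "look up by comment ID" feature above amounts to parsing a batch response like this and indexing its rows by `id`. A minimal sketch of that lookup, assuming only the field names visible in the sample payload (the `raw_response` string and `index_by_comment_id` helper are illustrative, not part of the tool):

```python
import json

# Hypothetical batch response, shaped like the sample above:
# a JSON array of per-comment coding objects keyed by "id".
raw_response = """
[
  {"id": "ytc_Ugz1b9GcOKGyEZ_zoUR4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyqS0KHa9rLuRDmJ7N4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "concern"}
]
"""

def index_by_comment_id(payload: str) -> dict:
    """Parse a batch LLM response and index the coding rows by comment ID."""
    rows = json.loads(payload)
    return {row["id"]: row for row in rows}

codes = index_by_comment_id(raw_response)
print(codes["ytc_Ugz1b9GcOKGyEZ_zoUR4AaABAg"]["policy"])  # regulate
```

With the rows keyed by ID, pulling up any coded comment's dimensions (responsibility, reasoning, policy, emotion) is a single dictionary access rather than a scan of the array.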