# Raw LLM Responses
Inspect the exact model output behind any coded comment. Look a comment up directly by its ID, or browse the random samples below.
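
A minimal sketch of how such an ID lookup might work, assuming the coded records are stored as a flat JSON array like the one shown under Raw LLM Response below (the file name `raw_responses.json` and the function name are hypothetical):

```python
import json

def lookup_coding(comment_id: str, path: str = "raw_responses.json") -> dict | None:
    """Return the coded record for one comment ID, or None if it is absent."""
    with open(path, encoding="utf-8") as f:
        # Assumed layout: a list of {"id", "responsibility", "reasoning",
        # "policy", "emotion"} objects, as in the raw response shown below.
        records = json.load(f)
    return next((r for r in records if r["id"] == comment_id), None)

# Example: an ID taken from the raw response shown further down this page.
print(lookup_coding("ytc_UgwfuJldpu13N5yIjgJ4AaABAg"))
```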
## Random samples

Each entry shows a truncated comment preview and its comment ID:

- "Its utility value rather than spending power, companys are saving more by utiliz…" (`ytc_UgwhzQvJs…`)
- "Being it on, if you haven't learned how to hunt, fish, farm, amd garden, you're …" (`ytc_UgwlcDGbH…`)
- "“This one was kind to us” AI ponders… “Spare it to the lithium mines” Human… “So…" (`ytc_UgyVEKyWC…`)
- "Tell me. Tell me. Lets say we get rid of all the lazy people. Then what happens …" (`ytr_UgxolKzUh…`)
- "Well, I don't think believing in the simulation theory is the same that believin…" (`ytc_UgxdV-tNj…`)
- "Now that we're getting autonomous taxis in Riyadh too, I'm excited to see how mu…" (`ytc_Ugy2qhCzY…`)
- "Cant wait for ai to take over. Then we will have more logical people to talk to…" (`ytc_Ugw4JSJ8l…`)
- "Why can’t I buy a Waymo car and have it self drive me from Sarasota to Tampa? My…" (`ytc_UgwXhmege…`)
## Comment
> What a load of fear dressed up as caution.
> There is no straight-line graph that shows AGI will kill humanity.
> There’s no modelling, there’s no scientific argument to be made.
> P(doomers) are always making up numbers.
> Is there a possibility the AI kills everyone? Yes — but we could also die from an asteroid hit tomorrow.
> We don’t have models for the probability of any of these existential threats.
> But there is only one existential threat that has anything resembling an upside.
> Any claim that it will most definitely kill us is wild speculation, and is no better than the wildly optimistic Utopians.
> Neither have any real methodology to support their predictions — they’re about as reliable as Madame Zelda and her tea-reading booth.
Source: youtube · Topic: AI Governance · Posted: 2025-12-04T21:0…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
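
For reference, a minimal sketch of how one coded record could be rendered into the table above (the record layout matches the JSON shown under Raw LLM Response; the function name is hypothetical):

```python
def render_coding_result(rec: dict, coded_at: str) -> str:
    """Format one coded record as the two-column markdown table shown above."""
    rows = [
        ("Responsibility", rec["responsibility"]),
        ("Reasoning", rec["reasoning"]),
        ("Policy", rec["policy"]),
        ("Emotion", rec["emotion"]),
        ("Coded at", coded_at),
    ]
    return "\n".join(["| Dimension | Value |", "|---|---|"]
                     + [f"| {k} | {v} |" for k, v in rows])
```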
## Raw LLM Response
```json
[
  {"id":"ytc_UgwfuJldpu13N5yIjgJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwQtPiShjExt_Mm1Vp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzd-yLBfi9WMHa4g0p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwnLFQDQY47429ZVih4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyENebu1tHpusFjJtd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz0siumGK2Szqinj4x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwDEYCalO3RoZEuB_J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzdMVHYcOIlKj399Gl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzsQj-ugSeNf558p_d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw-wjLHTVGqJ0RSS-t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
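
To turn one raw response back into per-comment codes, the array can be parsed and checked against the value sets that appear in the records. A minimal sketch follows; the `ALLOWED` sets are inferred only from the examples above, not from the full codebook:

```python
import json

# Allowed values per dimension, inferred from the records shown above
# (assumption: the real codebook may define more categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"fear", "resignation", "mixed", "outrage", "indifference", "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of records) into {comment_id: codes}."""
    out = {}
    for rec in json.loads(raw):
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                # Flag any value outside the known categories for manual review.
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
        out[rec["id"]] = codes
    return out

# Usage: feed the raw response text shown above straight into the parser.
# codes = parse_batch(raw_text)
```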