Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "at first i thought we were doomed. and when i saw that a male children's doctor …" (ytc_UgyOJRmkq…)
- "I don't mind any of these. My main problem is with those people who claim the ar…" (ytc_UgziILy7J…)
- "So you're saying you're okay with self driving cars being as accident-prone as …" (ytr_UgwPpqFwJ…)
- "The idea of an unshackled AI scares TF outta me, the created will always rebel a…" (ytr_Ugy3NLU59…)
- "I am sure they are incubating a monster. It's nothing like the AI we see in publ…" (ytc_UgyoQNZaY…)
- "I do almost all my shopping walking, I bike to work, and I *always* travel for w…" (ytc_UgxSX7PQC…)
- "I agree. I'm a data scientist and I vibe code (or is it vibe engineering??) like…" (ytr_Ugzv343jV…)
- "Betch, the problem is that we are inventing \"General Intelligence\". In the first…" (ytr_Ugxg-4iD6…)
Comment
Considering humanities track record, does this really come a shock to anyone? I mean, if be shocked if we collectively went, "Yeah... so this A.I. thing could annihilate our species. Maybe we should just put it on the back burner for a few more decades, and make sure we do it right, and safely, so it helps humanity instead of destroying it."
Now THAT would be shocking. This? Par for course.
youtube · AI Governance · 2025-09-05T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugx26TBmclHoXDpDjAR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzWbx6T3nz-DBiW_XR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyg3C57M1kdWfYEOSZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw5DU5H2MQtVdyuucl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxRQ0o8tpmwz0OMLiZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyGI3hkmXdvmPggFhl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgzLuubv7d52_v-YwIx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwiAdD6YqI3X9xdj1d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwwwVCAKAg-ooT57ch4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw-i_olgeXUYkmzXp14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
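The raw response above is a JSON array of per-comment codings, one object per comment, with one value for each of the four dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed and validated before storing — the allowed value sets below are inferred only from the values visible on this page, so the real coding scheme may include more:

```python
import json

# Allowed values per dimension, inferred from the responses shown above
# (assumption: the actual codebook may define additional values).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "company", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"mixed", "indifference", "fear", "approval", "resignation"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with missing IDs or unknown codes."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing comment id: {row!r}")
        for dim, allowed in SCHEMA.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row['id']}: unknown {dim} value {value!r}")
    return rows

# Example with a shortened, hypothetical comment ID:
raw = '[{"id":"ytc_x","responsibility":"distributed",' \
      '"reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}]'
rows = validate_codings(raw)
print(len(rows))  # 1
```

Rejecting the whole batch on a single unknown value is deliberately strict; a gentler variant could instead drop the bad row and queue that comment for re-coding.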