Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Is thay why google ai is constantly wrong, seems to intentionally try to spread …" (ytc_UgyFGoKAe…)
- "AI is totally biased - I kept trying to get BING AI to write folk songs about pr…" (ytc_Ugx0x1Kl2…)
- "Babysitters or hairdressers won't become AI. Nobody will allow a robot to babysi…" (ytc_UgzA3LQT8…)
- "The same is happening for medical transcription which once you setup AI to trans…" (ytr_UgzpM2y5T…)
- "What a load of BS! You always know AI bullshit when you hear it when they start…" (ytc_Ugx7b2PjY…)
- "Supply and demand. Humans demand food, shelter, clothing, healthcare. AI demands…" (ytc_UgxY7aRLJ…)
- "I love that you hate it. I consider myself an "artist" by definition and love th…" (ytc_Ugw1t1QTz…)
- "We're glad you were surprised! Sophia really does have some insightful responses…" (ytr_UgzXRWUSS…)
Comment
I'm reminded of reading that Richard Gatling and Alfred Nobel believed that their inventions would end warfare because of the absolute carnage that their new technology could inflict compared to what came before.
Our creations will indeed kill us all, it's just a matter of which inventions and at what time. AI looks like the leading candidate so far. There will never be a non-proliferation treaty on AI.
youtube · AI Governance · 2025-06-29T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwquaengYT7QHoDOEZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzFLKyHSySKN86fECh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgybcpXvEPsiR2dLplp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyIo9pPPyM6ohJVNNZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy9OkmL5CKKMrrp_VB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzfnWk5mo4hL2UtfkF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgykxVOuJeZi_UF5-Pp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxI8JpxZec8Zr5NfO94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzUi4BjIMB_WSdiUDp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzCp5U50_1Sab8zm514AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
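Downstream code has to parse a raw response like the one above and check every row against the codebook before it can populate the coding-result table. A minimal sketch of that step — the `ALLOWED` code sets and the `validate_batch` helper are illustrative, inferred only from the values visible on this page, not the app's actual code or full codebook:

```python
import json

# Allowed codes per dimension, inferred from the samples above;
# the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with unknown codes or IDs."""
    rows = json.loads(raw)
    for row in rows:
        # Comment IDs on this page use ytc_/ytr_ prefixes (comment vs. reply).
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {row.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} code {row.get(dim)!r}")
    return rows

# Hypothetical single-row batch in the same shape as the response above.
raw = ('[{"id":"ytc_UgwExample","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
rows = validate_batch(raw)
print(len(rows))  # 1
```

Failing loudly on an unknown code (rather than silently coercing it to `unclear`) keeps model drift visible when the prompt or model version changes.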