Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
AI is not a "bubble." Calling it one fails to account for all the synergystic elements speeding it's development. We are currently past the event horizon of the singularity's gravity well but we cannot see outside it. AI is a symbiotic entity at present, neither expressly benign nor malevolent. Any morality assigned to it is based on "human input" and human actors. This entire alien intelligence will emerge so rapidly we won't have time for our heads to spin.
youtube
Viral AI Reaction
2025-11-05T00:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx9SY8_0t9C3zKZueJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugw4EcaxwhUQWqzJIDB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyxeT38ZKf2a9HJ7zh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgytyJTIlqCpZNQ5hXF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzz2EZv5YzZOSk1m9l4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwNvoUCT3-sG5zk7mN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxqSnq8y4ngJ7tVB-B4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyalEes2yixf_X1Mnt4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyDLRGiwx_3UYwjuZ94AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwC3LvAqyO85JlCeQl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
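A raw response like the one above has to be parsed and checked before its codes can populate the per-comment result table. The sketch below is a minimal, hypothetical validator: the allowed codes per dimension are inferred from the values visible in this sample batch (the tool's full codebook may include others), and any record with a missing `id` or an unrecognized code is dropped rather than coded.

```python
import json

# Allowed codes per dimension, inferred from the sample batch above.
# Assumption: the real codebook may define additional values.
CODEBOOK = {
    "responsibility": {"company", "developer", "government",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"virtue", "deontological", "consequentialist",
                  "contractualist", "mixed"},
    "policy": {"unclear", "ban", "regulate", "liability", "none"},
    "emotion": {"outrage", "mixed", "fear", "indifference", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip entries with no comment ID
        # every coded dimension must use a known code
        if all(rec.get(dim) in codes for dim, codes in CODEBOOK.items()):
            valid.append(rec)
    return valid

# Illustrative input: the second record carries an unknown
# responsibility code ("alien") and is therefore dropped.
raw = json.dumps([
    {"id": "ytc_x", "responsibility": "company", "reasoning": "virtue",
     "policy": "ban", "emotion": "outrage"},
    {"id": "ytc_y", "responsibility": "alien", "reasoning": "virtue",
     "policy": "ban", "emotion": "outrage"},
])
print(len(validate_batch(raw)))  # 1
```

Validating against a closed code set is what lets a dashboard like this one render dimension values ("distributed", "mixed", "unclear", "indifference") without guarding every cell against free-text drift from the model.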