Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I understand AI's potential for fear, as described by the "little robot" example…" (ytc_UgxmhnkpF…)
- "Didn't realize this was a guy preaching to us about our flawed and racist system…" (ytc_UgxlMes3W…)
- "I feel like this can be a survivorship bias problem. We don't know how many peop…" (ytc_UgzvwazGx…)
- "If I'd discover a friend of mine is abducting children and murdering innocent pe…" (rdc_js0ajc4)
- "@HangryKitsune Thats a good thing, by no means can AI do anything bad look up th…" (ytr_Ugyj1agzq…)
- "The true problem of AI is Human Alignment: humans fight each other. Competing AI…" (ytc_UgyrHGB8h…)
- "While no general-purpose Artificial Intelligence (AI) has directly killed a huma…" (ytc_UgyJE-si3…)
- "All I can say about self driving cars is, just because we can, does not mean we …" (ytc_UgxhAkINW…)
Comment

> Great conversation. I encourage the team to look into Bittensor, at least in principle, to assess if this may be a better way of developing and deploying AI. Centralized power is always gonna be misused regardless of the kind of power, in that way humans are probably more dangerous than the AI they're building... we need a systematic and more democratic approach to the whole topic imo... would be a great show I think

Source: youtube, 2026-04-12T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
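The dimension values in this table come from a batch JSON response like the one shown under "Raw LLM Response" below. A minimal lookup sketch, assuming responses arrive as a JSON array of per-comment objects keyed by `id` (the `index_by_id` helper name is hypothetical, and `RAW` uses two rows copied from the sample batch):

```python
import json

# Two rows copied from the sample batch response shown below.
RAW = '''[
  {"id":"ytc_UgxYUGbpEtlvcRuXixl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugwy7-ljje1h3uhy1CZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]'''

def index_by_id(raw: str) -> dict[str, dict]:
    """Build a comment-ID -> coding lookup from one raw LLM batch response."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_by_id(RAW)
print(codings["ytc_Ugwy7-ljje1h3uhy1CZ4AaABAg"]["policy"])  # → regulate
```

With such an index, rendering the table above for any comment is a single dictionary lookup by its ID.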
Raw LLM Response
```json
[
{"id":"ytc_UgxYUGbpEtlvcRuXixl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgyrPA_jkUSlN3Y2w854AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwnttcYaqhN2y8fb4l4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzaEb2hK9rtLHBV8Qd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugwy7-ljje1h3uhy1CZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugy9PT4wX7TfyJH7ZhV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyCgM6xtQfHoYVe87V4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxOqGe1-ypJimjB4zl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgyG5DEzkaOrLmuYbdR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxLQc7SbkhryJmnfjV4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}
]
```
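A raw batch like this can be sanity-checked before its rows are accepted into the coded dataset. A minimal validation sketch, assuming the per-dimension vocabularies below (inferred only from values visible on this page; the real codebook may allow more):

```python
import json

# Vocabularies inferred from the sample batch above; an assumption,
# not the authoritative codebook.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "unclear"},
}

def validate_codings(raw: str) -> list[str]:
    """Parse one raw LLM batch and report any out-of-vocabulary values."""
    problems = []
    for row in json.loads(raw):
        for dim, vocab in ALLOWED.items():
            value = row.get(dim)
            if value not in vocab:
                problems.append(f"{row.get('id', '?')}: {dim}={value!r}")
    return problems

good = '[{"id":"ytc_x","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"}]'
bad = '[{"id":"ytc_x","responsibility":"robot","reasoning":"unclear","policy":"unclear","emotion":"unclear"}]'
print(validate_codings(good))  # → []
print(validate_codings(bad))   # → ["ytc_x: responsibility='robot'"]
```

Flagging off-vocabulary values rather than silently dropping rows keeps the batch auditable: each problem string carries the comment ID, so the offending response can be inspected here by ID.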