Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples — click to inspect

- "It really is odd how the CEOs of these companies could hire more employees with …" (ytc_UgwNqBHJR…)
- "Not the US of course, we will have your results back from the lab in a couple we…" (rdc_fjzl15p)
- "No they cannot take over first and second opinions and real advice. They can rec…" (ytc_UgwbJWRgZ…)
- "Admittedly, some jobs such as Amazon warehouse positions that were very destruct…" (ytc_Ugw1s0jwY…)
- "If you filled chatGPT with every conversation or written thing you’ve ever done,…" (rdc_j5wgt33)
- "I can’t wait til we get away from attributing the success of a company to their …" (ytc_Ugxjf8xbe…)
- "Definitely a \"people problem\". AI is just another tool so far and misusing it is…" (ytc_UgxfDRy-y…)
- "Imagine if future evil AI could also get access to all our past comments to judg…" (ytr_UgxXZiB8u…)
Comment (youtube · AI Governance · 2025-09-05T11:0…)

> Everyone talks about AI causing an apocalypse, but what I don’t get is what would be the motivation for AI to do that? Humans invent, explore, and even fight out of curiosity or survival… but an algorithm doesn’t ‘want’ anything. Why would it bother creating chaos in the first place?
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxN6hKiSB69pWxQj554AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy-d47XuiYOFLABilB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwTWBsoEfsVr7EjvvZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwp-OQArrVuw-kKI3h4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxBeBnmWlnXWYkVxph4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz9q_IcR8noI6CG9QR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzgB699xfuV0XNDvqp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx_XKXrBXGaqdeV6Th4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzN0KAgCsF_D1hMAxx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugza8YILH8C0QEwXpD54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
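A raw response like the one above has to be parsed and checked before the codes are stored. The sketch below is a minimal validator, assuming the allowed category values are exactly those observed in this sample (the real codebook may define more, and the function name `validate_batch` is hypothetical):

```python
import json

# Category values inferred from the sample batch above (assumption:
# the actual codebook may include additional values).
ALLOWED = {
    "responsibility": {"ai_itself", "government", "company", "developer",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed",
                  "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self"},
    "emotion": {"approval", "fear", "outrage", "indifference", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the codebook."""
    records = json.loads(raw)  # raises json.JSONDecodeError on malformed output
    for rec in records:
        missing = {"id", *ALLOWED} - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing fields {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec[dim]!r}")
    return records

# Example with one record from the batch above.
raw = ('[{"id":"ytc_UgxN6hKiSB69pWxQj554AaABAg","responsibility":"ai_itself",'
       '"reasoning":"mixed","policy":"none","emotion":"approval"}]')
batch = validate_batch(raw)
print(len(batch))  # 1
```

Rejecting a whole batch on the first bad record keeps the pipeline strict; a production coder might instead collect per-record errors and re-prompt only the failures.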