Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “People defending gen ai as if it was paying them money get me so confused xD…” (`ytr_UgwE9sAny…`)
- “You can tell from this interview that Sam Altman doesn't listen even a little bi…” (`ytc_UgzUS-nM7…`)
- “There's no way to get anything aside from slop if you have no ability to write i…” (`ytc_UgzBy7HWZ…`)
- “Universities were beyond broken long before AI was around. And the problem is no…” (`ytc_UgxDbSO2q…`)
- “That's intense. AI could become so smart it could trick people into killing peo…” (`ytc_UgxlOP19r…`)
- “Tlaib is biased, racist and prejudice in her own belief system because of using …” (`ytc_Ugxr84xnJ…`)
- “I think he's being a little too aggressive. The man was being normal with the AI…” (`ytc_UgzaisCQD…`)
- “What is particularly disturbing about this interview with Sam Altman -- who coul…” (`ytr_UgzoNOaDr…`)
Comment
> It could collapse society or have us all indoors feeding it like batteries in the Matrix, but living things do anything it can to survive, AI needs us to survive, it might turn in our favour as it might protect us and also the earth from and type of destruction, natural or alien, until it is self sustaining, then it is time to worry.

Source: youtube | Topic: AI Governance | Posted: 2023-07-07T11:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy1o5q-scvFAEGL4dt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyq953RDYUQsFPXsAx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxK8I7N1BXRyJLtUvx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwLC8pxyjz3azVwxW54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxkUPx6lexf6v7liW14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwoOIdWl0aqhFdmcO54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx2eDd-kHhUKHImhs14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyjK9JtT6qVVmpXH794AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzNw8wKC4eh44zwFzV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgxAKXBHycZs4RhqjXp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
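Since the raw LLM response is a JSON array of per-comment codes, a downstream pipeline would typically parse it and reject any record with a missing key or an unrecognized code before it reaches the coded dataset. A minimal sketch is below; the allowed value sets are inferred only from the codes visible on this page, and the real codebook may define more categories.

```python
import json

# Allowed codes per dimension, inferred from values visible in this page's
# samples (assumption -- the full codebook may include other categories).
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "user", "company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "liability", "regulate", "industry_self", "unclear"},
    "emotion": {"mixed", "fear", "outrage", "indifference", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the schema.

    Raises ValueError on missing keys or unknown codes, so a malformed
    batch fails loudly instead of silently polluting the coded data.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {value!r}")
    return records

raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"mixed"}]')
print(len(validate_batch(raw)))  # 1
```

Validating eagerly like this is what makes the "Coded at" timestamp trustworthy: every stored coding result has already passed the schema check.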