Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- AI is just easier and faster to type or write than "algorithm" or anything that … (`ytc_UgxfZUHq3…`)
- it's not possible for a.i. to be conscious. it can only ever run code. processin… (`ytc_UgzIKUC96…`)
- Thank you🙏 1:50:37 Again I can say these podcasts about AI are fantastic awarene… (`ytc_Ugx65Qbv0…`)
- Thank you so much for your courage and strength. I never knew the dangers behind… (`ytc_UgwEAbJJc…`)
- I dont care about Open Ai anymore. Microsofts Copilot is way better in every way… (`ytc_UgzHTqYcz…`)
- Maybe the whole purpose of the evolution of mankind was to give birth to AI. Thi… (`ytc_Ugy7x-G_i…`)
- One for each car doing the violation. And executives should receive any charges … (`ytr_UgxiC1njD…`)
- AI is proof its okay to steal from people but not for people to steal from busin… (`ytc_UgwtgiRhJ…`)
Comment

> It's interesting and like everything, has pros and cons. The pros being that this could help students become more engaged in their education and have their wellbeing tracked to ensure good health. The cons are the possibility of companies and/or the government using these innocent children as a data farm and the additional pressures of trying to satisfy and AI. It's a very thin and dangerous line.

youtube · AI Governance · 2023-05-02T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwNdXrhSbUAQn1BBNN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw9C9Q18Usp8ICx7g54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwaCN1wQqPP1w4wU0F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxxUZy9kuNWUtul0sJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwGRs3GLf5SNbbUcch4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxJiQlC3FU_URUM__54AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzo-X5FFw_zp-FSD9Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwdLz2XYDvuJnlR6E94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgznzJ5DpjIpbWV2T0Z4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx4s3yDfKlUfNkmMQh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
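A batch response like the one above has to be parsed and validated before its rows reach the coding table, since an LLM can emit values outside the codebook. Below is a minimal sketch of that step in Python. The `SCHEMA` sets are inferred only from the values visible in this dump (the real codebook may define more categories), and `parse_batch` is a hypothetical helper, not the tool's actual function.

```python
import json

# Allowed values per dimension, inferred from the codes seen in this dump.
# Assumption: the real codebook may include additional categories.
SCHEMA = {
    "responsibility": {"company", "government", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only rows whose
    values fall inside the schema for every dimension."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

# Example with a hypothetical comment ID.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"mixed","policy":"regulate","emotion":"fear"}]')
print(parse_batch(raw))  # the row survives: all four dimensions are in-schema
```

Rows that fail validation are dropped silently here; a production pipeline would more likely log them or re-queue the comment for another coding pass.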