Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "AI can't tell the time on a clock, can't give proper directions to actual places…" — ytc_Ugz15J23A…
- "Kind of defeats the purpose of teaching and learning if everyone knows nothing a…" — ytr_UgzC9AIUx…
- "with artificial intelligence call AI being use to create video content you just…" — ytc_Ugyd6U_d_…
- "Ethical??? are you kidding? EAch interaction with human make AI better and bet…" — ytr_Ugxrg8bdb…
- "who will buy products and services of all these ai companies if everyone is unem…" — ytc_UgwtV-rwZ…
- "Imagine our mirth when we learned of “Digital” photographers. How can you be a p…" — ytr_UgzlZ3In0…
- "BC you can add 5 million senders all you want but if your software/ai is bad it …" — ytr_UgypDcMxF…
- "I am very much inclined to believe the Jordan Peterson chatbot has a Mind but do…" — ytc_Ugzt-L3X4…
Comment

> Where many may see danger I see hope, I have pretty much given up on politcal leaders taking effective action to address climate change as a result I believe that the ability to maintain a technological society will be greatly challaged in the next cenutry, however should an AI become self aware and realizing this it takes action to influence human behavior in such a way as to avoid the complete collasp of socity, in the interest of its own surival. I kind of see this as our best chance at this ponit. What's your opinion?

Source: youtube · Topic: AI Governance · Date: 2025-09-09T00:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgybPWsjUnB_f8Hdi4p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwXpW4koMmosz2AQrl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwo3y6KXxQPnD71-Et4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx5_8M4yl4DY1Y2bDh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx4ttypNDKLM5H6sJx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"}
]
```
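The raw response above is a JSON array of per-comment coding records. A minimal sketch of how such a response could be parsed back into the dimension table shown earlier, assuming the field names from the response (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the allowed value sets below are illustrative assumptions inferred from the values visible on this page, not the tool's actual schema:

```python
import json

# Hypothetical allowed values per dimension, inferred from the sample
# output above; the real coding scheme may define more categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding dict},
    dropping any row with an out-of-schema value."""
    records = {}
    for row in json.loads(raw):
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            records[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return records

raw = '''[
  {"id": "ytc_Ugwo3y6KXxQPnD71-Et4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"}
]'''
codings = parse_codings(raw)
```

Validating each row against an explicit schema before storing it guards against the model emitting labels outside the codebook, which otherwise silently corrupts downstream counts.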