Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "My phone made me go nuts based on showing me myself based on algorithms till I e…" (ytc_UgwE-9VqH…)
- "There's a big difference between skill and talent. Keeping the passion alive is …" (ytc_UgxmB_nr3…)
- "But what if the AI wrongly diagnoses and results in someone further being injure…" (ytc_UgzuEAs9B…)
- "Highly disagree, as Harari has stated, it is extremely unpredictable and could b…" (ytc_UgzA-xoPA…)
- "CGI Ai slop against a greenscreen.. no thanks.. go back to actual film not carto…" (ytc_UgzsoY6zC…)
- "This is distraction. Even if AI were to become aware it would not do anything ag…" (ytc_UgxN26nvj…)
- "The biggest problem will be the insane (and constantly increasing) prices of rea…" (ytc_UgwjhhNwq…)
- "I think personally people would develop ai to be harmless because why would we a…" (ytc_UgxDhaRo9…)
Comment

> No recognition at all of the importance of human connection in education, especially pre-college. When a student feels cared about, cheered on, recognized as a person of value by a teacher, the result can be and often is transformative. Even if AI could be taught to mimic that, the student would know there's no person there. If they were fooled into thinking of AI as a person, that would be psychologically fraudulent and might have very negative consequences down the road.

Source: youtube · Posted: 2023-05-05T15:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwrgRiz4kZWniyrl2V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzoXdfU1Y9vXAx39qN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxsyFzWN1uIRmjwWg14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwjR6L4jHpRAnn1JYB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzS3i6CdjQMl_2718J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwPdxU2hMnPuqnsi6p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgybOdsb4zNGqpa2Zmd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzcaUlYFDw0v70HqJh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw4wBjijF6bqHOplfh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw4GP_1cKWSA5vOXBF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
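Since the raw LLM response is a JSON array with one object per comment ID, a downstream step would typically parse it and validate each coding before storing it. The sketch below is a minimal, hypothetical version of such a check: the `ALLOWED` value sets are inferred only from the codings visible on this page, not from the project's actual codebook, and `validate_codings` is an illustrative helper, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the sample codings shown
# above (assumption: the real codebook may define additional categories).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference"},
}


def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Each entry must be an object keyed by a YouTube comment ID.
        if not isinstance(row, dict) or not str(row.get("id", "")).startswith("ytc_"):
            continue
        # Every dimension must be present and hold a known value.
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid


raw = (
    '[{"id":"ytc_UgwrgRiz4kZWniyrl2V4AaABAg","responsibility":"developer",'
    '"reasoning":"deontological","policy":"none","emotion":"outrage"},'
    '{"id":"bad","responsibility":"developer","reasoning":"deontological",'
    '"policy":"none","emotion":"outrage"}]'
)
print(len(validate_codings(raw)))  # → 1 (the entry with a malformed ID is dropped)
```

Validating against fixed value sets catches the common failure mode where the model invents an off-codebook label; rejected rows can then be queued for re-coding rather than silently written to the results table.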