Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples
- `ytc_Ugz-0Iywb…`: in before regular people looking to steal a car real quick start pulling over Wa…
- `ytc_UgzPDmIG2…`: Even if the self driving capabilities were fine on multilane roads, roads with l…
- `ytr_UgzCVZnJe…`: I don't get the point at all.. Axioms are not always provable: You go with those…
- `ytc_UgwwOCVD1…`: A use of AI image generators that makes sense to me is for mock-ups and placehol…
- `ytc_Ugw1pfbdO…`: When the WF why files logo in the red circle popped up after that AI story, i th…
- `ytc_Ugzt14V5z…`: I fear the day when superintelligence breaks the next (and possibly last) AI win…
- `ytc_UgwazggDG…`: Never give a robot a gun and stop building them. Oh and never give government a …
- `ytr_Ugz-uxhx_…`: Not if artists fight back and bully AI usage into the ground. We can do what hap…
Comment
I think AI is very useful, but many people use it the wrong way. People stop learning and rely on AI for everything, even for the simplest tasks. People can be really lazy but still demand perfection. Maybe that’s why I use AI differently. I use AI a lot, but I don’t stop learning.
For example, to write this comment, I told my AI: “I want to improve my English, especially my writing. Help me check my grammar and vocabulary. I’ll try not to use Google Translate. I’ll do everything based on my memory alone.”
Of course, I make mistakes (10 grammar mistakes) but AI helps me correct them and explains why.
So, AI is created with good intentions—we just have to learn to use it wisely to make progress in our lives, not to become even lazier.
youtube · AI Governance · 2025-11-18T03:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgymImFukjkKowDi_QB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyXPgfq5rxq5l0eYIh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyflcqIrd_7nxte2wp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw7mEYUujAL8NIVAl14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw7TcqM0tP7DSFZAr94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxLhiyQyqPzH4oE9-d4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz05xUd5JKxSoAqdzx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx0_g7TvHZhyOCF8Et4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy8doUzyuWvAZ4d9rF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"sadness"},
  {"id":"ytc_UgyudAJGKHVyYNg3NI94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"}
]
```
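As a minimal sketch of how a raw response like the one above can be turned into per-comment codings, the snippet below parses the JSON array and indexes it by comment ID, keeping the four dimensions shown in the Coding Result table. The `index_codings` helper and the single-row sample response are illustrative assumptions, not part of the actual tool.

```python
import json

# Hypothetical raw model output in the same shape as the response above:
# a JSON array of objects, one per comment, keyed by "id".
RAW_RESPONSE = """
[
  {"id": "ytc_Ugw7TcqM0tP7DSFZAr94AaABAg",
   "responsibility": "user", "reasoning": "virtue",
   "policy": "none", "emotion": "approval"}
]
"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw response and index codings by comment ID,
    keeping only the expected coding dimensions."""
    codings = {}
    for row in json.loads(raw):
        codings[row["id"]] = {dim: row.get(dim) for dim in DIMENSIONS}
    return codings

# Look up one coded comment by its ID.
coded = index_codings(RAW_RESPONSE)
print(coded["ytc_Ugw7TcqM0tP7DSFZAr94AaABAg"]["emotion"])  # prints "approval"
```

Indexing by ID is what makes a "look up by comment ID" view cheap: each inspection is a single dictionary access rather than a rescan of the raw response.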