Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgwwD0zLV… : "7:39 when I see this host’s face talking, I know it’s a quality content from som…"
- ytc_UgyS_ALcu… : "Ishan, I get your point. I use claude code everyday and it blows me away with th…"
- ytc_UgwIWNO1M… : "LaMDA isn't a sentient AI. It's (to oversimply things for non-tech people) basic…"
- ytc_UgyAf-aEM… : "Only if ai is actually trying to dominate us or ai is just playing around to sca…"
- ytc_Ugy0ZmNaA… : "that first picture being AI scares me the most like now people can fake nostalgi…"
- ytc_UgyR_RsQw… : "Robots will teach us? In what way? If there is someone that will teach humans it…"
- ytc_UgxmJfT8g… : "Did not Steven Hawking,worn us all before he died that technology would take out…"
- ytc_UgwaE6GlF… : "This question is so out of bounds. The answer is AI will make banks ever more po…"
Comment
Humans usually operate under the assumption that you can generally tell when someone is smart. And I would argue this is mostly true. This is what the Turing test relies on - if you can’t tell something isn’t a human (therefore it seems smart), then it probably is.
My opinion is that powerful generative AI sort of scams this system - I have many times been incredibly impressed with it, then handed it a genuinely difficult problem I couldn’t solve, and in my experience little of value suddenly comes out. The point is not that AI can’t do our jobs but that speculating about this is probably completely meaningless. Only the test of time will tell, and so far the test of time is showing our expectations for the current versions were way too high.
Platform: youtube
Category: AI Harm Incident
Posted: 2024-08-12T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyMAqgUbwBrkqAIAIV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwDmLGnNb9A2MmzpId4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzPT8OGmrDkrMUmkHV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyPEVbllg8q0ZZJmN14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy0bf57hLyZiThKZcl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxySjVvxSlo3lyIfut4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz_vByami7VooCO6dV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzGyxTdYCpm7_5XGXF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyDuu1OJHhDouB9ssR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzEzP5BipGyhUt_JaF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
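Each record in the response carries the same five fields shown in the coding-result table (id, responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and tallied, assuming only that structure (the three sample records below are copied from the response above; nothing else about the tool's pipeline is implied):

```python
import json
from collections import Counter

# Three records copied verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgyMAqgUbwBrkqAIAIV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzPT8OGmrDkrMUmkHV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzGyxTdYCpm7_5XGXF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]'''

# The five fields every coded record is expected to carry,
# per the dimensions in the coding-result table.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

codes = json.loads(raw)

# Reject the batch if any record is missing a dimension.
assert all(REQUIRED_KEYS <= rec.keys() for rec in codes)

# Tally one dimension across the batch, e.g. emotion.
emotions = Counter(rec["emotion"] for rec in codes)
print(emotions)  # counts per emotion label across the three records
```

The same `Counter` pattern works for any of the other dimensions by swapping the key; validating `REQUIRED_KEYS` first catches truncated or malformed LLM output before it reaches the tallies.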