Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I do believe AI is dangerous but where is the accountability for adults at minim…" (ytc_UgysmBY68…)
- "there is no human intellectual labor because intellectualism is about thinking n…" (ytc_UgxuVtABW…)
- "The FMCSA cannot both be for safety and also allow autonomous trucks Uh, buddy…" (ytc_UgyhixFp5…)
- "My former wife just last her job at Bank of America to AI and she is freaking ou…" (ytc_UgwtnlUoy…)
- "AI uses ten times more energy than a normal Google search. If you do say thank …" (ytc_Ugw1y3DCt…)
- "I am sorry, but your description is too much simplified. Gen AI can replace many…" (ytc_Ugy2AR7uf…)
- "The video is not real! Each attentive human driver could have prevented the wome…" (ytc_UgwZGJQVF…)
- "I don't get it? What is the benefit for them to AI an uploaded video? Somebody p…" (ytc_UgzDErqxc…)
Comment
The idea of the “sweet spot” makes sense. For example, teaching kids how to use AI responsibly and with optimal outcome. Can they assess things like: What makes a solid prompt versus a poor one? How to check sources to verify the information AI gives. How to assess whether an AI tool is a good one. Used correctly, it can cut down on the time it takes to research a topic to write an essay. But the essay shouldn’t be the end goal. The end goal should be: can a student digest the essay’s information and arguments enough to get up in front of a classroom and explain it - so that they can go to a dinner party and talk intelligently about that topic? Did they absorb it?
Source: youtube, 2025-10-25T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgyKhb16sc90qxIJVfV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugx_RPgN7XZgAQk9xAl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwI6O1Pv_7NMwOaShV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw0E--WqpH4rSZjgdt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxSK8MS3f40Iq7sSdd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwISXdYTiOiSIHLt7l4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzGL--tdWqUVZK_79B4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyGmAUlvedfXi59mnh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx579TbWIy4YPMcZ_N4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgxER7FXzka_rXeCCwJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
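Before a raw response like the one above is displayed as a Coding Result, it has to be parsed and checked against the coding scheme. A minimal sketch of that validation step, assuming controlled vocabularies inferred from the values visible on this page (the `ALLOWED` sets and the `validate_batch` helper are illustrative, not the tool's actual implementation):

```python
import json

# Assumed controlled vocabulary for each coding dimension, inferred
# from the values that appear in this page -- not an official schema.
ALLOWED = {
    "responsibility": {"user", "company", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"approval", "outrage", "fear", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Each record must be an object with a YouTube comment ID.
        if not isinstance(rec, dict) or not str(rec.get("id", "")).startswith("ytc_"):
            continue
        # Every dimension must carry a value from its vocabulary.
        if all(rec.get(dim) in vocab for dim, vocab in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgyExample","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"industry_self",'
       '"emotion":"approval"}]')
print(len(validate_batch(raw)))  # 1
```

Records that fail the check (missing dimensions, off-vocabulary values, malformed IDs) would be dropped or flagged for re-coding rather than written to the results table.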