Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "The end times are coming, ai will replace most of the jobs and if you want to su…" (ytc_UgxWoKwtc…)
- "In South Carolina, when all is said and done, will have 7 AI centers. Noone is s…" (ytc_Ugy_95zCo…)
- "I could be deceived by this. Just how good am I at detecting AI? I can tell obvi…" (ytc_Ugy4-2KTi…)
- "He is stil my enemy. Because eventually he's AI model will take the creative job…" (ytc_UgwFm4yVY…)
- "Considering how far from either "perfect" or "AI" the current LLMs are, this "pr…" (rdc_lzamte3)
- "@pooroldnostradamus Valid concern. I understand I tend to write towards the opti…" (ytr_UgyupyOdL…)
- "I believe that, because all these claims don't match my experience with AI at al…" (ytr_UgxcIplLB…)
- "Neural networks don't store the data are trained to recognize patterns using the…" (ytr_Ugw5q4iHM…)
Comment (source: youtube, 2025-12-20T09:3…)

I made it to 26:58, yeah this was definitely a murder - no question. Not just the copyright issue that's the problem but also studies are showing that you begin a cognitive decline the more you let it do tasks for you. I feel, personally, that letting AI make decisions for you is also harmful even if on the surface it seems harmless. We need to USE OUR BRAINS and not get stuck in the matrix where we are on complete autopilot. I've started to not let anything be automated anymore (other than an alarm clock as needed). I want to retain my intelligence at the very lest for as long as possible and build on it if I can where possible, and I hope everyone else does too. The more we rely on AI the more self-doubt we harbor for our abilities to do even simple things. Know that you can do hard things and you have neuroplasticity so lets keep it so we make smart decisions about our futures and aren't run by computers.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxf-m9N-8uKig7D2rF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyFlK_IRwLwJLa3Yyh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzumymh-FwS_cQlLTp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx15vH0REjLuv8iVZZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyUPuNnZBoO91H8hxp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx-qYhF07jDjvih4Ol4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw4MiIuJN8jFDTPsdN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwkzobPdn52A0rBH2x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxdP--29Z8OB6Ag_GJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxArj8p6-yUefWhgJd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
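The raw response is a JSON array with one record per comment ID, each carrying the four coded dimensions from the table above. A minimal sketch of how such a batch might be parsed and validated, assuming the label sets are the ones visible on this page (the actual codebook may allow more values); `parse_coding_response` is a hypothetical helper, not part of this tool:

```python
import json

# Allowed labels per dimension, inferred from the records shown on this
# page (assumption: the real codebook may define additional values).
RESPONSIBILITY = {"user", "company", "ai_itself", "government", "none", "unclear"}
REASONING = {"virtue", "consequentialist", "deontological", "unclear"}
POLICY = {"regulate", "unclear"}
EMOTION = {"fear", "outrage", "indifference", "unclear"}

def parse_coding_response(raw: str) -> list:
    """Parse the model's JSON array, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if (isinstance(rec.get("id"), str)
                and rec.get("responsibility") in RESPONSIBILITY
                and rec.get("reasoning") in REASONING
                and rec.get("policy") in POLICY
                and rec.get("emotion") in EMOTION):
            valid.append(rec)
    return valid

# Example with one well-formed record (hypothetical ID):
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"virtue","policy":"unclear","emotion":"fear"}]')
print(parse_coding_response(raw))
```

Records with an out-of-vocabulary label are dropped rather than repaired, so a malformed model response degrades to fewer coded comments instead of corrupting the dataset.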