Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
7:14 um no. Using steroids in the gym would be comparable to something like abusing Adderall in school. It'll help you perform better, both physically making it easier and mentally making it more desirable, but at the expense of being unhealthy, addictive, and losing some of that progress upon discontinuing usage. But in both cases you actually can fast-track the growth and keep a good amount of it when you stop.

AI being compared to lifting with robots seems very apt to me. If you're fully using the AI and not actually doing the work yourself at all, that's like having the robot lift for you. The work is done and you experience no growth. If you partially use the AI and partially do it yourself, it's like using some kind of exoskeleton assistance for lifting. You are able to lift more weight, but your body only experiences a fraction of the load, and so you only get the growth equivalent to the load that your body experienced, not the total amount lifted. You would have gotten the same growth if you put the same amount of effort into lifting less weight by yourself.

This is an EDUCATION problem, though. For education the goal is personal growth, not total work done. Doing the work with AI doesn't help at all with that. For employment it's the exact opposite: it's the work that matters and the personal growth doesn't matter at all. AI can still be useful as an educational tool for things like curating educational resources, but that's different from using AI to do the actual work for you.
youtube 2026-01-05T12:3…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxJW6TaQGicvxJ0dFd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyjDo_tH5gYcB-5ZPJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwS6kLE7NTFkT3XAOJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxPXVK0xSXVgxKvha54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy01tNXvL9ETGC-8EB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzhq_OvpDmiRdT6S7x4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzTgxGvOI8kj06lnSh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugzxh1HucVaoSPlNmXx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxBSGJ8YYVLm6j1Yol4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyNzM929dbEQOC1NJR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
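The raw response above is a JSON array of coding objects, one per comment ID, each carrying the four coding dimensions plus the emotion label. A minimal sketch of how such a response could be parsed and matched back to a comment (the record IDs below are taken from the response above; the validation logic itself is an illustrative assumption, not part of the coding pipeline):

```python
import json

# Two records excerpted verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgxJW6TaQGicvxJ0dFd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwS6kLE7NTFkT3XAOJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]'''

# Every coding object is expected to carry these keys.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)
assert all(REQUIRED <= rec.keys() for rec in records), "malformed coding object"

# Index by comment ID so each coding can be matched to its source comment.
by_id = {rec["id"]: rec for rec in records}
print(by_id["ytc_UgwS6kLE7NTFkT3XAOJ4AaABAg"]["emotion"])  # → mixed
```

Indexing by `id` is what lets the "Coding Result" table for a single comment be pulled out of the batched response.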