Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "They asked my ai to recite their own work back to them to prove that my ai stor… (ytc_Ugzd9rEK0…)
- At this point this is wear I want to Take an Axe to both the Inventer and the Ro… (ytc_UgjUMK6SQ…)
- Love how ShortlistIQ fits our HR systems. Makes adapting to AI changes in recrui… (ytc_UgzlTWpSM…)
- It's one thing to play around with A.I while crediting the source but, people ar… (ytc_UgzA4u-AJ…)
- If AI can be programmed to have "MORALS", than humans need not fear... program t… (ytc_UgxuqV95D…)
- Using AI to make your videos about the danger of AI is the real danger here!… (ytc_UgyTQQw9H…)
- OK at this point I just think the person that are answering these questions are … (ytc_Ugz15w7Te…)
- What I keep seeing is that humans and the entire biological universe needs a com… (ytr_Ugwpkg-Rb…)
Comment
Im afraid that the AI usage will end up worsening skills in the long term as seeing the answer isnt the same as learning the answer. If you simply ask and use the answer you will never escape blooms taxonomy's level 1/2 (recall/understand) where the least amount of learning happens. In the past you had to evaluate (level 3) the sources you gathered, analyze their usage for your goal (level 4), and creatively (level 5) integrate it into your code.
Each step up in that heirarchy of blooms taxonomy leads to exponentially more learning, more relationships between concepts created, per unit time. So, unless study technique becomes a core part of software dev the overall quality of devs will suffer massively.
In short, prompting =/= learning
youtube
AI Jobs
2025-08-18T17:3…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzp5sTJucvD5Dnze8V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzJZHl10745hkxXetx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxj6ag3fVReAFxxVIZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz0sjkawo5pZPmpfp94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyI-AZLCEK5IRllpnx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyWFgQST6v2tKVuxUN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxCOjEuGf_ujNSzx8t4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugydn5nFt8jNU0bmfPV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzx5Ht3X20k1zWWDd54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxR8r0j-FRQ67CTTfN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
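A raw response like the one above can be parsed into per-comment codes and checked against the coding scheme before use. The sketch below is a minimal example, assuming the closed vocabularies implied by the Coding Result table and the raw responses shown here; the actual codebook may permit additional values, and `parse_response` is a hypothetical helper, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the coded examples above.
# Assumption: the real codebook may define more values than are visible here.
ALLOWED = {
    "responsibility": {"user", "developer", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation",
                "indifference", "mixed"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, skipping any
    row whose value falls outside the inferred vocabulary."""
    records = {}
    for row in json.loads(raw):
        codes = {k: v for k, v in row.items() if k != "id"}
        if all(v in ALLOWED.get(k, set()) for k, v in codes.items()):
            records[row["id"]] = codes
    return records

# Look up one coded comment by its ID, as the inspector does.
raw = ('[{"id":"ytc_UgzJZHl10745hkxXetx4AaABAg",'
       '"responsibility":"user","reasoning":"deontological",'
       '"policy":"none","emotion":"fear"}]')
coded = parse_response(raw)
print(coded["ytc_UgzJZHl10745hkxXetx4AaABAg"]["emotion"])  # fear
```

Validating before lookup matters here because the model output is free-form JSON: a single out-of-vocabulary value would otherwise flow silently into the coded dataset.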