Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- radiologist must review, verify, and sign off, i don't even think it would be le… (ytr_UgwtKR8V8…)
- I mean, if you think about it, AI art is basically people taking credit from the… (ytc_Ugw5HvZ9z…)
- heres what you morons do, china wont slow it down, so we slow ours down by contr… (ytc_UgwyOXNOL…)
- i opposed to have robots be independent no no no AI MOVIE WOW AND OTHER … (ytc_UgyE2UnqM…)
- “ChatGPT, that seems like a logical fallacy” / “You’re absolutely right, that is a… (ytc_Ugx_3MT5d…)
- @HowToParentAI it's nice to know that your life has improved because of talking … (ytr_UgzXjCZlr…)
- On top of that cheap manufacturer's are now claiming that anything that responds… (ytc_UgzqbKp4l…)
- Also AI isn't just learning from images online. It's being specifically trained … (ytc_Ugx-KZjOH…)
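The lookup-by-ID feature above resolves a comment ID to its record in a stored raw batch response. A minimal sketch of how such a lookup could work, assuming the raw responses are kept on disk as JSON arrays keyed by `"id"` (the `responses/` directory and the `find_coded_comment` helper are hypothetical, not the tool's actual API):

```python
import json
from pathlib import Path


def find_coded_comment(comment_id: str, responses_dir: str = "responses") -> dict | None:
    """Scan stored raw batch responses for one comment's coding.

    Assumes each file under `responses_dir` holds one raw LLM response:
    a JSON array of records, each with an "id" field (hypothetical layout).
    """
    for path in Path(responses_dir).glob("*.json"):
        for record in json.loads(path.read_text()):
            if record.get("id") == comment_id:
                return record
    return None


# Usage: the sample list shows truncated IDs, so a real lookup needs the
# full ID, e.g. one taken from a raw response like the batch shown below.
row = find_coded_comment("ytc_Ugzm7i1eNUUH_UCmy794AaABAg")
```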
Comment
The advantage that humans have currently in application development is that humans have to secure their job. They can't f*** up too many times or they will be removed. This constant reminder pushes them towards responsibility, consistency, accuracy and timeliness. The reason they code, fail, learn, code succeed and deliver on a daily basis. AI will respond with 'I do not know' because it doesn't care if it knows or not. AI will still suggest things that may not work. It equally doesn't care if it is wrong. No pressure either way because it doesn't have a job to lose. Humans keep the system honest by necessity and necessity is still the mother of invention.
Source: youtube
Topic: AI Jobs
Posted: 2026-03-11T16:3…
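The comment record pairs the full text with its provenance. A sketch of that record shape as a dataclass, assuming these are the only fields (the field names are mine, not the tool's):

```python
from dataclasses import dataclass


@dataclass
class Comment:
    """One sampled comment as displayed above (field names assumed)."""
    comment_id: str  # e.g. "ytc_Ugzm7i1eNUUH_UCmy794AaABAg"
    text: str        # full comment body
    platform: str    # "youtube"
    topic: str       # "AI Jobs"
    posted_at: str   # ISO 8601 timestamp (truncated in the view above)
```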
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwNsc2j87xj_d6K9sp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxaPOl4MKwV1_L1cmB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzgml00kvFFaYkij-R4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwfBwlXj8E6cRIVHPF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugya5c2gsguTds5XTa54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzm7i1eNUUH_UCmy794AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzrodSy5xUDiVgtajl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwvHT5uye2tdNn7zyh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy24qDFGUO1e-lY_td4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyan-UTjFf7BTpRkCd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
```
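The raw response is a JSON array with one record per comment in the batch; the Coding Result table above is simply this comment's record (`ytc_Ugzm7i1e…`) rendered as rows. A sketch of parsing and spot-checking such a response, assuming the value sets below, which are inferred only from the records visible on this page (the real codebook may allow more values):

```python
import json

# Allowed values per dimension, inferred from the records shown above;
# treat these sets as an assumption, not the full coding schema.
SCHEMA = {
    "responsibility": {"ai_itself", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "industry_self", "liability"},
    "emotion": {"fear", "indifference", "approval", "outrage", "resignation"},
}


def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, rejecting malformed records."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records
```

Validating before storage catches the common failure mode of batch coding: the model drifting off the codebook (a misspelled label, a missing dimension) partway through an otherwise well-formed JSON array.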