# Raw LLM Responses

Inspect the exact model output for any coded comment, either by looking up a comment ID directly or by browsing the random samples below.
- `ytc_Ugy09zxuX…`: "Ai tools will definitely mean no more programming jobs. And Satyanaass Nadella w…"
- `ytr_UgymC8qr0…`: "Unfortunately the AI executives have already figured out how to create infinite …"
- `ytc_Ugz2ObzXQ…`: "While I agree with a lot of your points, I do want to say that AI is nothing mor…"
- `ytc_UgyUnf-ad…`: "I'll keep the LLM on the other monitor for now so i can build a mental model. I …"
- `ytr_UgwWrhC_5…`: "I refer to them as AI generated images, that is what they are. Regurgitated imag…"
- `ytr_UgxDtz2Ju…`: "@ItsNikoHimSelf ChatGPT is not a reliable source as it will always give you an a…"
- `ytc_Ugx-YZDTO…`: "AI is not replacing jobs and it is financially unsustainable unless it actually …"
- `ytc_UgzpmCfVL…`: "Ha. Now software engineers have AI to blame when something goes horribly wrong. …"
## Comment

> People need to look at this dilemma from a totally different perspective: Are the odds of AI destroying humanity is greater than Humanity destroying itself by any other means such as nuclear catastrophe or anything of that matter? On the other hand, are the odds of AI actually preserving humanity from destroying itself higher than the odds of humanity managing to survive on it on?

Source: youtube · Topic: AI Governance · 2025-09-07T18:0…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response

```json
[
{"id":"ytc_Ugylh35WqrsE9OGrKeh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxILDl40fY120qgr014AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxFPIvKy3oq3kAMOft4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz09XTiu-w-wVE3SHF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy4APaWuPnIm8L9Bvp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxpdvgFLfRqRTiDfT54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxnPJjxSIxw_eQOoxB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyPh5-twXXOoqP2jSV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyXkwor3DSun0cbFwh4AaABAg","responsibility":"media","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw_tGB4Q9aOhp9DUBd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
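A response like the one above is a JSON array of coding records, one per comment, each carrying the four dimensions shown in the table (responsibility, reasoning, policy, emotion). The sketch below shows one way to parse such a response and index it by comment ID for lookup; the field names mirror the response shown here, but the exact schema and validation rules of the coding tool are assumptions, and the two sample records are taken from the array above.

```python
import json

# Raw model output: a JSON array of coding records (two records shown
# here for illustration, copied from the response above).
raw_response = """[
  {"id": "ytc_Ugylh35WqrsE9OGrKeh4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy4APaWuPnIm8L9Bvp4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]"""

# Hypothetical schema check: every record should carry these fields.
EXPECTED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(payload: str) -> dict:
    """Parse the raw response and index records by comment ID,
    skipping any record that is missing an expected field."""
    records = json.loads(payload)
    return {r["id"]: r for r in records if EXPECTED_FIELDS <= r.keys()}

codings = index_by_id(raw_response)
print(codings["ytc_Ugy4APaWuPnIm8L9Bvp4AaABAg"]["policy"])  # regulate
```

Indexing by ID is what makes the "look up by comment ID" workflow above cheap: after one parse, each inspection is a dictionary lookup rather than a scan of the array.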