Raw LLM Responses
Inspect the exact model output for any coded comment. You can look up a comment directly by its ID, or browse the random samples below.
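As a rough sketch of what the ID lookup does behind the scenes, the coded results can be loaded into a dict keyed by comment ID. The file name `coded_comments.json` and the record layout are assumptions for illustration, not the tool's actual storage format.

```python
import json

def load_coded_results(path: str) -> dict[str, dict]:
    """Load coded comments from a JSON list and key them by comment ID."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # assumed: a list of records, each with an "id" field
    return {rec["id"]: rec for rec in records}

# Hypothetical usage: fetch one coded comment by its ID.
coded = load_coded_results("coded_comments.json")
print(coded.get("ytc_UgyfMH21s_XWjLyY2Sx4AaABAg"))
```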
Random samples — click to inspect
| Comment preview | ID |
|---|---|
| bc ai is not really “intelligent” at the moment. because it can’t really invent … | ytr_UgxsTCogP… |
| I’m for AI taking over. Humans are horrible. They’ve already killed off hundreds… | ytc_UgyfE_5Gr… |
| Are you [ray-cist]? I get where you are coming from on the job market and AI tak… | ytc_UgyZGkcHC… |
| A.I. is not super intelligent. It is all just repeats the same thing in differen… | ytc_Ugwq4FPGS… |
| As junior dev (5years experience) I agree with your assessment completely. Llms … | ytc_UgxnLqngq… |
| Someone WILL use AI for the wrong reasons...well, people already have... it's sc… | ytc_UgyHze89M… |
| @NightmareCourtPictures I know it was a joke and my question solely pertains to … | ytr_UgzB6btm-… |
| Excellent over review of the AI and was outstanding; I pray for the success of t… | ytc_UgwHNTaX6… |
Comment
I think the energy constraints will do the most to limit AI. Even with Nuclear. Biology has had a long time to create efficient energy systems. We still don't have effective understanding of how it all works. AI will do as humans do destroy things before they fully understand them.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Timestamp | 2025-09-04T19:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
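For readers working with these exports, the coding result above maps naturally onto a small record type. This is only a sketch of one possible representation; the field names follow the table, and the label vocabularies shown in the comments are inferred from this page rather than taken from the project's codebook.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions in the table above."""
    comment_id: str
    responsibility: str  # e.g. "ai_itself", "company", "developer", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "unclear"
    policy: str          # e.g. "none", "regulate", "liability", "unclear"
    emotion: str         # e.g. "fear", "outrage", "approval", "mixed"
    coded_at: str        # ISO 8601 timestamp of when the label was assigned

# Hypothetical instance using the values from the table above.
example = CodingResult(
    comment_id="ytc_example",  # placeholder; the comment's full ID is not shown above
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="none",
    emotion="fear",
    coded_at="2026-04-26T23:09:12.988011",
)
```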
Raw LLM Response
[
{"id":"ytc_Ugx_bV1jwLAjuNilkOl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxeb0e3BsIISpa6Qr54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"confusion"},
{"id":"ytc_UgwTDdEgXsZ7_fOv1OV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgztFGr4QwQqe2QA7kR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyfMH21s_XWjLyY2Sx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwWStSA1qosnBpGQvR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyjkcyiXxGHu13gCAt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwx79llVT16gbB0P6Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxsGLDsfs5jZMktyDh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxJXvUbV2lGnUpDP-B4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
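Since the model returns its codes as a JSON array like the one above, the raw response has to be parsed and sanity-checked before it is stored. A minimal sketch follows; the allowed label sets are inferred from the values visible on this page and may be incomplete.

```python
import json

# Allowed labels inferred from the responses shown above; the real
# codebook may define additional values.
RESPONSIBILITY = {"company", "developer", "ai_itself", "distributed", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"none", "regulate", "liability", "unclear"}
EMOTION = {"fear", "confusion", "approval", "indifference", "outrage", "mixed"}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only records with known labels."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if (rec.get("responsibility") in RESPONSIBILITY
                and rec.get("reasoning") in REASONING
                and rec.get("policy") in POLICY
                and rec.get("emotion") in EMOTION):
            valid.append(rec)
    return valid
```

In practice, the validated records would then be merged into the coded-results store keyed by comment ID, which is what the ID lookup at the top of this page reads from.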