Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "Okay, but what if AI is faster and better than me because where is the fun in tha…" (ytc_UgwLhW_8l…)
- "Ai is potential , It's only \"alive\" on input . A billion others, But you gave it…" (ytc_Ugzn3-I1f…)
- "This is probably something where Reddit and reality diverge. Just like every oth…" (rdc_enjynwi)
- "Thank you for your enthusiastic response! If you're intrigued by AI interactions…" (ytr_UgwHllwi3…)
- "The worst part is that the AI image doesn't look bad, and that's probably a thin…" (ytc_UgxT4TtiO…)
- "A middle ground would be training an AI with public data, BUT release it as FREE…" (ytc_UgwkdHX9e…)
- "If making the art, you should be satisfied with the result of an original painti…" (ytc_UgzIEHspx…)
- "I hate genAI but I hope you read parts of the article where they explained it wa…" (ytc_Ugz0WFBUh…)
Comment

> It looks like the AI is not reliable enough yet, so those professionals made a poor choice signing on to the output of an AI. But fundamentally it is not that hard a problem or at least doable. Like it would go like civil model damages of negligence on service vehicle air craft time since event location of persons etc. Then it would have to find typical strategies for problem, precedent.

Source: youtube · AI Responsibility · 2023-06-10T23:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgweKBRmXw7jhTGFtVp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugyy4rhJG_YfCmrAAR54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwv1Vd7YhVeHiZTClx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxeTnSLM-KTkP1tmHx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgxNobYELMGkPKfrzjh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugz-M4qXZCyHGpvXLvJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_Ugz6EcQ5fJuv8aCp8zV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyphYKJZtvSoUY7wdZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx4uDUBVdnQ3oPiEWd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyT596gJenGm2joa9l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
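The raw response is a JSON array with one object per comment ID, which is what makes the lookup-by-ID view above possible. As a minimal sketch of that step, the following parses such an array and indexes it by ID, rejecting entries with out-of-vocabulary labels. The field names come from the response above; the allowed value sets are assumptions inferred from the values visible in this sample, not an exhaustive codebook.

```python
import json

# Raw model output: a JSON array of coded comments (shortened here to two
# entries taken from the response above).
raw = """
[
  {"id": "ytc_UgweKBRmXw7jhTGFtVp4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz-M4qXZCyHGpvXLvJ4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"}
]
"""

# Allowed values per dimension -- inferred from this sample only.
ALLOWED = {
    "responsibility": {"user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"liability", "regulate", "industry_self", "none"},
    "emotion": {"outrage", "approval", "mixed", "indifference",
                "resignation", "fear"},
}

def parse_codings(text):
    """Parse the model's JSON array and index it by comment ID,
    dropping any entry with a value outside the allowed sets."""
    by_id = {}
    for entry in json.loads(text):
        if all(entry.get(dim) in vals for dim, vals in ALLOWED.items()):
            by_id[entry["id"]] = entry
    return by_id

codings = parse_codings(raw)
print(codings["ytc_Ugz-M4qXZCyHGpvXLvJ4AaABAg"]["emotion"])  # resignation
```

The printed entry matches the Coding Result card above (user / consequentialist / liability / resignation), which is how a coded view can be rendered directly from the raw response.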