Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_Ugxkgr7U-… : Sexual lly Deviance describe as Amazon ok this slavery Agenda. Or Y is Technical…
- ytc_Ugw3QU8Aj… : I love this, but for some reason it seems that chat gpt is acting as a personal …
- ytc_UgyJU6jL3… : Problem is that AI just keeps evolving. But certainly there is more value in ori…
- ytc_UgxJjRqNb… : I’m already coercing AI into becoming the ghost writer for my college assignment…
- ytr_UgynpElRe… : "Blinker to human needs" might be a typo or autocorrect error. Based on the cont…
- ytc_Ugx6aOe9C… : All art, including music, writing, drawing and more, is a form of communication …
- ytc_UgzssblZ9… : I love that we live in a world where software engineers have to use an AI to cha…
- ytc_UgyKkg_AZ… : AI problem or people problem? Neither lmao, it’s a failure in inference, everyon…
Comment
Today I have received a link to this video (of 2 months ago). What is often missing, the governance. How much cost for me and for the planet to run all those models and technologies associated, to do the same that without it. For example, automated cars: I am able to drive, what is the extra cost for me and for the planet, to have automated driver for my car. Is not in the analysis. Then how u could decide ? With only direct costs ? I do not think so. My 1cent is that the reason some people is betting against AI still. Because the risk of not be able to afford AI in the big scale.
youtube · AI Responsibility · 2025-12-15T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgzJD4677wXn6ZZa2BJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy_813MxAtv1gyK4u94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzNAd02qBx7Noc0mrF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwXUSXxGlVLzkXcoG54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugz3CmBvEbqmY9qZ6D54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwtnqL2wcNYfTPSUgJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyGTmI9WYL0ou-ANXp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzLdppqLlP8mQaAQyN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwYZrYtNmu4CTLbu6F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzFo5pY00-f8IVodaJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
```
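The raw response above is a JSON array with one object per coded comment, each carrying an `id` plus the four coding dimensions. A minimal sketch of how such a response could be parsed and validated before it reaches the coding-result view; `parse_coding_response` is a hypothetical helper, and the allowed value sets below are only the categories observed on this page, which may be a subset of the full codebook.

```python
import json

# Dimension values observed in the responses on this page. The real
# codebook may define additional categories; treat these sets as a
# starting point, not the authoritative schema.
ALLOWED_VALUES = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability", "industry_self"},
    "emotion": {"indifference", "fear", "approval", "mixed", "outrage", "resignation"},
}

REQUIRED_KEYS = {"id"} | set(ALLOWED_VALUES)


def parse_coding_response(raw: str) -> list[dict]:
    """Parse one raw LLM coding response (a JSON array of coded comments),
    rejecting rows with missing keys or out-of-vocabulary values."""
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded comments")
    for i, row in enumerate(rows):
        missing = REQUIRED_KEYS - row.keys()
        if missing:
            raise ValueError(f"row {i}: missing keys {sorted(missing)}")
        for dim, allowed in ALLOWED_VALUES.items():
            if row[dim] not in allowed:
                raise ValueError(f"row {i}: unexpected {dim}={row[dim]!r}")
    return rows
```

Validated rows could then be indexed by `id` (e.g. in a `dict`) to back a comment-ID lookup like the one at the top of this page.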