Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below to inspect it; a sketch of such a lookup follows the sample list.
- "I'm not against automation. Truck driving is dangerous and it takes a health tol…" (ytc_UgwHdMENc…)
- "The atheist's side 'slight edge' would be far bigger if the believer AI wouldn't…" (ytc_Ugw-0jXEu…)
- "I told my son hes not allowed on games or media until hes 17. He will get a phon…" (ytc_Ugz6cg8Ua…)
- "She's offering good advice here. One way to ask AI for help is to preface with …" (ytc_Ugz-nRRPs…)
- "Ya ... What you fo if that robot opens up on you... 😂 FIRE IN THE HOLE... 😂😂😂…" (ytc_Ugzdn2w9y…)
- "Consuming AI films and movies is like having sex with a blowup doll with your fa…" (ytc_UgxEP3ALB…)
- "Kaka Creations is LITERALLY using ai in a positive way, and people are STILL cra…" (ytc_UgymiI6kJ…)
- "If one is in touch with their feelings you realize what’s being pushed is a bit …" (ytc_UgzF1y4vp…)
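The lookup itself is simple once coding results are stored as records keyed by comment ID. Below is a minimal sketch, assuming the codings live in a JSON array file; the file name, and the example ID taken from the response further down, are illustrative rather than the actual storage layout.

```python
import json

def load_codings(path):
    """Load a JSON array of coding records into a dict keyed by comment ID."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {rec["id"]: rec for rec in records}

def lookup(codings, comment_id):
    """Return the coding record for a comment ID, or None if it was never coded."""
    return codings.get(comment_id)

if __name__ == "__main__":
    codings = load_codings("codings.json")  # hypothetical path
    record = lookup(codings, "ytc_UgyMm9lzQRoIbMu7-f94AaABAg")
    if record is not None:
        for dimension in ("responsibility", "reasoning", "policy", "emotion"):
            print(f"{dimension}: {record[dimension]}")
```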
Comment
A question relating to the [obvious] "risk-benefit" dilemma: on the "beneficial" side of the equation...just how "beneficial" IS "with help of AI making wonderful things in medicine", and by that diminishing causalties by desease and/ or extending longivity etc. - given already deplated resources the plannet has to offer? Will that "nice" aspect of AI not, at the very least, present the ethical problem of "who, then, gets to live longer/ be saved - because we all can't" etc. ? [unless, of course we manage to use AI to stretch the resources of the planets to go further]. Sounds like a quite "brittle" future to constantly, with the help of AI, ballance things. We could just "stop" with what we've got so far (realizing that life is at this pont quite convenient for a lot of people) - but of course we won't...
youtube · AI Governance · 2025-09-28T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz5UqtsMNfURMiJxOl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzj05yckhhyVqQEjzB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwEef799UCrebyH-tt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxd4IEGj9mq1mIj8B14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyMm9lzQRoIbMu7-f94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzrvHoE_P9soha8UJl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxub36U5k8HMDjXFKl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzNuW72pXJYBJ2E3-Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwlUcmSjX3ls1RUVkB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyjZ98gDssRLlPZVQ54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
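The raw response is a JSON array with one object per comment in the batch. A minimal sketch of how such a batch response could be parsed and sanity-checked is shown below; the allowed value sets are inferred from the codings visible on this page, not from the full codebook, and the function name is illustrative.

```python
import json

# Allowed values inferred from the codings shown above; the real codebook may differ.
DIMENSIONS = {
    "responsibility": {"none", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "approval", "indifference", "resignation", "mixed"},
}

def parse_batch_response(raw_text):
    """Parse a raw batch response (a JSON array of per-comment codings) and
    collect warnings for any record that uses an unexpected value."""
    records = json.loads(raw_text)
    warnings = []
    for rec in records:
        for dim, allowed in DIMENSIONS.items():
            value = rec.get(dim)
            if value not in allowed:
                warnings.append(f"{rec.get('id', '<no id>')}: unexpected {dim}={value!r}")
    return records, warnings
```

Running the response above through a check like this would come back clean, since every record uses only the values listed, but a malformed or truncated model reply would surface as a JSON error or a list of warnings rather than silently corrupting the coded dataset.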