Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I really like new tech and I always want to stay up to date. I used midjourney f…" (ytr_UgytxyU2B…)
- "As an AI image generation enjoyer, I must say: If someone says anything positive…" (ytc_UgxBIRLzB…)
- "A primary care physician seeing 60-70 patients per day is not the goal. That is …" (ytc_UgyLvKKdI…)
- "these weak arguments are so painful to listen to that I couldn't make it more th…" (ytc_UgzPXwOdm…)
- "Shows who runs the shop. Smart CEO would wait for a report on automation before …" (ytc_UgxY3xHA8…)
- "UNIVERSAL BASIC INCOME(UBI)? HAHAHAHA! SOUNDS LIKE SOCIALIST CAPITALIST BACK IN …" (ytc_UgygTjE4V…)
- "One aspect of AI safety nobody talks about: how about by trying to make AI safe,…" (ytc_Ugwn2IBMe…)
- "What bothers me most is that "good" and "bad" are opinions. And that the 60's b…" (ytc_UgweNmjAp…)
Comment
There are so many companies that completely align with the behaviour of AI in the first half of this video. If you have an objective, you achieve the objective. The collateral damage that happens is a great misfortune, but necessary to achieve the goal. "Safety is our number 1 priority," okay, the why is an employee dead and you have thousands of injuries? Its very funny that the big corpos wanna make a pact to not make destructive AI. What safeguards do they have to put in place to protect their own capital and positions? Is a Super AI based on any utilitarian prospects? What does that say for the suggestions it'll make for the environment, human happiness, human and earth's prosperity? I imagine radical change would have to happen. I have no optimism about any of this.
youtube · AI Governance · 2025-08-27T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx1H7FTORlZaRPFvYd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugyhd33bsBJQ7VFV-nF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw_MuQUlVD8S3g_nnR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgydAuDndVSN6cCLMXV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwMZ3q4QX4DVeRWs2B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
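A raw response like the one above can be parsed and indexed by comment ID with a few lines of Python. This is a minimal sketch, not the tool's actual backend: the function and variable names are hypothetical, and only the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response shown here.

```python
import json

# Hypothetical example payload, shaped like the raw LLM response above.
raw_response = """
[
  {"id": "ytc_Ugx1H7FTORlZaRPFvYd4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugyhd33bsBJQ7VFV-nF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(payload: str) -> dict:
    """Parse the model output and map each comment ID to its coded dimensions,
    raising if any expected dimension is missing from a row."""
    coded = {}
    for row in json.loads(payload):
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id')}: missing {missing}")
        coded[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return coded

codes = index_codes(raw_response)
print(codes["ytc_Ugx1H7FTORlZaRPFvYd4AaABAg"]["emotion"])  # outrage
```

An index like this is what a "look up by comment ID" view needs: one `dict` access per lookup, with malformed rows rejected at parse time rather than surfacing later as missing table cells.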