Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Mind-blowing! Chloe represents a giant leap in AI. Can't wait to see how self-l…" (ytc_Ugzg0EhQJ…)
- "still the realm of science fiction. the hardware, power, and cooling requirement…" (ytc_Ugy3iWUKu…)
- "The only AI I really support is a food industry in Tokyo where its staff is NOT …" (ytc_Ugzw-_0nn…)
- "The real evil here is not the robot, but the people that programmed it. I think …" (ytc_UgwVzybAm…)
- "I like how they give opportunities for the kids to use up their excess energy bo…" (ytc_UgxRVClOM…)
- "In the interview the robot said he whispered in robots mic he told him His Moth…" (ytc_UgyCkwACZ…)
- "This automated piece of shit is just there to frustrate you and raise your blood…" (ytc_UgxMsVuS9…)
- "One of the biggest assumptions is that AI will be infallible enough or managemen…" (ytc_UgxHJfqg5…)
Comment
This film has totally overlooked the human spirit — the need for people to have a sense of worth, a sense of achievement that comes through completing a job of work, through creating, building, producing or providing a service. As an example, if the retail market is swamped with Ai novels it will rob creative writers of an outlet for their work, a platform through which to showcase their writing skills and measure their success through sales. Contrary what this film tells us (at 4:44), I do not believe society will be happy living the Ai life the BBC is predicting here.
youtube
AI Governance
2026-01-28T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
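A coded record like the one above can be sanity-checked against the category values that appear in this session. A minimal Python sketch; the allowed sets below are inferred only from the values observed here, so the full codebook may define more (assumption), and `validate_coding` is a hypothetical helper, not part of the app:

```python
# Allowed category values per dimension, inferred from the codings observed
# in this session — the actual codebook may include additional values.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def validate_coding(record: dict) -> list:
    """Return (dimension, value) pairs that fall outside the allowed sets."""
    return [
        (dim, record.get(dim))
        for dim, allowed in ALLOWED.items()
        if record.get(dim) not in allowed
    ]

# The record shown in the Coding Result table above:
record = {"responsibility": "ai_itself", "reasoning": "deontological",
          "policy": "none", "emotion": "outrage"}
print(validate_coding(record))  # [] — every dimension is within the observed sets
```

An empty list means the record uses only values seen elsewhere in the data; anything returned flags a value worth inspecting by hand.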
Raw LLM Response
```json
[
{"id":"ytc_UgyGWzCwGHlpdE78-Sh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyj8NDS4NEtXgvXvw54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxAkMR4UegI_aip3U54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy9EwhYKlzoBU8Ku3R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzvLgVtfeFuPxGoNNh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwAlJn5pQuqto7bzXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzi9_4dkzB2d9gMpnN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzhDOYVkkd0cWYQDC94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzgYEGEqsq4oaH5lP54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxvInPQihlLeWQX9s94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
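A raw response in this shape can be indexed by comment ID, which is what the look-up view above needs. A minimal Python sketch, using two records copied verbatim from the response; `index_by_comment_id` is a hypothetical helper, not part of the app:

```python
import json

# Raw LLM response: a JSON array of per-comment codings, as shown above
# (two records copied from the full response for brevity).
raw_response = '''[
{"id":"ytc_Ugyj8NDS4NEtXgvXvw54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzvLgVtfeFuPxGoNNh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw coding response and key each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_comment_id(raw_response)
coding = codings["ytc_Ugyj8NDS4NEtXgvXvw54AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself outrage
```

Keying by ID makes "look up by comment ID" a constant-time dictionary access rather than a scan of the array.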