Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
They need to create a low if a robot kills a human being. This is crazy. Why are…
ytc_UgxqebWfp…
My concern is not how AI will evolve and become powerful, but that this power wi…
ytc_Ugwtl0mCS…
Every time I learn more about AI, I become more certain that AI developers hate …
ytc_UgzN35NHB…
I believe AI is also overstepping in creative writing. You can literally use AI …
ytc_UgwW1oafj…
Tobe honest few years back would have not take these robot movies seriously and …
ytc_Ugwgr9f3-…
Agentic AI? Sounds like sci-fi, but Pneumatic Workflow's already helping us auto…
ytc_UgyXKX_S9…
Honestly a great solution to this, is instead of the AI blending multiple artwor…
ytc_UgwiVnZUM…
lol cope and seethe if i recall about 1 year ago AI could not do hands or eyes w…
ytc_Ugzrx8cIb…
Comment
Companies who are developing super ai are doing it purely for money, power and to be the first to the table!
As soon as super ai assesses these companies and their motives it will know what kind of species it is dealing with.
Humanity will probably be looked on as a species with too many failings to be worthwhile!
youtube
AI Governance
2025-10-17T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxW08Xq39gQqV1wvZV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgylZyxHFUoy5iUQpLJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyYkY30dO3UpgumFHt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyUMoZkL1QWBYiwIqN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwxTmW4b-qqsD80I-R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwtfZW4atAQDEoywDt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxh1LyuB_XoXjjuT0t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzW334WqoFQKyXKvUx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugznva-rv-5KzM4IoNR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyT2y5NdFvHvDNLHBx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
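A raw response like the one above can be checked against the coding scheme before it is stored. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred from the sample output shown on this page and may be incomplete, and the function name `validate_records` is illustrative, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the sample output above
# (assumption: the real scheme may define additional categories).
ALLOWED = {
    "responsibility": {"company", "government", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation",
                "indifference", "mixed"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and return any out-of-scheme values."""
    records = json.loads(raw)
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append({"id": rec.get("id"),
                                 "dimension": dim,
                                 "value": value})
    return problems

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
print(validate_records(raw))  # [] — every value is within the scheme
```

Records that come back with unexpected or missing dimension values can then be flagged for re-coding instead of silently entering the dataset.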