Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Um, so if Google has a policy that makes programmers program to always have a re…" (ytc_UgxQPxen-…)
- "They have literal courses of what AI prompts to use to get specific kinds of res…" (ytc_Ugwvy-vnp…)
- "AI can be used as a tool, but if the entire process is just asking the AI to do …" (ytc_UgwcOMdy4…)
- "I'm curious to see what happened now that Ai has pissed off the mouse the mouse …" (ytc_UgzPu54RX…)
- "AI can’t learn dilla’s swing. He wasn’t even human. His swing isn’t something yo…" (ytr_Ugz6zDlRu…)
- "Another prediction that flopped...what has AI done for the average person? Nothi…" (ytc_Ugx3sroIu…)
- "There will never be morality based artificial intelligence because its creators …" (ytc_UgyZwA-gU…)
- "Yeah, AI is used as a scapegoat. Not only can you easily get rid of the overhire…" (ytc_Ugzei8Jj8…)
Comment

> AI will be used to cause harm. The actors will be everyone from some black hat anarchist to your local govt using it too develop efficient methods to dispose of people in the name of whatever reason they elect to choose. It will be used to twist the truth into knots until people can no longer trust their eyes ears or thoughts. Pandora's box is open.

youtube · AI Governance · 2024-04-02T07:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyW5t9aiqD3qVBNGV14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzTrh36IHregjJiQkR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxxC-c1fCLRXLrmoBh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgznRwRNVD70Qps00594AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzMdILj8KsK67H4UvR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwIENoN5Qh4T9hLJ1N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyYnqRriy_rPFzqwHV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxVUwCzmMYK9Gfuzdh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxejlR28ozyyJazTGV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzHjbMVoP8m2uWkS5N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
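The "look up by comment ID" step above can be sketched in a few lines: parse the raw LLM response, drop any record whose values fall outside the coding scheme, and index the rest by ID. This is a minimal sketch, not the tool's actual implementation; the allowed value sets below are inferred only from the responses shown on this page (the real codebook may define more categories), and the two-record input is a subset of the array above.

```python
import json

# Allowed values per dimension, inferred from the responses shown above
# (assumption — the actual codebook may include additional categories).
SCHEMA = {
    "responsibility": {"none", "distributed", "ai_itself", "user", "developer"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"unclear", "none", "regulate", "industry_self", "ban"},
    "emotion": {"unclear", "fear", "resignation", "mixed", "approval", "indifference"},
}

# A two-record subset of the raw LLM response shown above.
raw = '''[
 {"id":"ytc_UgxxC-c1fCLRXLrmoBh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzMdILj8KsK67H4UvR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"}
]'''

def index_by_id(raw_json: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID,
    skipping any record with a value outside the coding scheme."""
    records = {}
    for rec in json.loads(raw_json):
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            records[rec["id"]] = rec
    return records

codings = index_by_id(raw)
print(codings["ytc_UgxxC-c1fCLRXLrmoBh4AaABAg"]["policy"])  # regulate
```

Validating against the schema before indexing means a malformed or hallucinated label from the model is dropped rather than silently stored alongside clean codings.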