Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The real problem is the automation. But we don't really have any laws to protect against automation. People have been losing their jobs to automation for over a century. And it just gets worse to more sophisticated the technology becomes.
Traditionally the argument has been we're making work safer because people no longer need to do dangerous or monotonous jobs. But now that art is being threatened by AI. People are realising nobody's job is safe. And we're really only scratching the surface of what large language models can do. We haven't even touched on what general AI will be able to do.
People should be concerned because governments don't have the legal infrastructure to safeguard jobs.
| Source | Title | Posted |
|---|---|---|
| youtube | AI Responsibility | 2024-06-03T20:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyeGX5MaxSW3R_b9454AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxZyxPRRGeAOIyMq3N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyF_wwFDgoaDzyDJgN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz_dwQPySUXeno5_I94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzNwWNLaAt3lGabXsd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyWrYsWuQ7KEcAYHAN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwh2UMC7soTaYtrWI94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwX4ccdc_wu7ZG29h14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxP2cef6ITk6cGOpR54AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyJWI5WYfUQcfMzamB4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
```
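A response like the one above can be parsed and indexed by comment ID so that each comment's codes are retrievable directly. The sketch below is a minimal illustration, not the tool's actual implementation; the allowed value sets per dimension are assumptions inferred from the sample output here, and the real codebook may differ.

```python
import json

# Raw LLM response: a JSON array of per-comment codes, keyed by comment ID.
# (One record from the sample above, shown here inline for illustration.)
RAW = """[
  {"id": "ytc_Ugwh2UMC7soTaYtrWI94AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# Allowed values per dimension (inferred from the sample response above;
# the actual codebook may define a different vocabulary).
SCHEMA = {
    "responsibility": {"none", "company", "government", "user", "ai_itself"},
    "reasoning": {"mixed", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "approval", "outrage", "mixed", "fear",
                "resignation"},
}

def index_codes(raw: str) -> dict:
    """Parse the raw LLM response and index records by comment ID,
    skipping any record whose values fall outside the schema."""
    indexed = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            indexed[rec["id"]] = rec
    return indexed

codes = index_codes(RAW)
print(codes["ytc_Ugwh2UMC7soTaYtrWI94AaABAg"]["emotion"])  # fear
```

Validating against a fixed vocabulary at parse time catches the most common failure mode of LLM coding, where the model invents a label outside the codebook.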