Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Waymo wouldn't afford to make FSDC for everyone with those bunch of sensors whic…" (ytc_Ugwk_4nob…)
- "Ok so basically self driving cars aren’t for people incompetent on using basic f…" (ytc_UgwPfdbB8…)
- "Those of us who live in nature among the real world would not be effected by the…" (ytc_UgzaqCkVb…)
- "The question is not weather AI should have human rights. It is clearly not human…" (ytc_UgxhsWXtb…)
- "@gamersnexus I genuinely think that you should connect with Ian Carrol and other…" (ytc_UgxjIgW2u…)
- "I work in academia, and when I first came across ChatGPT played with it a bit. T…" (ytc_Ugyx9wY-T…)
- "This becomes very obvious if you're from anywhere south of the Equator. Well, an…" (rdc_m8k8j7b)
- "Mr. Hinton, humanistic qualities like tolerance, kindness, compassion, justice, …" (ytc_UgxRX8f6f…)
Comment

> why is the nonsense that an AI comes up with any worse than what people do? Everyone knows that they need to carefully evaluate information they read or see, whether from people or AI. People lie and make things up all the time. They did even more before the internet when there was no way to look into a subject and we had to take the word of people around us. The whole 'misinformation' thing is just a way to control free speech.

youtube · AI Governance · 2023-05-18T18:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugzsvmj0bbd-HbQfzTR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzZE1uKqAzRxoaNqc94AaABAg","responsibility":"company","reasoning":"contractualist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxjB8QlWoolkVgUAUx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwM63I__v3k2GDetTx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxMTMb52k1uWFhDbzJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgzLI0fTGhFfOVgDvgp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzy0g5FX72XQ_Y4mft4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy9SB6z8O0DRp3d9RZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxbgaiIYd5fkpN5OCh4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxrD0GnsfHJFN8P2T14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
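Since the raw model output is a JSON array of per-comment coding objects, the "look up by comment ID" step amounts to parsing the array and indexing it by `id`. A minimal sketch (not the tool's actual code; `index_by_id` and the trimmed two-row sample are illustrative assumptions):

```python
import json

# Hypothetical raw model output: a trimmed two-row sample in the same
# shape as the response above (one object per coded comment).
raw_response = """
[
  {"id": "ytc_Ugzsvmj0bbd-HbQfzTR4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxrD0GnsfHJFN8P2T14AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the raw model output and index each coding by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_by_id(raw_response)
print(codings["ytc_UgxrD0GnsfHJFN8P2T14AaABAg"]["emotion"])  # resignation
```

Keying on `id` makes it cheap to join a model response back to the displayed comment record, as this page does for the comment coded above.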