Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "My Opinion is if it comes to that that AI can do most of the Jobs you should not…" (ytc_UgyO0CWHn…)
- "I actually do have autism and I do not like AI at all AI all it does is take peo…" (ytc_UgwyDVlZn…)
- "LLMs do actually so they don't know more often than people give it credit for. I…" (ytr_UgymcRj0D…)
- "Watermarks are the solution, it has to be clear images are generated by ai. Also…" (ytc_UgzJGpRjj…)
- "Ok. I think that the Self Driving car should be programmed not to get its self i…" (ytc_Ugis_iWcr…)
- "Anyone using Ai for art or design should not call themself an artist. They're ju…" (ytc_Ugzue64WC…)
- "The jim crow laws. Were aimed at black people.And the fact that slavery was lega…" (ytc_UgxUS367F…)
- "Great video! It's nice to know my gut feelings have been correct for the last fe…" (ytc_Ugw8_800I…)
Comment
Let's talk about judgement. Humans at the current moment are exercising poor judgment by investing so much money to achieve AGI and ASI. There's the sustainability problem, but there's also the problem of money being diverted away from helping people live their lives. Money and effort diverted away from climate change in order to pump much more CO2 into the atmosphere than we otherwise would by scaling these models. The thing is, humans already understand climate change and can take more action today. Humans understand how to lift people out of poverty today.
We don't need AI, but rather humans need to keep improving how we work together and solve real problems. And we can! Our brains all put together represent an enormous amount of computing power!
youtube
AI Responsibility
2025-11-24T19:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx4zERVmfjp4IRXO1F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwh9k_HZ-exWFgO6EF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyq9BHmpcIHl4s22Wx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwZ3E70T3uP5TKzVNh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyu6WU74T3c-v-wJl54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwRTQd7p4_OVO9i-Jp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgztLknNz9eP-a6m0mR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzOFlv8r72Km5t2CBB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz160cl2gbUuyBGD494AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyNsAIfSadB9By9omN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
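The lookup-by-comment-ID view above can be reproduced from a raw response like this one: parse the JSON array and key each record by its `id`. A minimal sketch, assuming the response is a valid JSON array with the dimension fields shown in the coding result (`responsibility`, `reasoning`, `policy`, `emotion`); the function name `index_codes` is illustrative, not part of the tool.

```python
import json

# Two records copied verbatim from the raw LLM response above.
RAW_RESPONSE = """
[
  {"id":"ytc_UgwZ3E70T3uP5TKzVNh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyu6WU74T3c-v-wJl54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"}
]
"""

def index_codes(raw: str) -> dict[str, dict]:
    """Parse the JSON array and key each coded record by its comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

codes = index_codes(RAW_RESPONSE)
print(codes["ytc_UgwZ3E70T3uP5TKzVNh4AaABAg"]["emotion"])  # fear
```

Keying by ID also makes it easy to detect records the model dropped or duplicated: compare the dictionary's keys against the batch of comment IDs that was sent in the prompt.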