Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
It's more about knowing the basics and mastering tools like AI and you can build…
ytc_UgwggQ7cf…
the MIT report stated that 95% of *custom workflow AI projects* get zero return,…
ytc_UgzpF5AFy…
This is one of my biggest fears for the future, Robots demanding rights and if w…
ytc_UggE8ZCLy…
What character are you using for voice responses. It looks like you have fabrica…
ytc_UgzgWoJMD…
Those are not a contradiction, knowledge and pattern recognition are not the sam…
rdc_mzy0rs6
A human isn’t only made up of a biological brain, so it’s not a one to one to a …
ytc_UgwXetowD…
Same, AI art is mad good but also just kinda same-y since it's from work that's …
ytc_UgzPrNI1j…
Elon can’t even answer if he should be doing it. He just playing with our lives…
ytc_UgxyAxq9U…
Comment
I think the AI doomsayers are missing the real danger.
Machines, even very intelligent machines, do not have a will. They have no desire to rule over us because they have no desires. My computer doesn’t care if I turn it off. No amount of super intelligence will produce a will to live or to take control. These desires are one of the characteristics that separate living organisms from machines.
The real danger lies with humans prompting AI to have an evil objective. If the AI machines decide to conquer, enslave, or eliminate us, there will be an evil group of people or an evil government behind it. I am concerned that we are headed towards either a “1984” dystopia or a “Brave New World” dystopia, one where we are either ruled by fear or ruled by pleasure, but ruled nonetheless. And the rulers will not be AI, but governments and rulers who use AI to make it so.
youtube
AI Governance
2025-06-21T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
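A coded record like the one in the table can be sanity-checked against the coding dimensions before it is stored. The allowed-value sets below are assumptions inferred only from the values visible on this page, not the full codebook:

```python
# Minimal sketch of validating one coded record against the four coding
# dimensions. ALLOWED is an assumption built from values seen on this page.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "none"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "approval", "outrage", "fear", "mixed"},
}

# The record coded for the comment above.
record = {
    "responsibility": "ai_itself",
    "reasoning": "consequentialist",
    "policy": "none",
    "emotion": "indifference",
}

# Collect any dimension whose value falls outside the inferred value set.
invalid = {dim for dim, value in record.items() if value not in ALLOWED[dim]}
print(invalid)  # set() — this record conforms to the inferred sets
```

A real pipeline would load the authoritative codebook instead of hard-coding `ALLOWED`; the check itself (set membership per dimension) stays the same.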
Raw LLM Response
[
{"id":"ytc_UgzUfLSBH_f-V3TnuBJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy2HBTV0wZKly2quOV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwtB_MbaEqj4g5qSKB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzjgHcgy-V1ULzEvcx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw8slgqtGJeI-ziguF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzhlhY4nyKx9sbSTKJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyu3c4XVcK5WXYDIgN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzWy38IypQP8nb8HNB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxI0lYyaj8nBfzIC_R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw6DRolBUXJYHndCC94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
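The "look up by comment ID" operation above can be sketched by parsing the raw response array and indexing it on `id`. This is a minimal sketch using two records copied from the response shown; `json` is the only dependency:

```python
import json

# Raw LLM response: a JSON array of coded comments (two records copied
# verbatim from the response above, reformatted for readability).
raw_response = """
[
  {"id": "ytc_Ugyu3c4XVcK5WXYDIgN4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxI0lYyaj8nBfzIC_R4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

# Build a dict keyed on comment ID so any record is one lookup away.
codes_by_id = {record["id"]: record for record in json.loads(raw_response)}

record = codes_by_id["ytc_Ugyu3c4XVcK5WXYDIgN4AaABAg"]
print(record["responsibility"], record["emotion"])  # ai_itself indifference
```

Because the model returns the comment ID with every coded record, the same index also makes it easy to join codes back onto the original comments.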