Raw LLM Responses
Inspect the exact model output for any coded comment; entries can be looked up by comment ID.
Random samples:

- "I would say, we will be more effcient in these jobs than 'losing' them... Becaus…" (`ytc_Ugx08ODno…`)
- "Their history is built from innocent people flesh and blood. At the present, the…" (`ytc_UgwSle1QO…`)
- "Oh yes, I can see a future fascist leader with supporting politicians exploiting…" (`ytc_UgxX9TbFZ…`)
- "You have to be very attentive and patient when Sir Roger Penrose is responding t…" (`ytc_UgwKr6BXs…`)
- "AI IS THEFT, pure and simple! Arrest all ai developers and sieze their funding …" (`ytc_Ugx_2LDB0…`)
- "Would love to see a debate between Geoff Hinton and Sir Roger Penrose about whet…" (`ytc_Ugx2hTG8X…`)
- "I’m surprised how these AI experts are so incapable of imagining what people wil…" (`ytc_UgyC9B4Ii…`)
- "Artificial intelligence means a computer system which imitate intelligence regar…" (`ytr_Ugz_Hog5x…`)
Comment
> Machines are not sentient, period, even ai, I don't care how advanced it is.
> Pure and simple, they can't feel pain. At some point we might program them to respond to what we find painful, but it will be pain immulation and not actual pain.
> The droids in Star wars are not sentient, as they feel no pain even when fully disemboweled like c3po was in empire, and Chewbacca wore him like a backpack.
> Pure and simple, it's a plagiarism machine, eliminating positions people used to fill, making unnecessary pollution, and causing mental and social issues in its users, all so it can sell your info to corporate entities.
> It must be stopped, pure and simple. The cancer detector will treat a white person before it treats a black person in greater need (it's actually racist).
> Meanwhile people losing social skills, AI cults, etc etc etc.... it's all doing more harm than good, many times more harm than good.
Source: youtube
Timestamp: 2025-09-17T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyVwfV_3ZKlN6j1vdJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzcpI8GFeWCtbM1xD14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyr4yahbJ8Xh7Vqj5h4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy4RmGV6TiZVTKRSNx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwCpUyT_ZGPVHqcS0h4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwWS4YHA9nrwCzV9lh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyGnu6q-V-oOQzW0154AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxvHsHQyMCqRKb1QA94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyCuF3ZBi8HWPf0yNh4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyrCC_1twW89EKkYK54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
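The lookup-by-comment-ID workflow above can be sketched in a few lines of Python: parse the raw model response, validate that each row carries the four coding dimensions (responsibility, reasoning, policy, emotion), and index the rows by comment ID. This is a minimal sketch, not the tool's actual implementation; the function name `index_by_comment_id` and the inline two-row sample are illustrative, with field names and IDs taken from the response shown above.

```python
import json

# Two rows copied from the raw LLM response above, used as a stand-in
# for the full payload. In practice you would pass the model output string.
RAW_RESPONSE = """
[
  {"id": "ytc_UgyVwfV_3ZKlN6j1vdJ4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy4RmGV6TiZVTKRSNx4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "resignation"}
]
"""

# The coding schema's four dimensions, plus the comment ID key.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw coding response and return {comment_id: dimensions}."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        missing = REQUIRED_KEYS - row.keys()
        if missing:
            # A malformed row is surfaced immediately rather than coded partially.
            raise ValueError(f"row {row.get('id')!r} missing keys: {missing}")
        coded[row["id"]] = {k: row[k] for k in REQUIRED_KEYS - {"id"}}
    return coded

coded = index_by_comment_id(RAW_RESPONSE)
print(coded["ytc_Ugy4RmGV6TiZVTKRSNx4AaABAg"]["emotion"])  # resignation
```

The dictionary lookup mirrors the page's behavior: given a comment ID, you recover exactly the dimension values shown in the Coding Result table.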