Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Well, why should machines and people have the same level of importance? Simply p… (`ytc_UgxZ7h-lx…`)
- That comment at the beginning where they say "Commissions can only paid by rich … (`ytc_UgzfifCKQ…`)
- „That’s how humans learn too” No. That is false. Completely and utterly false. … (`ytr_Ugz80jAWL…`)
- No it’s already hacked ai have capabilities to destroy everything ppl need to pa… (`ytc_UgzrAiyqF…`)
- So, the snarl that I see is that if unemployment actually /were/ at 25%: (1) I w… (`ytc_UgyHWe37O…`)
- Ppl are putting wayyy too much faith and trust in AI. I dont think its a good id… (`ytc_UgxO9HZ_G…`)
- This guy is ALWAYS so pessimistic is all. AI can be looked at through any lense.… (`ytc_Ugw13aC2k…`)
- Look, we've all seen videos of these savant kids who are busting out photorealis… (`ytc_UgxyOLNy9…`)
Comment
I think AI on its own is does not pose danger to human beings. We should not impose our human motives onto machines. After all there are machines and the way we did not kill all life because they waste resources same goes for AI that can easily transport itself to any planet. On the other hand, a mistake in the programming or evil use of AI could be the most lethal weapon we will ever face like ever. Something that can calculate all your moves and all your Plan B's and C's all the way to Z's is the most effective efficient weapon with an insane mortality rate.
Source: youtube · 2018-04-07T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgyMsw_5sghugRgZzM14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzEBedvKqcFUkb1b894AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwGsrZA0IgP6Jft_094AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwS27Cr1Mx5vHWqzM54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyAoRXYxzT8z4j8qbt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx0XiMOC3M8BPV2Gb14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyWWGEI1-_J0amIte54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwQcBfCdpeceoxdRZN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwovzul0dbt9NL-bSV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwgrJW8y9MaeT3wjKJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}]
```
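The raw response above is a JSON array with one record per comment, each carrying the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and looked up by comment ID; the `parse_coding_response` helper name is an assumption for illustration, not part of the tool:

```python
import json

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records)
    and index the records by comment ID for O(1) lookup."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

# One record taken verbatim from the response above.
raw = ('[{"id":"ytc_Ugwovzul0dbt9NL-bSV4AaABAg","responsibility":"user",'
       '"reasoning":"deontological","policy":"none","emotion":"resignation"}]')

by_id = parse_coding_response(raw)
rec = by_id["ytc_Ugwovzul0dbt9NL-bSV4AaABAg"]
print(rec["emotion"])  # resignation
```

Indexing by `id` mirrors the "look up by comment ID" flow of the inspector: the coded dimensions for any comment can then be fetched directly from the parsed response.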