Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- No it’s already hacked ai have capabilities to destroy everything ppl need to pa… (`ytc_UgzrAiyqF…`)
- Wrong. The two are completely different. FSD is a highly specialized AI, it has … (`ytr_Ugwprg8qt…`)
- Love him or hate him, gotta say Elon has consistently been the most level-headed… (`ytc_Ugz_yVmCC…`)
- There is a difference between being reeeeally good at one thing and being good a… (`rdc_ioec5ar`)
- The more technology we use to increase productivity the less a human labour is v… (`ytc_UgxvMzh5G…`)
- I'd say the fear is covid could have very well set the ball in motion. Businesse… (`rdc_glidasa`)
- Please, ChatGPT, read the entire Bible (Old and New Testament) and answer me obj… (`ytc_UgwmdZzbI…`)
- every single ai debate in comment sections somehow ends up with people mentionin… (`ytc_UgzmohQeU…`)
Comment

> Within a decade or so, the hardware required for training of AI models will be "cheap" enough, that bad actors can train AI's from the ground up to do violent things, such as design perfect viruses that kill all humans, chemical weapons that permanently damage our DNA, computer viruses on demand that wreak havoc on financial networks, etc...
> This is just one of many ways this goes sideways fast.

Source: youtube · Posted: 2024-06-13T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgywyNkEJJTP-cET_9l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy-M0Ls8ztQaqIeYR14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzk8tfA_XBVFPvGiud4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwptr3ij6Bh0ojGKsN4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwgNwU9DjmJoMz5Aph4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwUyQGutFz8rvH9AiV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyA1-Wkdb7wKrcTTsp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwgm1XB0kPy7Fj8jcl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgydrgoesQvcBzt3HhN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwuKgsdaExoxWJc5JJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
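The look-up-by-comment-ID step can be sketched as follows: parse the raw LLM response (a JSON array of per-comment codings, as above) and index it by the `id` field. This is a minimal illustration, not the tool's actual implementation; the `lookup` function name and the two-row excerpt are assumptions for the example.

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codings.
raw_response = """
[
  {"id": "ytc_UgydrgoesQvcBzt3HhN4AaABAg",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwuKgsdaExoxWJc5JJ4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]
"""

# Index the codings by comment ID so each look-up is a dict access.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions (responsibility, reasoning,
    policy, emotion) for a single comment ID."""
    return codings[comment_id]

coding = lookup("ytc_UgydrgoesQvcBzt3HhN4AaABAg")
print(coding["policy"], coding["emotion"])  # regulate fear
```

Indexing once and reusing the dict keeps repeated inspections cheap even when one LLM response codes many comments in a single batch.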