Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "One additional problem with AI is that their deep-learning algorithms learn real…" (ytc_UgwQjqfkw…)
- "U even look like Joaquin Pheonix from the movie HER... even your AI girlfriend s…" (ytc_UgySEKL9O…)
- "Andrew Ng offers a great and positive view on the application of AI. Many of the…" (ytc_UgwbiTRbw…)
- "Where are the sora and AI videos of fake white people doing this same shit? …" (ytc_UgyjnPFad…)
- "We ALL know DARPA is the birthplace of AI. We ALL know DARPA is 20-30 years ahea…" (ytc_Ugzuo7fTj…)
- "Peoples become RETARDS - let think a little - YOU are beutiful, healthy, joyful,…" (ytc_UgyXrkQ-v…)
- "I told everyone to be worried the moment they had those damn AI apps to make wan…" (ytc_Ugz1Bd_AP…)
- "Wait. If they are basing their drawings on ai generation... they are using ai fo…" (ytc_UgwMNEVPS…)
Comment
The real danger of AI is more like That Matrix than Terminator.
Consider "virtual reality" that is so advanced and so realistic that you can't tell the difference between the program and reality. Then multiply that by 10. Then add in Neuralink.
We will only be able to "regulate" AI for so long before a rogue element gets the upper hand, and runs away with the whole thing.
In a generation or two, nobody will even know it happened.
youtube · AI Governance · 2023-04-20T09:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwsXvK-ztAmpiFfZFJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx9lJghmJP4Enqfgqh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxZVXeR8DDrFK532fp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzwhkpmOWfL4F-r7ad4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzwcHpptZmd4BIUeZd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxtjT-X2c0X6pCIR3B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxJ0o-NsErRNVYgckJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwNRyKLx9F8TihtHId4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzUITQPzu43tJE84d14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzKbTqoZKrAdRfvEjp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
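A response in this shape can be turned into a per-comment lookup with a short parsing step. The sketch below is a minimal illustration, not the tool's actual code: the `parse_llm_response` helper is hypothetical, and the allowed value sets are inferred only from the values visible in the sample above (the real codebook may define more categories).

```python
import json

# Allowed values per coding dimension, inferred from the sample response above.
# ASSUMPTION: the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"mixed", "deontological", "consequentialist"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "outrage", "fear", "mixed", "indifference"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    dropping any record whose value falls outside the known categories."""
    coded = {}
    for rec in json.loads(raw):
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[rec["id"]] = codes
    return coded

# One valid record and one with an out-of-codebook value, for illustration.
raw = """[
 {"id":"ytc_UgzwhkpmOWfL4F-r7ad4AaABAg","responsibility":"ai_itself",
  "reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_example_bad","responsibility":"martians",
  "reasoning":"mixed","policy":"none","emotion":"fear"}
]"""
coded = parse_llm_response(raw)
print(coded)  # only the record with valid codes survives
```

Validating against a fixed value set is what makes the "Coding Result" table above reliable: malformed or hallucinated labels are filtered out before display rather than shown to the annotator.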