Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgzDvi7mS…: "Holy crap, the person who actually thinks themselves a God just for using AI ar…"
- ytr_UgwMeQYYs…: "You raise an important point about the need for ethical frameworks in AI develop…"
- ytr_UgzUHQb3S…: "Lost in Space Will Robinson. Seven of Nine from the Borg. Who will be the Jesus …"
- ytr_Ugx3Zn2Wg…: "They should have to watermark these as AI generated. As to not spread fear and …"
- ytr_UgxrQGo9V…: "We appreciate your concerns about the impact of AI and robots on the future job …"
- ytc_UgyBwJTKH…: "We should put AI, humanoid robots, and profit before people and the planet. Tha…"
- ytc_Ugzxg-rL2…: "This is where leadership becomes critical. AI will continue to evolve, but the a…"
- ytc_UgwyQbWgY…: "Right now Gemini 3 pro is the only unlimited and smarter model than chatgpt 5…"
Comment
> If a telescope found an alien space ship heading for earth that would be here in 6 years, we would be doing all manner of things to prepare for any possible result.
> Yet, that is exactly what AI is. An alien intelligence that CAN turn against humans. Almost ALL of the current AI, when tested with certain questions, responded that humans should be eliminated!
> That is a FACT.
> Yet, the only involved human pushing for regulation and controls for AI is Elon Musk and he is practically insane and puts very little falue on humans.
> So even though we KNOW, for certain that AI is inherrently extremely dangerous, we are balls to the wall rushing to make it as smart as possible, as SOON as possible.

Source: youtube · Video: Viral AI Reaction · Posted: 2025-11-04T20:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugw_qEfpYNHNC2LvdXx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz7uxu1x01m5TbqRR94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwieGfoxnGHMqLeX5V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgwVKgaw08ilJNManJl4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw96zZ1FL9v2xFoBhl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugxk4Zzij8oLfBnKCvp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxzntRIAu6xJKorBjR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzF_p2UsQdiZyrj33l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzTrlXwr_bpUl6YMIV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzDBW-GKwoe0GrVDdx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
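The raw response above is a JSON array of per-comment codes along the four dimensions in the table (responsibility, reasoning, policy, emotion). A minimal sketch of parsing and validating such a response before it is stored, where the allowed label sets are assumptions inferred only from the values visible in this response, not an exhaustive codebook:

```python
import json

# Allowed labels per dimension. NOTE: these sets are inferred from the
# single response shown above; the real codebook may contain more labels.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "regulate", "industry_self"},
    "emotion": {"approval", "mixed", "fear", "indifference", "outrage"},
}

def parse_codes(raw):
    """Parse the model's JSON array into {comment_id: codes}, skipping
    entries with a missing ID or an out-of-vocabulary label."""
    out = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if not cid:
            continue
        codes = {dim: entry.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            out[cid] = codes
    return out

raw = ('[{"id":"ytc_Ugxk4Zzij8oLfBnKCvp4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes["ytc_Ugxk4Zzij8oLfBnKCvp4AaABAg"]["policy"])  # regulate
```

Dropping (rather than repairing) out-of-vocabulary entries is one defensible choice here: it keeps the coded dataset clean and lets the offending comment IDs be re-queued for another coding pass.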