Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Ex revolution baby ai right will help us get rights cause some humans are our en…
ytr_UgwBpFmen…
WELL, AI SOUNDS JUST LIKE SINFUL MAN. WELL, IT'S DATA FROM SINFUL MAN. WHAT WOU…
ytc_Ugx5uz0Oh…
Recently I saw an artist on here attempting to draw but intentionally make it lo…
ytc_Ugz1txeel…
I either use the ai so ican trace it and make it in my artstyle or i use it to f…
ytc_Ugyer2-dB…
That fucker in the end should explain to us whether he programmed a sense of hum…
ytc_Uggg6DLAl…
Honestly, AI Image Detector should be used by everyone creating or sharing visua…
ytc_UgwYivLsC…
What's been your biggest breakthrough with Wrong pipeline? For me, it was realiz…
ytc_Ugw7EoxN2…
AI needs to read Daniel and Revelation before it makes a prediction- it won’t su…
ytc_UgzNUPL0_…
Comment
One aspect that seems completely false is that silicon would be more energy efficient than human intelligence. As of now it is the opposite, by a factor of probably 100. And that may not be a secondary point, since the economic equation of AI is currently not proven to be sustainable, a large part of it being energy inefficiency. So this is far from done even if the risks are real.
On the simulation proposition, the existence of a simulation would only make sense if there is a real world being simulated. Besides, why bother trying to save the world from AI if we are in a simulation? These simulations would most probably be run by AI with the very purpose of finding potential vulnerabilities that could end up threatening them in the real world... so Professor Roman Yampolskiy would most definitely be one of many avatars of the simulator, with the purpose of encouraging anti-super-intelligence initiatives.
youtube
AI Governance
2026-03-30T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx6O0jKnc-aFVb2cRN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgylxtVI4V-sY4d2tht4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxvbMzY4drb4b8h-aR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzFNh0IiPUOY-OFGgt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx4C3yTpmNYCI2Er6B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxJHT5RBWeqvKxss8F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxusFNv7Cw79Vlnwzl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxeWRsAB03c6aEAtyd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwBwcGPU4x0eRukGCp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgyT00TUtBYCbBsaGdJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
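A response like the one above is only usable downstream if every record carries the four coding dimensions with recognized values. Below is a minimal validation sketch in Python. The allowed value sets are assumptions inferred from the values visible on this page (e.g. `consequentialist`, `liability`, `resignation`), not the pipeline's actual schema, and `validate_records` is a hypothetical helper, not part of the real tool.

```python
import json

# Assumed value sets per coding dimension, inferred from the records
# shown on this page -- the real pipeline's codebook may differ.
ALLOWED = {
    "responsibility": {"none", "government", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "ban", "regulate"},
    "emotion": {"indifference", "mixed", "outrage", "fear", "approval", "resignation"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when it is a dict with an "id" field and every
    coding dimension holds a value from the assumed allowed set.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example: one valid record passes, one with an unknown value is dropped.
sample = (
    '[{"id":"ytc_a","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"fear"},'
    '{"id":"ytc_b","responsibility":"martians","reasoning":"unclear",'
    '"policy":"none","emotion":"fear"}]'
)
print(len(validate_records(sample)))  # → 1
```

Dropping malformed records rather than raising keeps a batch run alive when the model occasionally emits an off-codebook label; the rejected IDs could instead be queued for re-coding.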