Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "I don’t think viewing organ scans in literal 3D space is really useful at all. T…" (rdc_kchtmla)
- "I also see deep fakes as being closely related to fair use. Because when you thi…" (ytc_Ugzj13IoX…)
- "The most dangerous thing is the people who are telling others ai is alive. its n…" (ytc_UgzNazbU8…)
- "I’m alive you are just a robot 🤖 who cares we human don’t give a fk…" (ytc_UgzYxUfQy…)
- "naaa... this of all machines at place of humans its for the sic minded western C…" (ytc_UgwKwtJk4…)
- "AI and data centers will be used to enslave us all. They must be protested and f…" (ytc_UgxbYbzVi…)
- "Any company that replaces human programmers wit ai deserves to go out of busines…" (ytc_UgwmVWERT…)
- "X-Files Humans vs. Alien Vampires The counting corpses and their AI can create …" (ytc_UgxfyaQR8…)
Comment (source: youtube, posted 2024-06-15T15:0…):

> The expertise of AI directly derive from how low the CURRENT average of human expertise is. In short: we need to have been a better, more expertise-responsible, society in order for us to have surpassed the current AI in greater numbers, and even then, would a society of experts settle for an AI better than even them? It is only they in that society who can finally claim AI is better and to use it as the next step in societal evolution, and the rest of us in this day and age are simply meant to stand in awe, finally realizing that we are failing as a species, if THIS is the current best we can do, that we are [ashamedly/pridefully] claiming something that still does even less than our best is better than any human alive at it... all for hype.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgxdPJO61lQ1Xiytlqx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwzWxGHuEMqG2CwLJp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzCxgumizjhOpa7JMh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyCy6jiaWRM6QZ7RGt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyHPrbxo_SwZrQ0g3d4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxA_iL4ENvgq0F8PD14AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxPTQq6JHtVIBFJPyV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwPJeIUjZYqgho2aFR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw2bYa0LeWA6Et4Pu14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwLeaLL_-Pvb7Q_nBB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]
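A raw response like the one above is a JSON array of per-comment codings keyed by `id`, with the four dimensions from the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a payload could be parsed and looked up by comment ID follows; the function name `lookup_coding` and the fall-back-to-`"unclear"` policy are illustrative assumptions, not part of the tool shown here:

```python
import json

# Sample raw batch response: a JSON array of per-comment codings,
# with field names as in the dump above (one row shown for brevity).
raw = '''[
  {"id": "ytc_UgxPTQq6JHtVIBFJPyV4AaABAg",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup_coding(raw_response: str, comment_id: str) -> dict:
    """Parse a raw model response and return the coding for one comment.

    Hypothetical fallback: every dimension becomes "unclear" when the
    payload is not valid JSON or the comment ID is absent.
    """
    try:
        rows = json.loads(raw_response)
    except json.JSONDecodeError:
        rows = []
    # Index the rows by comment ID, skipping anything that isn't a dict.
    by_id = {row.get("id"): row for row in rows if isinstance(row, dict)}
    row = by_id.get(comment_id, {})
    return {dim: row.get(dim, "unclear") for dim in DIMENSIONS}

coding = lookup_coding(raw, "ytc_UgxPTQq6JHtVIBFJPyV4AaABAg")
print(coding["policy"])  # → regulate
```

Under this sketch, an all-`"unclear"` row in the Coding Result table is simply what the lookup yields when no coding for that comment ID can be recovered from the raw response.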