Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Ai, deep fake, virtual reality, Demon, Anti-christ, all of these are true signs…" (ytc_UgwvWIFZ-…)
- "We already have, and have had for the most of human history, such manipulation a…" (ytc_Ugw0eean9…)
- "Not long ago people were laughing at the EU's AI Act, this is one of the reasons…" (rdc_melacr5)
- "AI will take every job and will take care about energy supply to run AI. Yeah…..…" (ytc_UgxqePemd…)
- "Driverless cars are flawed and cause more accidents, and these people are saying…" (ytc_Ugx-ywElv…)
- "Yup, we’ve crossed the threshold into that for some time now. It’s basically gon…" (rdc_mjx6v2m)
- "is it bad that i could've been convinced this was real if it was only one robot…" (ytc_UgyBPYjcl…)
- "The car should be automated as it seems to be getting safer than human driving (…" (ytc_UgiwJ2F2f…)
Comment (source: youtube, posted 2025-01-10T20:5…)

Thanks for helping focusing on this important topic.
I also think SAGI is the end for humans unless we become transhumans (and even doing so, we wouldn't be able to compete with robots with SAGI).
I feel that a possible solution is to remove all multimodal LLM models (AGI) from the public and enforce using multiple "task oriented SLMs", governed by humans (multiple AI agents coordinated by humans), with safety gates at every agent connection could be a possible option.
And of course stop chasing SAGI models, those investigations should be banned as researching on developing biological weapons
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyU7l9A_1muHLAQMdx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxUnazbFeuWL0pIXZN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz1duy4r3L69ffEZAd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxPOvDlR8RspM7dGO94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz7u3qZGFad9b45Xtd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzCxT1gu-yU0LfxDXl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwixxa_D5dPM2diiKd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwWUgc286ZJGAchseZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyuIljFsKBeJpbWNeR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzrd39tEyjjXSkYrPl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
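Responses like the one above are only useful downstream if every record carries a comment ID and a legal value for each coding dimension. A minimal validation sketch is shown below; the allowed-value sets are inferred from the sample output above (the full codebook may define additional categories), and `validate_response` is a hypothetical helper name, not part of any pipeline shown here.

```python
import json

# Allowed values per coding dimension, inferred from the sample
# response above -- assumed, not the definitive codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "developer"},
    "reasoning": {"unclear", "mixed", "deontological", "consequentialist"},
    "policy": {"none", "unclear", "ban", "regulate"},
    "emotion": {"indifference", "mixed", "outrage", "fear", "approval"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject malformed records."""
    records = json.loads(raw)  # raises json.JSONDecodeError on bad JSON
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing comment id: {rec}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {value!r}")
    return records

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
records = validate_response(raw)
print(len(records))  # one valid record passes
```

Validating before storing means a truncated or hallucinated response fails loudly at ingest time rather than silently corrupting the coded dataset.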