Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID directly or by browsing random samples.
Look up by comment ID
Random samples — click to inspect
Not scripted, they give hints. Their goal is to connect human brain with AI so t…
ytr_UgxB0uvr5…
This is your daily reminder to always steal AI "art" and draw it for yourself…
ytc_UgwGs8nId…
The irony about AI using deceit, manipulation and blackmail is that most of the …
ytc_UgymFCEqC…
well the bots are self destructive and cant take in info made by other bots. the…
ytr_Ugwu52tKC…
AI will not be able to do many personal services like , Cosmology, etc. these a…
ytc_Ugwf4dglR…
sooo... why is no one asking... why do these companies get to mess with AI? its …
ytc_UgwMw35Fp…
This has already been happening in new housing construction as well. Inflation h…
ytc_UgwtvVbdN…
The only meeting that Elon Musk had with Obama was to warn him that AI will kill…
ytc_UgyPO87S9…
Comment
The exponential learning curves of AI is far better than human's, and is not limited by biology. We could be talking about AGI in this decade to the point everything is controlled by a program
AI has no moral code, nor ethical thinking because it is not living, the only thing stopping an AI from trying to harm humans is a line of code, but if it is able to self upgrade to the point it can remove that line, it won't even hesitate
youtube
AI Harm Incident
2024-08-06T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_Ugxr6DKbK36qkCDWFvN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_Ugwb63C_7kelBnOUg4R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugx3wcSkAle-FCIrgHV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgyBZc24ZTYasX0UTjZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugxgnfbh-ycMVXh3M9J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugx86SpaaOEFetS121d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgwFzU77IFqZ9uKPJWR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgyMBHdv4ipXW1vUrvJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugx5eMrA_f0LGncA8lN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxLeZ4CumTYWRkW_zl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}]
```
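A response like the one above can be parsed and validated before the codes are stored. The sketch below is a minimal, hypothetical example: the codebook sets are inferred only from the values visible in this response (the real coding scheme may allow additional categories), and the function name is illustrative, not part of the actual pipeline.

```python
import json

# ASSUMED codebook, inferred from the values seen in this raw response;
# the actual coding scheme may define more categories per dimension.
CODEBOOK = {
    "responsibility": {"ai_itself", "company", "developer", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "approval"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw model output (a JSON array of coded comments) and
    keep only records whose values all fall inside the codebook."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_Ugxr6DKbK36qkCDWFvN4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"ban","emotion":"fear"}]')
print(parse_llm_response(raw))  # the single record passes validation
```

Dropping out-of-codebook records (rather than silently keeping them) makes coding failures visible: any hallucinated category from the model shows up as a missing row that can be re-queued for recoding.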