Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
As long as they don't have any stupid "safety" shit I'm cool. We should keep dev…
ytc_Ugyud0ryN…
Not afraid if maker has make AI because at part other, this also very good for b…
ytc_UgwVm5Cx_…
If AI gets smarter than us, and finds it has no use for us, it'll just leave.
…
ytc_UgxGDrGme…
Imagine AI as an actual person and you as his friend. Now, he tells you what to …
ytc_UgxhPoTs_…
Interestingly that was my first conversation with AI but with a total opposite p…
ytc_UgxRlh-oG…
Laws preventing AI to develop in certain ways, should have been discussed by gov…
ytc_UgzTpc7V_…
😍😍😍 AI regulation ...yess 💞💞 I'm optimistic about AI bringing positive things an…
ytc_Ugzkmclzx…
So we will a moral AI to combat they AI of Governments, Corporations and Organis…
ytc_UgwJAZn1e…
Comment
Musk, his abomination creations and others that create abominations, such as A.I. etc, will all be washed away by a bigger power soon.
Then things will return back to the way they should be, without the likes of Musk, Bezos, Gates, etc, because it is those people who you look upto, that should not be looked upto. Do they help you?. NOPE, so do not help them, it is quite simple how to control them.
youtube
AI Harm Incident
2024-01-02T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugz-9uj85q45g8pz7dN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwZHQmChnBMMryxj3F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzXtMNnuwsq1HJINfp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxINDYGPlH_WVW8zgl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzz8enepZVx4ceD9jx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxYQS_OKgFAFsc9rlx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgycmqxPGMHWbsq3kSV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwiX4WSZwCA9z8ShRl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzyb3nGB78P2ulWr2R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx1nR4BuMcxVOlRsUd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
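The raw response above is a JSON array of per-comment codes, one record per comment ID. A minimal sketch of parsing such a response and looking up a comment's codes by ID — the allowed value sets below are inferred from this sample alone and the real codebook may contain additional categories:

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# above; the actual codebook may define more categories than appear here.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "approval", "indifference"},
}

def parse_coding(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID,
    rejecting any record whose value falls outside the known codebook."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Look up a single comment's codes by ID (hypothetical ID for illustration):
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"virtue","policy":"ban","emotion":"outrage"}]')
coded = parse_coding(raw)
print(coded["ytc_example"]["policy"])  # → ban
```

Validating against the codebook at parse time catches the common failure mode where the model invents an off-codebook label, so a bad record fails loudly instead of silently entering the coded dataset.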