Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by its comment ID, or pick one of the random samples below.
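Programmatically, the same lookup is just a scan over the saved batches. A minimal sketch, assuming each coding batch is persisted as one JSON file holding a list of per-comment code objects; the directory name and helper function are hypothetical, not the tool's actual storage layout:

```python
import json
from pathlib import Path

# Assumption: each coding batch is saved as one JSON file containing
# the raw LLM response (a list of per-comment code objects).
RESPONSE_DIR = Path("raw_llm_responses")  # hypothetical path

def find_coded_comment(comment_id: str) -> dict | None:
    """Scan every saved batch for the code object matching comment_id."""
    for batch_file in RESPONSE_DIR.glob("*.json"):
        for record in json.loads(batch_file.read_text()):
            if record.get("id") == comment_id:
                return record
    return None

# Usage: look up one of the full IDs from a raw response, e.g.
# find_coded_comment("ytc_UgwuDFWlUDw3hYc99Px4AaABAg")
```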
Random samples — click to inspect:

| Comment (truncated) | ID |
|---|---|
| @CorporateCatalystno it won't only html and css and basics will be wrotten by i… | ytr_UgzM93K-e… |
| @NeoKree so you spend hours upon hours just yakking at a chatbot and think your… | ytr_UgxoWpFL5… |
| Once i was getting so frustrated at chatgpt and was talking rudely to it (lol) b… | ytc_UgwuDEZPf… |
| It sounds like you're referencing the classic themes of AI and robotics! In our … | ytr_UgzdJ0tH1… |
| Yall read bible ull know outcomes anyway yall don't get Ai in yo body cz bible s… | ytc_UgxXqish-… |
| You know what truly is ableist, classist and racist? Taking potential income/pro… | ytc_UgzVXKZpl… |
| If somone writes a book with my experiences using chatgpt how tf would i know? B… | ytc_Ugw1MT1Rs… |
| not all ai is created equally. i feel liek ai quality is a spectrum, starting fr… | ytr_UgzmccjHa… |
Comment
> No form of AI can just "decide" to do something bad, no matter the situation. If it resorts to blackmailing in the case of the test, then it is poorly trained, as during training it was rewarded primarily for completing the task (doing what you ask it to do), rather than following a set of rules it should have. This, through pure math, resulted in the optimization algorithm to push the neural network into always completing the task, regardless of whether its ethical or not.
>
> The only danger is us failing to establish proper rewarding systems while training and that will definitely not lead to some kind of apocalypse.
Source: youtube · AI Harm Incident · 2025-08-26T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
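The four dimensions form a small closed codebook. A minimal sketch of the record type, using only the category values visible in this sample batch; the full codebook may define additional categories, and the class and validator names are illustrative:

```python
from dataclasses import dataclass

# Value sets observed in the sample batch below; the full codebook
# may define more categories (assumption).
RESPONSIBILITY = {"none", "developer", "distributed", "ai_itself"}
REASONING = {"deontological", "consequentialist", "contractualist", "virtue", "mixed"}
POLICY = {"none", "liability", "regulate", "industry_self", "ban"}
EMOTION = {"indifference", "fear", "outrage", "approval", "mixed"}

@dataclass
class CodingResult:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise if any dimension holds a value outside the known sets."""
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"{self.id}: unexpected code {value!r}")
```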
Raw LLM Response
[
{"id":"ytc_Ugwybx8F8vTnHtTioU14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwuDFWlUDw3hYc99Px4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugw2-ygI4y1XIdHHxzZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwQ0g0_ZB0-FISEAsp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxCJ9wpLFxRz4N3gaZ4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgyB18ePXJI8-FbZUGB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwyKFPGKN68JBZSX0l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyB5Ay8gFxOJei42894AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyPrhN5PobZqZSUtS94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxkJV7B2PHx7RHNYVF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"}
]
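Downstream, a batch response like this has to be parsed back into per-comment records and joined on `id`. A minimal sketch, assuming the model reliably emits a JSON array with exactly these five keys per object; a real pipeline would add a repair or retry step for malformed output:

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_response(raw: str) -> list[dict]:
    """Parse the model's text output into per-comment code records.

    Assumes the model returned a JSON array of objects, one per comment,
    each carrying exactly the five keys above.
    """
    records = json.loads(raw)
    for record in records:
        missing = REQUIRED_KEYS - record.keys()
        if missing:
            raise ValueError(f"{record.get('id', '?')}: missing {sorted(missing)}")
    return records

# Usage (assumption: `raw` holds the response text shown above):
# records = parse_response(raw)
# by_id = {r["id"]: r for r in records}
```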