Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Why are you assuming its good? What proof do we have its good or bad. Literall…" (`rdc_hn14ii0`)
- "I’d like to know, if AI was our world leader, what would its objective be and wo…" (`ytc_UgxICU3dq…`)
- "@SweeperKeeperShorts It's going to take jobs. Do you not understand that? Nobody…" (`ytr_UgxdacoYT…`)
- "I always compare AI art to microwave food. Everybody has a microwave at home and…" (`ytc_Ugy8VXNE7…`)
- "Artists copy each other's work all the time to train themselves... An AI is only…" (`ytc_UgyLw62iJ…`)
- "I feel theres nothing wrong with using Ai art for creative means, such as art fo…" (`ytc_Ugy-ooQu8…`)
- "We need AI that we can control, and which can help us create heaven on earth! No…" (`ytc_UgwZM902e…`)
- "@RichardBaran Yes and not just that Elon has been falsely claiming that they'd h…" (`ytr_UgyU_4-e2…`)
Comment
"Here's what happens when people tend to get mislead by that idea:
- Some websites and influencers incorrectly suggest...
- Bromide used to be in medicines...
- Bromide is available online..."
All sounds like a big hindsight realisations/defence claim from the ChatGPT model after acknowledging that it may have assisted in the demise of a man seeking health advice.
Them not owning up to this mistake is going to be the reason this happens time and time again with different experimental ideas because the users of their product aren't made aware of the true dangers of AI if believed word-for-word.
| Source | Incident | Posted |
|---|---|---|
| youtube | AI Harm Incident | 2025-11-26T03:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
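A record like the one above can be sanity-checked against the coding scheme before it is stored. A minimal sketch, assuming per-dimension value sets: the sets below contain only values visible in this dump, so the real codebook may be larger.

```python
# Allowed values per coding dimension, assembled from values visible
# in this dump -- an assumption; the actual codebook may contain more.
CODEBOOK = {
    "responsibility": {"user", "company", "government", "parent", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with a coded record (empty if valid)."""
    problems = []
    if "id" not in record:
        problems.append("missing comment id")
    for dim, allowed in CODEBOOK.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coded record matching the Coding Result table above:
record = {"id": "ytc_Ugw90k1sA4VcITDq-dJ4AaABAg",
          "responsibility": "ai_itself", "reasoning": "deontological",
          "policy": "liability", "emotion": "outrage"}
assert validate(record) == []
```

A record that passes returns an empty problem list; an off-codebook value is reported with the dimension it failed on.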
Raw LLM Response
```json
[{"id":"ytc_UgxL1wPpxnFh4A_wKoN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzoTwDjlRZkt60euMl4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz5GycpaaR9b1fKcNJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyiZ2iBj_awRMB-XSd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw90k1sA4VcITDq-dJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwrmsOItWCgKTO6b3p4AaABAg","responsibility":"parent","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzxZGgs6zEnK5UHQvR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyu5nO9VkBTPvSWBuV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyEF5Df4yg-M8fLojJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyfoINCuwy8Z3EtX6J4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}]
```
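The "look up by comment ID" flow can be sketched as parsing the batch response once and indexing it by `id`. A minimal sketch using the first two records of the raw response above; the tool's actual storage layer is not shown in this dump.

```python
import json

# First two records of the raw batch response shown above.
raw = ('[{"id":"ytc_UgxL1wPpxnFh4A_wKoN4AaABAg","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"},'
       '{"id":"ytc_Ugw90k1sA4VcITDq-dJ4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')

records = json.loads(raw)
by_id = {r["id"]: r for r in records}  # build the index once, then O(1) lookups

coded = by_id["ytc_Ugw90k1sA4VcITDq-dJ4AaABAg"]
print(coded["responsibility"], coded["policy"])  # -> ai_itself liability
```

Indexing by `id` in a dict keeps repeated inspections cheap regardless of batch size.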